I gave three AI and the Law presentations in the past seven days and have three more lined up in the next few months: two at Stanford and one in Miami. The legal profession is undoubtedly interested in this topic.
So I decided this morning to jot down some of the more intriguing issues that came up. I do want to note that although the title says this is a “top 10” list, there are many more issues than ten; I will keep it to just 10 here.
- There is a fractal quality to this area of law. Similar (of course, not identical) to the Mandelbrot Set, there are countless iterations of, and connections to, legacy contractual principles. No matter how complex the technology involved, the connection to well-established legal paradigms can be traced, over and over again. I am fascinated by the prospect of mathematically representing the law of AI, and this may very well be another paper.
- The AI taxonomy I proposed in 2012 is taking hold in the AI field. (I divided AI apps into categories according to a computational capability continuum and mission. You can read more about this framework here, and here.) The AI taxonomy is useful for providing the legal system with a reference point that can guide a wide variety of decisions, from legislative to contractual (more on that in point number 3). In 2014, two and a half years after my presentation, the Society of Automotive Engineers (SAE) drafted a strikingly similar classification, the “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.”
- When licensing AI systems, creative contract drafting is required. And since every licensing deal is essentially an exercise in risk-shifting, the AI taxonomy can be used by drafters to logically assign liability between the parties.
- The “AS-IS, WHERE IS, WITH ALL FAULTS” disclaimer should be used carefully, as it may have unintended consequences. For example, in Level D apps, which have learning and sophisticated autonomous capabilities, this disclaimer may be meaningless unless the licensor is able to assign a time-stamp, restricting those warranty boundaries to a specific moment in time.
- Vicarious liability is an interesting concept, but remains purely academic at this point. The Restatement (Third) of Agency does not accommodate an AI as a fiduciary.
- Hadley v. Baxendale remains as relevant as ever, even more than 160 years later. This is a great example of how legacy contractual principles remain connected to complex technology deal settings.
- How would you prove a developer/coder was grossly negligent in designing an AI app? When we are dealing with a Level C or D/E app, how do we assign a reasonable coder standard, one that properly balances all interests? One solution is to use an iterative liability principle, which I first wrote about in 2011.
- AI cannot infringe. AI has no legal rights. AI does not own intellectual property it creates.
- The Naruto v. Slater (the monkey selfie) case is instructive vis-à-vis AI ownership of intellectual property. The case involved the question of whether a monkey could own the copyright in, and control the distribution of, its selfie. The case settled, but the courts also made it clear that copyright law did not recognize non-human ownership.
- AI can help mitigate the confidentiality, integrity, and availability threats that are going to plague the IoT ecosystem. It will do so through AI-enabled computational law apps (CLAI) that help educate consumers, manufacturers, regulators, and lawmakers.