AI and the Law: A Compendium of Key Observations from Presentations and Writings in 2018

The following highlights key observations from my AI presentations and writings over the past 12 months.

  • The persistent AI common law vacuum strengthens the role of applicable standards, lending a “surrogate” quality to their part in ultimately defining what is and is not acceptable or permissible in AI design. The longer this void persists, the more likely it is that these standards will morph into common law, with only a few perceptible changes, mostly reflecting linguistic adaptation to a legal framework. A similar phenomenon can be seen in the Federal Trade Commission’s cybersecurity enforcement, which has turned NIST’s Cybersecurity Framework into de facto common law.
  • AI can help mitigate the confidentiality, integrity and availability threats that will plague the IoT ecosystem. It will do so through AI-enabled computational law apps (CLAI) that help educate consumers, manufacturers, regulators and lawmakers. For more on CLAI, see Artificial Intelligence and Computational Law: Democratizing Cybersecurity.
  • AI capabilities in data security and privacy are diverse and hold promise for delivering robust protections. Even though recent privacy-centered legal frameworks, specifically the GDPR and the CCPA, do not call for the use of AI, it is not inconceivable that they will be amended to follow a HIPAA-like principle: making AI adoption an “addressable” implementation specification (as HIPAA does with encryption) and requiring, in its absence, that covered organizations implement compensating controls.
  • Iterative liability (a concept I introduced in 2011 in “The Case for Robot Personal Liability: Part II – Iterative Liability”) ties the programmer’s liability to the evolution of a Level D application’s algorithm, not to the AI application’s chain of custody and use.
  • While much of the focus in AI discourse centers on building intelligent entities, not all of these entities need to think on a human-like scale to qualify as AI. There is ample room for entities with “non-human” computational capabilities.
  • There is a fractal quality to AI and applicable law. In a manner similar (though of course not identical) to the Mandelbrot set, there are countless iterations of, and connections to, legacy contractual principles. No matter how complex the technology involved, the connection to well-established legal paradigms can be traced, over and over again. (For the simple iterated rule behind the analogy, see the note following this list.)
  • The AI taxonomy I proposed in 2012 is taking hold in the AI field. (I divided AI apps into categories according to a computational capability continuum and mission. You can read more about this framework here and here.) The AI taxonomy is useful for providing the legal system with a reference point that can help guide a wide variety of decisions, from legislative to contractual (more on that below). In 2014, two and a half years after my presentation, the Society of Automotive Engineers (SAE) drafted a strikingly similar classification, the “Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems.”
  • Licensing AI systems calls for creative contract drafting. And since every licensing deal is essentially an exercise in risk-shifting, drafters can use the AI taxonomy to logically assign liability between the parties (see the illustrative sketch following this list).
  • How we think about the “AS-IS, WHERE IS, WITH ALL FAULTS” disclaimer is likely to change.
  • Vicarious liability is an interesting concept as applied to AI, but it remains purely academic at this point. The Restatement (Third) of Agency does not accommodate an AI as a fiduciary.
  • How would you prove that a developer or coder was grossly negligent in designing an AI app? If we are dealing with a Level C or D/E app, how do we define and assign a reasonable-coder standard, one that properly balances all interests? One solution is to apply the iterative liability principle, which I first wrote about in 2011.
  • Under current law, AI cannot infringe. AI has no legal rights. AI does not own the intellectual property it creates.
  • Artificial entities, such as corporations, have rights and obligations. So why not AI? There is an important public policy issue on the table: is it necessary or beneficial to endow AI with any rights?
  • The Naruto v. Slater (“monkey selfie”) case is instructive vis-à-vis AI ownership of intellectual property. The case asked whether a monkey could own the copyright in, and control the distribution of, a selfie it took. The parties settled, but the court nonetheless made clear that copyright law does not recognize non-human ownership.
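A note on the Mandelbrot analogy above: the Mandelbrot set is generated by endlessly iterating a single elementary rule,

$$z_{n+1} = z_n^2 + c, \qquad z_0 = 0,$$

where a complex number $c$ belongs to the set if the sequence stays bounded. One simple rule, iterated without end, yields unbounded complexity, and that is precisely the quality the analogy borrows: a small set of settled legal paradigms, applied iteration after iteration, can reach arbitrarily complex technology.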
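And to make the risk-shifting point concrete, here is a minimal sketch, in Python, of how one might encode a level-based taxonomy and attach default liability positions to it during deal analysis. The level labels and the allocations for Levels A through C are hypothetical placeholders, not the taxonomy's actual criteria or a drafting recommendation; the Level D entry simply restates the iterative liability point made above.

```python
from enum import IntEnum

class AILevel(IntEnum):
    """Placeholder levels along a computational capability continuum.

    The substantive criteria for each level come from the taxonomy
    discussed above, not from this sketch.
    """
    A = 1
    B = 2
    C = 3
    D = 4

# Hypothetical opening positions for risk-shifting negotiations.
# The one grounded entry is Level D: iterative liability ties the
# programmer's exposure to the algorithm's post-deployment evolution,
# not to the application's chain of custody and use.
DEFAULT_POSITION = {
    AILevel.A: "licensor bears design-defect risk",
    AILevel.B: "licensor bears risk, with negotiated carve-outs",
    AILevel.C: "risk shared; allocation tracks training and deployment roles",
    AILevel.D: "programmer exposure follows algorithmic evolution (iterative liability)",
}

def drafting_baseline(level: AILevel) -> str:
    """Return the hypothetical starting point for a liability clause."""
    return f"Level {level.name}: {DEFAULT_POSITION[level]}"

if __name__ == "__main__":
    for level in AILevel:
        print(drafting_baseline(level))
```

The modest point of such a structure is the one made in the list above: a shared, ordered taxonomy gives drafters a logical reference point from which to negotiate departures, rather than allocating liability ad hoc.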