AI and the Law: 2019 in Review

In 2019, the discussion focused on AI taxonomy, defining AI, the role of explainable AI (XAI), and the question of AI entity rights. Below is a quick look at some of the main discussion points.

Do AI Entities Need Rights?

  • There is no logical driver for granting AI rights.
  • Human-based incentives do not translate well into a non-human framework.
  • We need to answer the “why” it is useful to grant AI entities rights before doing so.
  • A number of postscripts dealt with the EU stance on rights, how AI entities would enforce IP rights (they can’t), and whether a hybrid version of the work-made-for-hire principle, in favor of the AI’s owner/creator, would be appropriate.

Mitigating Liability with XAI: The Case for Standardization

  • A properly developed XAI is one that contains “perfect” information (which is explained below).
  • XAI can be used as a liability shield.
  • Standardizing XAI is important for promoting outcome predictability.
  • A postscript dealt with increased scrutiny of the use of AI in privacy applications as a standardization driver.

The Role of Explainable AI (XAI) in Regulating AI Behavior: Delivery of “Perfect” Information

  • XAI plays an important part in regulating AI behavior.
  • “Perfect” information is information that is: (1) relevant, (2) easily understood, and (3) not prone to misrepresentation.

Defining Artificial Intelligence

  • AI is often hastily and carelessly defined as an application that possesses human-like processing capabilities. That definition is only partially accurate.
  • The common denominator across AI applications is an agent that receives percepts from its operational surroundings and performs actions in response.
  • Exaggerating an application’s capabilities by anointing it with AI powers can be legally risky for the licensor.
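The agent definition above (percepts in, actions out) can be sketched in a few lines of Python. This is an illustrative toy only; the thermostat example and all names in it are my own, not from the original post. Its point is that the definition does not require human-like processing: even a trivial reflex device satisfies it.

```python
# A minimal sketch of the agent model: an entity that receives
# percepts from its environment and performs actions in response.
# ThermostatAgent and its names are hypothetical, for illustration only.

class ThermostatAgent:
    """A trivially simple reflex agent: percept in, action out."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def act(self, percept: float) -> str:
        # The percept is the current temperature reading;
        # the action is a simple control decision.
        if percept < self.target_temp:
            return "heat"
        if percept > self.target_temp:
            return "cool"
        return "idle"

agent = ThermostatAgent(target_temp=21.0)
print(agent.act(18.5))  # a cold room yields "heat"
```

Note that nothing in this loop is "intelligent" in the human sense, which is exactly why labeling such an application as AI, and exaggerating its capabilities, carries legal risk for the licensor.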

Artificial Intelligence and the Law: Five Observations

  • AI is not a type of application. It is an application-enabling infrastructure.
  • Having a universally-accepted AI taxonomy is a first step in establishing an AI-relevant legal framework.
  • ISO, NIST, IEEE, DARPA, IARPA, etc., standards should be synchronized with AI law.
  • Maintaining AI-related supply chain transparency and accountability will become more challenging and complex, but blockchain-enabled smart contracts can help manage the risk.
  • A number of postscripts touched on “blind execution” as less desirable in complex AI systems (in contrast to rules-based expert systems); the increasing problem of deep fakes; the risk of using brain-machine interfaces and the connection to the use of XAI; certifying AI training as a behavior-regulating policy; and using NIST metrics to drive AI performance audits.

Mapping Artificial Intelligence Taxonomy

  • Mapping Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 to the Level A through D AI taxonomy.
  • AI ethics is driven by a power-to-complexity ratio.

NIST and the Development of Artificial Intelligence (Common Law)

  • The common law vacuum strengthens the role of AI-centric standards.
  • Predicting that one of NIST’s deliverables under the Executive Order on Maintaining American Leadership in Artificial Intelligence will include an AI application taxonomy, likely containing principles similar to those I proposed at the SLS 2012 IP Scholars’ Conference.