Mapping Artificial Intelligence Taxonomy

Section 238(g) of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 offers five definitions for artificial intelligence. These, in turn, can be mapped to the AI taxonomy. (Italics = the Act, bold = the AI taxonomy.)

The following represents a first pass at mapping these definitions and may be revised later; a short code sketch summarizing the mapping follows the list.

(1) *Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.* **Levels B through D.**

(2) *An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.* **Levels C and D.**

(3) *An artificial system designed to think or act like a human, including cognitive architectures and neural networks.* **Levels C and D.**

(4) *A set of techniques, including machine learning, that is designed to approximate a cognitive task.* **Levels B through D.**

(5) *An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision making, and acting.* **Levels B through D.**
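To make the mapping easier to reference, the sketch below encodes it as a simple lookup table. This is a minimal illustration only: the definition summaries are abbreviated paraphrases of the Act, and the level identifiers (A through D) are assumed from the AI taxonomy referenced above rather than defined by the Act itself.

```python
# Minimal sketch: the Section 238(g) definitions mapped to assumed
# AI taxonomy levels (A-D). Level identifiers are placeholders taken
# from the taxonomy referenced above, not defined by the Act itself.
SECTION_238G_MAPPING = {
    1: {"summary": "performs tasks under varying, unpredictable circumstances "
                   "without significant human oversight, or learns from data",
        "levels": ["B", "C", "D"]},
    2: {"summary": "solves tasks requiring human-like perception, cognition, "
                   "planning, learning, communication, or physical action",
        "levels": ["C", "D"]},
    3: {"summary": "designed to think or act like a human (cognitive "
                   "architectures, neural networks)",
        "levels": ["C", "D"]},
    4: {"summary": "techniques, including machine learning, that approximate "
                   "a cognitive task",
        "levels": ["B", "C", "D"]},
    5: {"summary": "designed to act rationally (intelligent software agents, "
                   "embodied robots)",
        "levels": ["B", "C", "D"]},
}


def levels_for(definition: int) -> list[str]:
    """Return the taxonomy levels mapped to a given Act definition (1-5)."""
    return SECTION_238G_MAPPING[definition]["levels"]


if __name__ == "__main__":
    for number, entry in SECTION_238G_MAPPING.items():
        print(f"Definition ({number}): levels {', '.join(entry['levels'])}")
```

A table or dictionary like this is enough for now; as the mapping is revised, only the `levels` entries would need to change.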

The next step is to map these five definitions to the AI risk ratio, which, in a nutshell, deals with determining how much security is enough when designing and deploying AI applications, be they cyber or cybernetic. This will be the subject of a separate post.

***Postscript***

January 7, 2020: Universal adherence to AI ethics standards is necessary to help ensure (1) safe, (2) reliable, and (3) robust implementation of AI. The problem is that intense competition (specifically between the U.S. and China) can also breed and accelerate an ethics divergence that erodes these three foundations. Integrating simple ethics principles, such as those represented by the power:complexity ratio, is a step in the right direction.

November 12, 2019: AI ethics is driven by a power:complexity ratio. These ethics should form the foundational framework not only for the design but also for the deployment of AI applications (cyber and cybernetic). As such, a license requirement for deploying certain types of AI applications is logical, as a matter of both policy and law. (See also here for a discussion of AI application taxonomy.)