Defining Artificial Intelligence

Fans of the English language appreciate its intrinsic pliability; it seamlessly tolerates illogical dissonance without jeopardizing its structure. Thus we have come to accept and even be comfortable with “near miss” as a way to describe what logically is a “near hit” and (the personally irritating) “irregardless” as a substitute for “regardless.” And to help resist Vulcan-like temptations, Bryan Garner urges that we prioritize idiom and usage over logic.

But what does all this mean for the definition of AI? All too often, AI is hastily and carelessly defined as an application that possesses human-like processing capabilities. That is, of course, only partially accurate. Can we tolerate definitional pliability when it comes to AI? Perhaps. But here we should not jettison logic in favor of idiom, because logic is what promotes precision. The logical core derives from the common denominator of AI applications: in each case we have an algorithm that receives percepts from its operational surroundings and performs actions. (See Russell & Norvig, Artificial Intelligence: A Modern Approach (3d ed.).) And that is the proper definition of AI. The definition is sufficiently broad that the existence of an AI application manifesting human-like processing capabilities (such as playing Go) in no way precludes an equally accurate classification, as AI, of an application that “merely” mimics a bird’s obstacle-avoidance behavior.
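To make the percept:action definition concrete, here is a minimal sketch in Python. The names (Percept, Agent, ObstacleAvoider) and the percept fields are purely illustrative assumptions, not drawn from any library or from Russell & Norvig; the point is only that anything mapping percepts to actions, even a bird-like reflex, falls within the definition.

```python
# Minimal sketch of the percept -> action definition of an agent.
# All names and fields here are illustrative assumptions.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Percept:
    """What the agent senses from its operational surroundings."""
    obstacle_ahead: bool
    obstacle_distance: float  # meters


class Agent(Protocol):
    """Anything that maps percepts to actions qualifies under this definition."""
    def act(self, percept: Percept) -> str: ...


class ObstacleAvoider:
    """A 'merely' bird-like reflex agent: no learning, no planning,
    yet still an agent -- it receives percepts and performs actions."""

    def act(self, percept: Percept) -> str:
        if percept.obstacle_ahead and percept.obstacle_distance < 2.0:
            return "veer-left"
        return "fly-straight"


if __name__ == "__main__":
    agent: Agent = ObstacleAvoider()
    print(agent.act(Percept(obstacle_ahead=True, obstacle_distance=1.5)))    # veer-left
    print(agent.act(Percept(obstacle_ahead=False, obstacle_distance=50.0)))  # fly-straight
```

Nothing in this sketch plays Go, yet under the percept:action definition it is as much an “AI application” as a Go engine is; that breadth is the point.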

***Postscripts***

April 4, 2021: In his essay “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” UC Berkeley Professor Michael I. Jordan argues that “the bigger problem is that the use of this single, ill-defined acronym [AI] prevents a clear understanding of the range of intellectual and commercial issues at play.” Jordan does not focus on the percept:action functionality; he speaks instead of “human-imitative AI.” (Essentially, this is a reference to Artificial General Intelligence, and, yes, we are not there yet.) His arguments are interesting, but I remain convinced that the percept:action focus is the correct and practical one for the fundamental normative definition of AI; it forms the foundation from which the definitions of other, more specific types of AI can emerge and inform the legal framework that sets the design demands and attendant liabilities.

November 15, 2019: Definitions are critical to allocating liability, and, of course, to much else. Forbes’ “Automation is Not Intelligence” highlights some of the challenges in defining AI. The article doesn’t address the legal challenges, but I will. One challenge is contractual. Exaggerating an application’s capabilities by anointing it with AI powers can be legally risky for the licensor. The risk is plainest (though not only) in the realm of express warranty: the exaggerated claim can sabotage the licensor’s risk profile by increasing the likelihood of a legitimate breach-of-contract claim. (It’s not AI, your honor! It’s just automation!) Keeping focus on the percept:action functionality of an application helps avoid that unnecessary exposure: only applications that exhibit that functionality should be labeled and marketed as “AI applications.”