AI and the Corrosive Effect of Careless Definition

Just a couple of days ago I came across an attention-grabbing headline suggesting that an AI rulemaking event in California was imminent. Interesting, right? Spoiler alert: the article is misleading.

Pedestrian use of the term “AI” is heavily influenced by pop culture and riddled with inaccuracies. But some folks should know better: at the very least, lawmakers, regulators, and lawyers. These stakeholders need to resist filling the definitional vacuum with nonsensical content and drama-laden hyperbole, and yet they often fail when it comes to AI. Maybe it is an attention-grabbing tactic: a writer or speaker who doesn’t know much about AI sprinkles the term into whatever they say or write and, voilà: instant credibility, higher visibility, importance. The opposite is true, of course.

The careless manner in which AI is frequently invoked reflects poorly on the speaker or writer. It eviscerates their argument, and if that were all it did, no big deal (except for them, of course). But there is a much more pervasive problem with this carelessness. When it comes from sources who should know better, sources who can influence the discourse around this critical technology, it has a magnified corrosive effect. Their intrinsic credibility amplifies the damage: it degrades the effort to build a logical, AI-centric nomenclature and, more broadly, an AI-centric legal framework.

Now back to the article I referred to above. It claims that the California Privacy Protection Agency (CPPA) is looking into setting rules that are likely to “impose AI regulations.” The author begins the piece by noting that AI regulation in the U.S. has been sparse (I agree, though it is worse than just “sparse”) and then continues on to “report” that the CPPA is undertaking AI regulation with its invitation for preliminary comments on proposed rulemaking. Fair enough, but read further and you see that the author points to Cal. Civ. Code §1798.185(a)(16) and its use of “automated decision-making processes” as the evidence that this rulemaking involves AI. Hold on just one minute! That is where the author lost me, and why this post became necessary.

The author makes a big leap. First, an automated decision-making process is not necessarily AI. Automation is not AI, decision-making capability is not AI, and combining the two does not render the application as a whole AI. Applications with automated decision-making capabilities take many forms, none of which need be AI. Take Excel, for example. It has a built-in automated decision-making process, which anyone can enable through formulas that use the IF function. Does that make Excel AI? Of course not. Let’s go one step further: my car determines when it is optimal to shift gears based on speed, road incline or decline, and so on. Is my car employing AI? No.
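To make that concrete, here is a minimal Python sketch of exactly this kind of rule-based automated decision-making (the function names and thresholds are made up for illustration). It decides things automatically, yet calling it AI would be absurd:

```python
# Plain conditional logic: "automated decision-making" with no learning,
# no perception, and no model of anything.

def grade_cell(score: float) -> str:
    """Mimics an Excel formula like =IF(A1>=60, "PASS", "FAIL")."""
    return "PASS" if score >= 60 else "FAIL"

def should_upshift(speed_kmh: float, incline_pct: float) -> bool:
    """A fixed-threshold gear-shift rule, like a transmission controller."""
    return speed_kmh > 40 and incline_pct < 5

print(grade_cell(72))          # PASS
print(should_upshift(55, 2))   # True
```

Both functions automate a decision, but each is a fixed rule written by a human; nothing is perceived, learned, or reasoned about.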

It takes much more than “automated decision-making process” capabilities to elevate an application to AI status. At the most basic level, the key to distinguishing between something that is and isn’t AI is that the former is an agent (e.g., a software program) that receives percepts and performs actions, where the process of forming actions from percepts is complex and requires significant computational capability. While an application with an automated decision-making process could have such attributes, it is not inevitable that it does, and announcing it as AI in a headline is careless and corrosive.
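For contrast, here is a toy sketch of the agent abstraction just described: a program that receives percepts and maps them to actions (all names here are hypothetical, for illustration only). Even this loop is merely the shell; what earns the label AI is the sophistication of the percept-to-action mapping inside it:

```python
from typing import Callable

# An agent at its most skeletal: receive percepts, produce actions.
def run_agent(policy: Callable[[str], str], percepts: list[str]) -> list[str]:
    """Feed each percept to the agent's policy and collect its actions."""
    return [policy(percept) for percept in percepts]

# A trivial reflex policy: still just a lookup table, far from intelligent.
reflex_policy = {"obstacle": "brake", "clear": "accelerate"}

print(run_agent(lambda p: reflex_policy.get(p, "idle"),
                ["clear", "obstacle", "clear"]))
# ['accelerate', 'brake', 'accelerate']
```

Note that even a lookup table satisfies the percept-to-action shape, which is precisely why the shape alone cannot be the legal test for AI.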

When it comes, true rulemaking involving AI will need to be much more robust than a casual reference such as Cal. Civ. Code §1798.185(a)(16). It will use a proper taxonomy (classifying applications by level and type), reference standards, and defined attributes. There will be no question as to what we are dealing with.

***PostScript***

June 20, 2022:

The draft Canadian “Artificial Intelligence and Data Act” (Bill C-27) defines “artificial intelligence system” as a “technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.” Though it uses popular AI terminology (autonomous, neural network, machine learning), this definition falls short of clarifying what AI is, and there is still plenty of debate around that.