Sentient Shmentient: The Sentient AI Claim Highlights Broader Challenges in AI

A couple of months ago, a Google engineer claimed that LaMDA is sentient. Of course, the claim went viral; how could it not? For the most part, however, it was widely dismissed, and rightfully so. But there is more to it. The claim highlights the ongoing challenge of properly and effectively defining AI.

As a term, “AI” is vulnerable to sweeping claims and definitions. The cumulative effect of this endless campaign of careless labeling, from the “automated decision-making processes” of Cal. Civ. Code §1798.185(a)(16) to the fantastical sentience claim, is so dilutive that AI becomes less and less understood. Put differently, when pretty much everything can be labeled “AI,” nothing really is AI.

The definition problem also affects the AI-as-inventor question. Recently, the US Court of Appeals for the Federal Circuit ruled that an inventor under the Patent Act must be a human being, dealing Stephen Thaler yet another setback in his effort to secure inventor status for his “creativity machine,” DABUS. Though the Federal Circuit’s ruling comes as no surprise, AI will eventually be accepted as an inventor. However, the threshold question of what type of AI should be required as a precondition for granting that status has not been addressed. Here I take a first step toward fixing that: my post “The AI Utility Levels Schema – Building an AI Classification” offers a disciplined framework for answering this question.