Germinating Seeds of Agency: Part III

Watson’s recent Jeopardy! performance arguably moves us ever closer to Kurzweil’s Singularity. It also lends further credence to his prediction that in less than three decades a computer will in fact match human intelligence (HI) and pass the Turing Test. When that happens (I don’t consider this an “if”) the AI we know today, or think we know, will look (and feel) very different.

Since this series focuses on examining the rationale and principles necessary for bestowing Agent status on AI (specifically, AiCE), I want to pose the question of whether HI is even necessary. In other words, even if we agree that HI is a useful attribute for AI/AiCE, the question remains whether it is a necessary, or even desirable, one.

Because Agency Law anticipates that Agents are, at the most fundamental level, human beings, we may reasonably conclude that it, at least implicitly, requires an Agent to possess HI. This attribute ties directly to our understanding of, and expectations regarding, capacity; i.e., the ability to understand a plethora of relevant laws and distill them into proper action for the benefit of the principal. But is this particular ability immutably tied to humans? I don’t think so.

The common denominator of the societal desire (and, equally, fear) of witnessing the emergence of HI in AI/AiCE is disbelief: a deep-seated paradigm, arguably nothing more substantial than a socially impregnated and perpetuated one, that AI is incapable of making the kind of “high-quality” decisions humans can. This line of thinking leads to the (forced) conclusion that for AI/AiCE to be effective it must possess HI. Should this become an immutable Truth, AiCE would need to pass the Turing Test (or something equivalent) before graduating to Agent status.

What constitutes a “high-quality” decision does not lend itself to monopoly; it is a matter of opinion. Take, for instance, a litigation setting. Counsel for opposing parties work long hours combing through case law and evidence, distilling vast amounts of data into complaints, counterclaims, motions, etc. Both handle the same case, yet arrive at different conclusions in service to their clients. Can any single piece of advice they render be objectively categorized as “low-quality” if it fails to yield a legal victory? Suppose it yields a victory only on appeal. Does it then enjoy a rebirth and reclaim the “high-quality” throne? If so, why?

We regard such a legal process as perfectly normal, and at the same time many of us admit and voluminously critique its flaws: flaws of a system that is clearly the product of HI. Does this suggest that, notwithstanding these “wrinkles,” we regard the legal outcome of the case as rightfully belonging to the “high-quality” genre? And even if the answer is in the affirmative, is it so due to social conditioning?

I think it is. As such, I think there is room for adopting the same flexible principles and, with them, arriving at the conclusion that AI can render sufficiently “high-quality” decisions without HI. Stated differently, attaining HI and/or passing the Turing Test should not be a condition precedent for bestowing Agent status on AI.