Germinating Seeds of Agency: Part IV

In Part III of this series, I concluded with the proposition that “attaining HI and/or passing the Turing Test should not be a condition precedent for bestowing AI with an Agent status.” But let’s assume, arguendo, that HI is a non-negotiable attribute of Agent status. What practical effect would that condition have?

This inquiry begins with a high-level consideration of two points (chosen more or less at random) that Ray Kurzweil makes in his book The Singularity Is Near:

(A) “Most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments…” and
(B) “…our technology will match and then vastly exceed the refinement and suppleness of what we regard as the best of human traits.”

There is a vast arsenal of excuses available in contemporary discourse for dismissing the notion of AI rights in general, and Agent status for AiCE in particular. I will home in on one: the opinion that AI will never attain sufficient technical sophistication to achieve HI status and effectively undertake functions traditionally reserved for humans.

This instance of dramatic underestimation is deceptively benign. Its danger lies in the fact that it supplies a recipe for perpetuating a technically stagnant social and legal infrastructure. After all, the exponential technological development Kurzweil so aptly describes will suffer no pause; that much it has already proven. Failing to prepare properly for the future technical prowess of AI thus leaves legal institutions poorly positioned: they will likely be incapable of dealing effectively with this species of AI when the time comes.

This dramatic underestimation dovetails nicely with Kurzweil’s second point and once again underscores the danger lurking in this seemingly benevolent illusion. As Kurzweil predicts, the future prowess of AI will “vastly” exceed the attributes that presently mark human activity as special. Put differently, AI is poised to surpass HI. That in and of itself should serve as a sobering point for those who argue that AI will “never” attain sufficient technical sophistication. The non-negotiable attribute, which may have been put up as a bar, is rendered irrelevant.

Now take this line of thought a bit further and place it in the context of Professor Mnookin’s challenge to the legal community to “think differently” about the rights of AI, discussed here. Once this is thrown into the mix, I think the practical effect of an HI condition precedent is to call for the implementation of an AI-friendly legal regime.

Before wrapping up, it is important to tie the abstract call for an appropriate AI legal regime to an actual proposal. I have written and argued for such a regime in this Blog and in my SLS talk: the Uniform AiCE Transactions Act (UATA), which you can also read more about here. Suffice it to say at this juncture that the principles on which I envision UATA being built need not be limited to AiCE, and are simple enough (albeit not necessarily easy) to implement in a way that fits the needs arising from the sort of hyper-intelligent AI Kurzweil predicts is waiting for us in the not-too-distant future.