Internet of People: Revisiting the AI Taxonomy

Let’s consider wetware in a computational analytical framework. Imagine taking the human brain’s vast capacity to learn, adapt, grow and regenerate (activities that, by the way, require very little power, about 20 watts). Combine that capacity with the exponentially growing computational power of nanoscale AI implanted in the brain or other parts of the body.

The app formats are virtually limitless. Imagine, for example, apps that operate in a “hive” format, similar to Ubiquitous Network Transient Autonomous Mission Entities (UNTAME): apps that cooperate and regenerate, ensuring operational and defensive continuity.
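
To make that regenerative continuity concrete, here is a minimal Python sketch. Everything in it (the Agent and Hive names, the regenerate method) is my own hypothetical illustration, not UNTAME’s actual design.

```python
import itertools

# Hypothetical sketch of "hive" continuity: a pool of cooperating agents
# in which any failed member is regenerated, so the mission persists even
# as individual agents are lost. Names here are illustrative only.

class Agent:
    _ids = itertools.count(1)

    def __init__(self) -> None:
        self.id = next(Agent._ids)
        self.alive = True

class Hive:
    def __init__(self, size: int) -> None:
        self.agents = [Agent() for _ in range(size)]

    def regenerate(self) -> None:
        """Replace failed agents in place, preserving hive strength."""
        self.agents = [a if a.alive else Agent() for a in self.agents]

hive = Hive(size=3)
hive.agents[0].alive = False   # one agent is lost...
hive.regenerate()              # ...and the hive restores itself
assert all(a.alive for a in hive.agents)
```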

What we have, then, are living, evolutionary algorithms: algorithms that bring about the dawn of a new paradigm, our epistemic boundaries irreversibly stretched and reconfigured.

All of this, of course, impacts the law, forcing it to change and provide answers to new legal quandaries. We must confront fascinating liability questions.

In the “computational capability continuum” that I proposed in 2012 at Stanford Law School, I described an AI taxonomy of apps categorized from A through D, with “Level A” apps being the simplest and “Level D” the most sophisticated. To recap, a “Level D” app manifests intelligence so sophisticated that, owing to its capacity for self-awareness, it can identify and reprogram any portion of its behavior (including in unpredictable ways). It can also create other apps without any human involvement, knowledge or contribution. Finally, a Level D app can manifest behavior indistinguishable from human behavior.

Now I introduce the “Level E” app. The key difference between Level E and Level D is that Level E consists of human-embedded nanoscale AI. Level D, in contrast, is limited to external apps and operating environments; it lives in the Internet of Things ecosystem. Level E drives what I refer to as the “Internet of People” (IoP): an ecosystem that exists entirely within the body and seamlessly interfaces with the outside world. (Note: I also discuss this in my Transformative Computing post, which mentions Ray Kurzweil’s prediction of our cloud-connected neocortex.)
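
As a rough illustration of where Level E sits on the continuum, consider the sketch below. The names (AppProfile, human_embedded and so on) are my hypothetical encoding of the traits described above, not a formal specification; Levels A through C are simpler apps and are omitted.

```python
from dataclasses import dataclass

# Hypothetical encoding of the taxonomy's distinguishing traits. Only the
# traits the taxonomy spells out for Levels D and E are modeled here.

@dataclass(frozen=True)
class AppProfile:
    level: str                      # "A" through "E"
    self_reprogramming: bool        # can rewrite any portion of its behavior
    creates_apps: bool              # spawns new apps without human involvement
    human_indistinguishable: bool   # behavior passes for human behavior
    human_embedded: bool            # nanoscale AI inside the host (Level E)

LEVEL_D = AppProfile("D", True, True, True, human_embedded=False)
LEVEL_E = AppProfile("E", True, True, True, human_embedded=True)

def ecosystem(app: AppProfile) -> str:
    """Level E operates in the IoP; Level D stays in the IoT."""
    return "Internet of People" if app.human_embedded else "Internet of Things"

assert ecosystem(LEVEL_D) == "Internet of Things"
assert ecosystem(LEVEL_E) == "Internet of People"
```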

How and to what extent a Level E app influences its human host’s actions, decision-making and other behavior will drive various liability issues and present a wide spectrum of legal challenges. But as I offered in my AI Taxonomy presentation, the liability ascribed to the developer of a Level E app’s alpha iteration is properly limited to that particular version: damage caused by subsequent iterations will not be attributed to the original developer, so long as the facts indicate the app behaved autonomously from the alpha.
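
That attribution rule is simple enough to state as code. A minimal sketch, assuming hypothetical names (Iteration, developer_liable) of my own invention:

```python
from dataclasses import dataclass

# Hedged sketch of the liability metric above: the original developer
# answers for the alpha iteration, but not for damage caused by later
# iterations the app evolved autonomously.

@dataclass(frozen=True)
class Iteration:
    version: str                 # "alpha", "beta", ...
    autonomous_from_alpha: bool  # did the app produce this version on its own?

def developer_liable(damaging: Iteration) -> bool:
    """Attribute liability to the alpha developer only when the damaging
    version is the alpha itself, or was not autonomously derived from it."""
    if damaging.version == "alpha":
        return True
    return not damaging.autonomous_from_alpha

assert developer_liable(Iteration("alpha", False))
assert not developer_liable(Iteration("gamma", autonomous_from_alpha=True))
```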

The use of Level E apps will also raise interesting host-consent questions. The intimate operational environment will demand that developers present hosts with terms and conditions in a way that cannot be easily assailed. For more on that, I recommend reading my Maximizing Representative Efficacy Part I and Part II.

The law in this area will not change proactively; it never does, at least not in the high-tech sector. But thinking about these issues now, and proposing taxonomies, liability metrics and other analytical and normative frameworks, will make the challenges easier to deal with when they become real.

***

Update: On January 16, 2014, the Society of Automotive Engineers (SAE) adopted a six-level automated driving taxonomy. Its Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems (J3016) closely follows the AI taxonomy I first presented at Stanford Law School in June 2012. It is good to see these issues approached in a uniform manner, which can help promote normative regulation and standards development for AI.