The Case for Robot Personal Liability: Part III – Iterative Liability and AI Taxonomy

Considering the proliferation of AI over the last decade, it is unsurprising that the concept of “iterative liability” (introduced in Part II nearly eight years ago) remains relevant, arguably even more so today. The same can also be said about the need for a normative framework by which our legal system can gauge and categorize the actions of AI entities. The foundations of this framework, in the form of an AI taxonomy, were described about seven years ago. This post revisits these two concepts.

With respect to iterative liability, it will be useful to begin with a recap of two items. First, “iterative liability” describes a propagating legal liability standard that can be applied to cyber and cybernetic design. The standard is adaptable to virtually any AI design capable of self-replication and iterative change (think machine learning, and more on that below). Under this framework, the “parent” entity is not, at least not by default, held liable for all the actions of its “progeny.”

Second, iterative liability is a macro-level framework: it is not intended to benefit one type of AI development alone, but to promote the overall development of AI (by most, if not all, measures a desirable outcome). As such, iterative liability is both AI-developer-friendly and pragmatic. It captures the operational reality that holding a human developer (or an AI replicator) liable along the entire evolutionary chain of iterations of a learning-capable autonomous cyber entity is innovation-inhibiting. It also reflects the reality that the foreseeability of an AI entity’s actions erodes over its evolutionary process. The pristine clarity of the starting point gradually blurs until it invariably fades away, much like the light-dot in a CRT television. Of course, this is not to say that liability vanishes; it does not. The liability attached to a particular developer, be it a human or an AI entity, is in motion, shifting away from the original developer as the iterations accumulate.
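For readers who prefer to see the attenuation point in concrete terms, here is a minimal sketch in Python. It is illustrative only: the exponential decay function, the `decay_rate` parameter, and the idea of expressing liability as a numeric share are simplifying assumptions made for this sketch, not elements of the iterative liability framework itself.

```python
# Illustrative-only sketch: how liability attributable to the original developer
# might attenuate as a self-modifying AI entity iterates. The exponential decay
# and its rate are hypothetical assumptions, not part of the framework described above.

import math
from dataclasses import dataclass


@dataclass
class Iteration:
    index: int              # 0 = the original, human-authored design
    developer_share: float  # liability notionally still attributable to the original developer
    progeny_share: float    # liability notionally shifted to downstream iterations


def attenuate(total_liability: float, iterations: int, decay_rate: float = 0.5) -> list[Iteration]:
    """Distribute a fixed quantum of liability across an iteration chain.

    Foreseeability (and hence the original developer's share) decays with each
    self-modification; the remainder migrates to later links in the chain.
    """
    chain = []
    for i in range(iterations + 1):
        developer_share = total_liability * math.exp(-decay_rate * i)
        chain.append(Iteration(i, developer_share, total_liability - developer_share))
    return chain


if __name__ == "__main__":
    for step in attenuate(total_liability=1.0, iterations=5):
        print(f"iteration {step.index}: developer {step.developer_share:.2f}, "
              f"progeny {step.progeny_share:.2f}")
```

The only point of the sketch is the shape of the curve: the original developer’s share shrinks with each iteration while the remainder migrates down the chain.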

As for the AI taxonomy, four levels of AI apps (or entities) were described, beginning with Level A and ending with Level D. (In a later post I introduced Level E, the nano-scale version of Level D. For the purposes of this post I will stick with Level D, but its application to Level E should be readily apparent.) This taxonomy was founded on a computational capability continuum that can be synced with a legal framework capable of dealing effectively with the potentially unpredictable behavior of such apps. Of particular relevance here is the Level D app. To recap, the “Level D app manifests intelligence levels so sophisticated that it can identify and reprogram any portion of its behavior (in unpredictable ways); i.e., it has a self-awareness capacity and can create other apps without human involvement.” (Another example of this replication is discussed in the UNTAME post from 2010.)
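For those who think in code, the taxonomy can be roughly encoded as follows. This is a simplified sketch: only the Level D and E descriptions come from the posts quoted above; the Level A–C entries are placeholders, and the capability flags and classification heuristic are assumptions, not canonical definitions.

```python
# Rough, illustrative encoding of the Level A-E taxonomy. Only Levels D and E are
# described in the posts referenced above; Levels A-C are placeholders here, and
# the capability flags are simplifying assumptions, not a canonical checklist.

from dataclasses import dataclass
from enum import Enum


class AILevel(Enum):
    A = "Level A (lower end of the capability continuum; see the original taxonomy post)"
    B = "Level B (intermediate; see the original taxonomy post)"
    C = "Level C (intermediate; see the original taxonomy post)"
    D = ("Level D: can identify and reprogram any portion of its behavior, "
         "has a self-awareness capacity, and can create other apps without human involvement")
    E = "Level E: nano-scale version of Level D"


@dataclass
class AppProfile:
    reprograms_own_behavior: bool
    creates_apps_without_humans: bool
    nano_scale: bool = False


def is_level_d_or_e(profile: AppProfile) -> AILevel | None:
    """Return Level D or E if the profile matches the post's description, else None."""
    if profile.reprograms_own_behavior and profile.creates_apps_without_humans:
        return AILevel.E if profile.nano_scale else AILevel.D
    return None


print(is_level_d_or_e(AppProfile(reprograms_own_behavior=True,
                                 creates_apps_without_humans=True)))  # AILevel.D
```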

The conclusion: Iterative liability and AI taxonomy are two critical concepts that together can help ensure our legal system deals effectively with harm and damage caused by AI apps. Absent the adoption of these kinds of AI-specific, mission-centric concepts, our legal system will be severely handicapped. It will be woefully ineffective at yielding workable solutions, if not entirely incapable of doing so.

***Postscript***

August 13, 2019: Deals involving Level D/E applications will require the buyer to internalize the attendant risks. But when we shift away from a sophisticated B2B transaction and are dealing instead with a B2C setting in which, for example, a patient receives a Level E implant, the risk-mitigation framework is murky at best. Can a consumer properly appreciate the risk in this setting? Stated differently, can a consumer give valid consent? A proper ex-ante evaluation of a consumer’s consent to a Level E application may be difficult, but it syncs with the concept of iterative liability.

May 27, 2019: Level C and D/E AI algorithms exhibit fractal characteristics. The algorithm’s evolutionary process (“progression” in the fractal sense) is capable of manifesting an infinite number of iterations (inexhaustible complexity). I first discussed this principle nearly nine years ago in the environmental-based learning post. Behaviorally, progression entails operational unpredictability; from a liability perspective, it underscores the importance of insulating the Level C and D/E developer from legal responsibility for harm caused by the app.
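As a generic illustration of why progression entails unpredictability, consider the sketch below. It uses a standard chaotic iterated map as a stand-in for fractal-like progression (it is not a model of the algorithms discussed here): two designs that differ imperceptibly at iteration zero diverge rapidly as iterations accumulate, which is the sense in which foreseeability erodes.

```python
# Toy illustration (not the post's model): a chaotic iterated map as a stand-in
# for fractal-like progression. Two nearly identical starting points diverge as
# iterations accumulate, illustrating the operational-unpredictability point above.

def progress(x: float, r: float = 3.9) -> float:
    """One 'iteration' of the logistic map, a standard example of chaotic dynamics."""
    return r * x * (1.0 - x)


def divergence(steps: int = 30, delta: float = 1e-9) -> list[float]:
    a, b = 0.5, 0.5 + delta  # two designs that differ imperceptibly at iteration 0
    gaps = []
    for _ in range(steps):
        a, b = progress(a), progress(b)
        gaps.append(abs(a - b))
    return gaps


if __name__ == "__main__":
    for i, gap in enumerate(divergence(), start=1):
        print(f"iteration {i:2d}: divergence {gap:.6f}")
```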