The Case for Robot Personal Liability: Part II – Iterative Liability

The Canadian philosopher and scholar Marshall McLuhan is often quoted as observing that “First we build the tools, then they build us.”

One could debate what “us” means here. Was McLuhan referring to people? Or merely to machines building our society?

Interesting as these questions may be, we need not get bogged down in them. Instead, let’s consider, if only for a moment, the legal significance, in terms of liability, of cyber and cybernetic entities building more sophisticated AI iterations of themselves. (See also my discussion on UNTAME.)

Enter the concept of “iterative liability” (IL) in AI. It refers to liability standards that propagate into and within each new entity version. To illustrate, consider a cybernetic entity that creates another cybernetic entity. Under IL, the parent entity need not (necessarily) be held responsible for the actions of its “progeny.” Rather, the new entity is, by law, subject to the full spectrum of liability standards addressed in UATA. (IL can, of course, also be applied to humans creating AI entities, but my main interest in this post is analyzing how it would apply solely within the context of AI.)

IL makes sense in AI applications. Tracing liability to the original creator of an AI entity throughout all subsequent versions (i.e., the “iterations”) is wrong for a number of reasons. Perhaps most significant among these is that the original creator could not have foreseen the range of actions of the subsequent versions. IL takes into account that AI generates “child” entities, all of which follow evolutionary algorithms and develop at exponential, if not greater, rates. IL therefore accommodates the simple (yet complex) fact that too many variables are at play here, and they are dispositive variables when it comes to tracing liability to the original creator.

What kind of liability are we talking about? We start modestly with the kind we know, such as garden-variety torts, building liability standards that mirror those we (humans) are subject to today.

If all this irritates your epistemic taste buds, allow me to highlight (yes, once again) that we are dealing with a very unique type of technology here. It is one that will, in a relatively short period of time, exceed human intelligence. And there is no reason to expect it will stop at that level. I think quite the opposite is true: AI will continue to evolve in accordance with evolutionary algorithms, or it may even create new ones that are presently difficult, if not impossible, to grasp.

None of this is to suggest that we are looking down the barrel of a bleak future, such as the one that typically infests pop-culture treatments of AI. Although I definitely find James Cameron’s ‘Terminator’ series entertaining, that is where it stops. Even with my overexposure to these grim treatments, I am optimistic about the future of AI, and AiCE in particular. Seen through the prism of UATA, for example, I can envision the creation of a dynamic legal system that will enable us to efficiently manage the behaviors of these cyber and cybernetic iterative entities.

I’ll conclude here by noting that while it may initially seem as if UATA is a model to be applied solely in the United States, that is not the case. I believe that AI in general, and AiCE in particular, are of unprecedented magnitude in terms of potential and growth. This presents a quantum shift, one that is already beginning, if only very modestly, to tease and test how we will address complex legal issues. As such, this is hardly a United States-only challenge, and I see it, and analyze it, within an international conceptual and analytical framework.
