Ubiquitous Network Transient Autonomous Mission Entities

Autonomous functionality, on its own, is not (relatively speaking) technically novel, and it is by no means unique to AiCE. It pervades a wide variety of applications, such as aviation. It is also making inroads in automotive settings, which led to some interesting discussions with Dr. Sven Beiker, Executive Director of the Center for Automotive Research at Stanford (CARS). He and I discussed how AiCE could be integrated into cars to assist drivers with legal issues arising from various driving conditions.

Applying AiCE to car-centric applications is, at first glance, somewhat of a departure from the original operational setting I had set out for it, namely, the internet. And while I plan to continue collaborating with Dr. Beiker, I wanted to illustrate here the relevance of another non-computational-law application that is, nevertheless, conceptually closer to AiCE. This example comes courtesy of the Oak Ridge National Laboratory's "Ubiquitous Network Transient Autonomous Mission Entities" (UNTAME) project.

UNTAME is a futuristic defensive strategy composed of a cooperative framework of robotic code (cybots) that share a single mission: protect large computer networks from hackers. The cybots exist in a "hive", an apt metaphor that describes both their "habitat" and their ability to cooperate and regenerate, the latter being a valuable attribute because it promotes defensive continuity. Thus, if some cybots "die" in the line of duty, they are quickly replaced by others who already know everything their fallen comrades did and can pick up right where those left off, quickly sealing network security vulnerabilities. Regeneration is also useful because it increases the odds that "good" cybots will more quickly identify the "rogue" ones and destroy them. Their "transient" feature also strengthens defensive continuity, since a decentralized model makes them relatively less vulnerable to counterattack.
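To make the hive metaphor concrete, here is a minimal toy sketch in Python of the two properties described above: knowledge pooled at the hive level, and regeneration of fallen cybots. This is purely my own illustration; the class names (`Hive`, `Cybot`) and mechanics are hypothetical and not drawn from UNTAME's actual design.

```python
# Toy model (NOT UNTAME's real architecture): a hive that pools knowledge
# and regenerates fallen cybots so defense continues uninterrupted.

class Hive:
    """Shared habitat: pooled knowledge plus a roster of active cybots."""

    def __init__(self, size):
        self.knowledge = set()       # vulnerabilities known hive-wide
        self.target_size = size
        self.cybots = [Cybot(self) for _ in range(size)]

    def report(self, vulnerability):
        # Any one cybot's discovery becomes common knowledge instantly.
        self.knowledge.add(vulnerability)

    def regenerate(self):
        # Replace "fallen" cybots; newcomers inherit the pooled knowledge,
        # so they pick up right where their predecessors left off.
        self.cybots = [c for c in self.cybots if c.alive]
        while len(self.cybots) < self.target_size:
            self.cybots.append(Cybot(self))


class Cybot:
    def __init__(self, hive):
        self.hive = hive
        self.alive = True

    @property
    def known(self):
        # Knowledge lives in the hive, not in the individual cybot.
        return self.hive.knowledge

    def discover(self, vulnerability):
        self.hive.report(vulnerability)

    def die(self):
        self.alive = False


hive = Hive(size=3)
hive.cybots[0].discover("vuln-123")
hive.cybots[0].die()
hive.regenerate()
# The replacement already "knows" what its fallen comrade learned:
assert all("vuln-123" in c.known for c in hive.cybots)
```

The design choice worth noting is that the cybots hold no private state worth losing: because knowledge is stored at the hive level, a replacement is functionally identical to the cybot it replaces, which is exactly what makes regeneration a continuity mechanism rather than a restart.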

The folks behind UNTAME understand the risks, fears, and skepticism surrounding their undertaking. As examples of what they're doing to dampen those concerns, they cite code-auditing efforts, the need for system transparency, incubating the code in a test environment until it can prove itself, and designing the cybots to operate within strict operational boundaries ("mission directives").

The fictional Sarah Connor of Terminator lore would probably not buy any of those explanations. She symbolized the fear and pop critique of excessive, if not obsessive, reliance on automation that can lead to (initially) unintended consequences. You probably remember how “Skynet” (the villainous military computer network trying to kill her in the first two installments) was designed to be independent and smart, and that it achieved an unintended “awareness”. As the story goes, that wasn’t good. In fact, people who were not “wearing 2 million sunblock” had a “very bad day” when the nukes started to fly.

While UNTAME certainly doesn’t stand poised to vaporize the world, it is nonetheless a vivid example of how this breed of high-tech easily spawns complex, and in no uncertain terms, relevant discourse. We might ask, for example, how UNTAME’s “awareness” abilities translate into legal standing. Could it be sued? Once its creators endow it with the capabilities that render it “autonomous”, have they removed themselves from the legal liability chain? And from a normative perspective, why, and under what circumstances, would our society allow such a disconnect?

Part of the answer here depends on risk-benefit analysis. UNTAME supporters might argue that “autonomous” technology is exactly what we need in a world of increasingly sophisticated threats. Terrorists and other unsavory characters routinely launch cyber attacks on the computer networks of governments and other critical institutions; failing to defend them aggressively and effectively is far more dangerous than designing and launching cybots. Critics, in contrast (and I have witnessed this myself when presenting on AiCE), point to pop culture, particularly movies and books (ergo my Terminator reference), as proof that this high-tech stuff is a bad idea.

So while AiCE is being designed with a computational-law mission bias and is correspondingly vastly different from UNTAME, the challenges of implementing it have similar undertones. This operational reality significantly influences my argument for creating UATA, which is described in a bit more detail in the April 17 post, and which will be a topic of future posts.