The Encryption Debate and Its Implications for (Unreasonably Dangerous) AI

The FBI’s ex parte application for an order compelling Apple to unlock the iPhone implicated in the San Bernardino shooting was mooted when the government gained access to the device, reportedly using Cellebrite’s technology. Unsurprisingly, however, the debate over encryption and backdoors is far from over.

The encryption debate is a highly charged one. Judging by the flood of on-topic emails running through the Stanford cyberprof listserv, the issue was, and continues to be, a significant one. At one point during this email hailstorm, I sidebarred with a colleague to reflect on the various positions and arguments. His comment on the phenomenon of what he termed “privacy zealots” (those who see privacy as an absolute, unassailable right, one that must be protected at all costs from governmental snooping) left an impression on me, not so much for what it says about the present encryption debate, but for where we are headed with artificial intelligence (AI): specifically, the design of uber-intelligent AI, machine learning, and quantum computing.

But let’s stay on the topic of encryption for just a moment. Imagine that a company, ACME, intentionally designs “unbreakable” encryption into its communications platform. So what do we have? We have a platform that can be used for lawful purposes, sure. But it can also be used by all sorts of unsavory types who now have a great new tool with which to plan their nefarious deeds. Further suppose that a terrorist attack occurs, and that during the investigation the authorities find reasonable evidence that the terrorists used ACME’s communications platform to carry it out.

Now here’s the problem. How do we appropriately reconcile ACME’s strong-encryption position with this tragedy? Where do we draw ACME’s legal liability boundary line(s) in this event? Should society, for example, tolerate ACME’s argument that it cannot be held legally liable for any part of the attack, notwithstanding its decision to design the platform this way?

Now let’s move this inquiry into the realm of AI. Imagine that a company, cyberACME, designs AI platforms and applications over which it has intentionally relinquished all control. Suppose some of cyberACME’s AI platforms are capable of doing harm, not out of the box, but through their evolution: they start in a semi-autonomous state but, without supervision, evolve into an autonomous one, and it is in that state that they cause harm. Do we allow cyberACME to claim it is not liable? I think the answer is “no.”

Privacy, like any other right we hold, is not absolute; it is tempered by normative restraints. And this is the driving principle behind the need to foster a balanced ecosystem, one in which the forces of innovation, individual rights, and governmental needs are properly accounted for. This becomes all the more crucial as our technological landscape moves deeper into the realm of highly sophisticated, autonomous AI systems. Society has virtually nothing to gain, and everything to lose, if this accountability is not properly institutionalized. More to the point, robust, socially beneficial AI development is unlikely to thrive in an environment that lacks these features.

***

Update 4/13/2016: On the subject of the unpredictability principle in AI: Wired reported that during AlphaGo’s match against Go master Lee Sedol, AlphaGo made a move that shocked just about everyone; one commentator noted that no human would have played it. The move was so disconcerting that Lee Sedol stood up and briefly left the match, presumably to compose himself. The episode dramatically illustrates the unpredictability of autonomous AI, an unpredictability that, in other settings, could cause damage, injury, or death.