Abstract Conceptualization and IP Infringement by AI: Part II

PARC researchers recently interviewed me for their research paper, tentatively titled “Creating Sustainable Competitive Advantage in Machine Learning Systems.” It was a lively conversation, after which I was inspired to look back at what I had written on the topic of IP infringement by AI. The funny thing was that, as I read it, it seemed as if I had written it just a couple of weeks ago. But no… the first part of this post was conceived almost four years ago. Since then, machine learning has made some big headlines (Watson, DeepMind), with arguably the most memorable one being AlphaGo’s defeat of Go master Lee Sedol. More on that in a moment.

First, let’s briefly recap how we got here. In the first installment of this post, I shared my thoughts on a Stanford/Google machine learning study demonstrating that exposure to large data sets directly enhances the performance of machine learning algorithms. With effectively unlimited data sets to learn from, I observed, the learning capabilities are theoretically boundless. Coupled with the exponential speed at which these capabilities develop, we face an ever-increasing probability of high-frequency, unpredictable AI activities.

Now back to the Go match. The AI unpredictability variable was dramatically demonstrated in the 37th move of the second Go game between AlphaGo and Lee Sedol. Wired has a pretty interesting account of this. It reported that this particular AlphaGo move shocked just about everyone observing it. It seemed like a mistake. As one observer noted: “It’s not a human move. I’ve never seen a human play this move.” Perhaps most striking is that master Sedol was apparently so unnerved by it that he had to get up and leave the room to recompose himself, and even when he returned, he seemed at a loss as to how to respond.

Within the context of a game, AI unpredictability is harmless. But once we exit that safe haven, we enter into much more difficult territory; the consequences of a “shocking” action can be dangerous and expensive.

So the key legal issue we need to grapple with is centered on the question of liability. Specifically, who is the appropriate liability-bearing party, and how much liability should they bear? Intuitively, we point to the AI designer. But is the “appropriate” result here synchronous with a reasonable, fair and economically efficient outcome? Maybe not. If AI designers are by default liable, without limit, simply because they build AI applications that can behave unpredictably, that can have the undesired effect of hobbling a nascent industry. The key to getting this right, therefore, is to adopt a balanced approach. This means focusing on, and carefully thinking through, the parameters that trigger liability and how “heavy” that liability is going to be.

To begin with, legal and industry standards drive the composition of these liability-triggering parameters. If, for example, an AI designer were legally required to bake in a security mechanism (such as a back door) that quickly disables the AI when a harmful activity occurs, then the failure to implement that mechanism becomes the violation, not the design of the AI itself.
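For illustration only, here is a minimal Python sketch of what such a disabling mechanism might look like. Every name in it (Agent, SafetyWrapper, is_harmful) is hypothetical, and a real system would need far more sophisticated monitoring than a single predicate check; the point is simply that the safeguard, not the underlying AI, is the thing whose absence would trigger liability.

```python
# Minimal sketch of a "disable on harmful activity" safety mechanism.
# All names (Agent, is_harmful, SafetyWrapper) are hypothetical; a real
# system would need far more robust monitoring than one predicate check.

class Agent:
    """Stand-in for an autonomous AI agent that proposes actions."""
    def propose_action(self, observation):
        # A real agent would run a learned policy here.
        return {"type": "publish", "content": observation.upper()}


def is_harmful(action) -> bool:
    """Hypothetical harm check, e.g. an IP-infringement classifier."""
    return "COPYRIGHTED" in action.get("content", "")


class SafetyWrapper:
    """Intercepts every proposed action and disables the agent on a harmful one."""
    def __init__(self, agent):
        self.agent = agent
        self.disabled = False

    def step(self, observation):
        if self.disabled:
            raise RuntimeError("Agent has been disabled by its safety mechanism.")
        action = self.agent.propose_action(observation)
        if is_harmful(action):
            self.disabled = True   # the "quick disable" described above
            return None
        return action


if __name__ == "__main__":
    wrapped = SafetyWrapper(Agent())
    print(wrapped.step("original text"))      # allowed through
    print(wrapped.step("copyrighted text"))   # triggers the disable -> None
```

The design choice worth noting is that the wrapper, not the agent, holds the off switch: under the standards-based approach described above, it is the presence and quality of that wrapper layer that a court or regulator would examine.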

So now back to the question of IP infringement, which is where the PARC researchers spent much of their questioning. My thinking here is as follows: the capability of autonomous AI to infringe on IP is unquestionably moving out of theory and into reality. How we deal with it in an efficient, rational manner is our challenge. When the PARC researchers asked me whether we should hold the AI liable, I responded that we cannot; at least not yet. Our legal regime is not designed to hold an AI “entity” legally liable for infringement. For now, the liability appropriately rests with the designer. The extent of the designer’s liability should be based on the existence and quality of the security feature that was baked in, where a willful failure to include it should carry the most severe penalty.

*****

Update 10/21/2017: The PARC researchers who interviewed me published their paper, Defining Characteristics of Sustained Competitive Advantage in Machine Learning Systems. Since that interview, DeepMind has evolved from AlphaGo to AlphaGo Zero. The latter iteration recently beat its predecessor at Go, using a fraction (10%) of its predecessor’s AI processors. Not only did it do so quickly (in 40 days), but AlphaGo Zero accomplished this knowing only the rules of the game; AlphaGo, in contrast, was trained on thousands of Go games from which it could strategize its moves. This evolutionary pattern provides a glimpse of a future in which holding AI liable will be necessary. I explore this in more detail in the upcoming ABA book “The Law of Artificial Intelligence and Smart Machines.”

Update 5/1/2016: The extent to which neural networks can be efficient at compression triggers interesting questions in the context of adaptive learning. As I have written here before, the representative data set in this analytical framework consists of the operational environment in which the cyber(netic) agent is deployed. The more efficient the compression, the more it fuels powerful neural learning capabilities, which concomitantly increases the probability of unpredictable results, both harmful and benign (even creative).
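As a loose, purely illustrative take on what “efficient compression” means operationally (my own toy example, not the framework from the post), the Python snippet below uses PCA as a stand-in for a neural compressor: the fewer components needed to reconstruct the data well, the more efficiently the structure of the environment has been captured. The data set and the 99% threshold are arbitrary choices for the example.

```python
# Loose illustration of "compression efficiency": how few dimensions are
# needed to capture the data well. PCA stands in for a neural compressor;
# the synthetic data and the 99% threshold are arbitrary example choices.
import numpy as np

rng = np.random.default_rng(0)
# Data that secretly lives on a 3-dimensional structure inside 20 dimensions.
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
data = latent @ mixing + 0.01 * rng.normal(size=(500, 20))

# PCA via SVD of the centered data.
centered = data - data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(singular_values**2) / np.sum(singular_values**2)

# "Compression efficiency": how many components are needed to capture
# 99% of the variance. For this data set the answer is close to 3.
components_needed = int(np.searchsorted(explained, 0.99) + 1)
print(f"Components needed for 99% variance: {components_needed}")
```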