Abstract Conceptualization and IP Infringement by AI

I recently presented a paper at Stanford Law School that examines IP infringement by AI.  I opened my presentation by inviting my audience to sit back and indulge in some science-fiction fantasy.  Let’s do the same here, just for a moment, and fast-forward to the year 2023.

We see a giant, dark auditorium.  A stealthy hover camera glides overhead as thousands of people applaud the entrance of their TED talk speaker, Peter Weyland.  We listen to him eloquently survey tools and technologies across the millennia.  From fire (courtesy of the titan Prometheus), he roams smoothly through the centuries, highlighting stone tools, the wheel, gunpowder, the light bulb, and onward to nanotech, fusion, M-theory, and the creation of cybernetic individuals indistinguishable from humans.  The crowd is silent.  Pausing for dramatic effect, Weyland proclaims, “We are the gods now.”  Armed with what he describes as his unlimited ambition and refusal to fail, he concludes by informing the audience that he is poised to “change the world.”

Yes, this viral YouTube video was a clever promo for Ridley Scott’s “Prometheus”.  But it was also reminiscent of one of my all-time favorite Steve Jobs quotes: “The ones who are crazy enough to think that they can change the world are the ones who do.”  It is the “Weylands” of the future who will create AI that is indistinguishable from humans and vastly more intelligent (see also my post).  No one can stop this.

This future holds some incredible technological possibilities, but for the time being let’s rewind back to 2012.  Let’s put aside Weyland’s David 8 android and consider a much tamer question, one more relevant to my inquiry into IP infringement by AI:  Is a computer capable of creating abstract conceptualizations?

We know that repeated exposure to an image of an object leads a young child to learn, and to identify ever more quickly, what that object is, even before learning the word for it. Is this power of learning, this abstract conceptualization, the exclusive domain of humans, or could computers do it too?  This is what Stanford University and Google researchers set out to examine.

Their “unsupervised learning” experiment consisted of a neural network running on 16,000 processor cores that was exposed to 10 million unlabeled still images taken from YouTube videos.  None of these images was ever labeled (hence the “unsupervised” nature of the learning).  The result:  the network independently constructed an image of a cat from an abstract conceptualization.  (We’re inching closer to David 1.)
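The Stanford/Google system used a far larger deep network, but the core idea, structure emerging from unlabeled data, can be sketched with a much humbler unsupervised algorithm: k-means clustering. Everything below (the toy feature vectors, the two hidden "concepts") is my own illustrative assumption, not the researchers' actual setup:

```python
import numpy as np

# Toy stand-in for unlabeled image features: two hidden "concepts"
# (say, cat-like and non-cat-like) whose names the algorithm never sees.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
group_b = rng.normal(loc=5.0, scale=0.5, size=(100, 2))
data = np.vstack([group_a, group_b])
rng.shuffle(data)  # destroy any ordering that hints at the groups

def kmeans(points, iters=20):
    """Two-cluster k-means: groups emerge from the data without labels."""
    # Seed with the two most horizontally distant points, so each
    # well-separated group contributes one starting center.
    centers = points[[points[:, 0].argmin(), points[:, 0].argmax()]].copy()
    for _ in range(iters):
        # Assign every point to its nearest center...
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each center to the mean of its assigned points.
        for j in range(2):
            if (labels == j).any():
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

centers, labels = kmeans(data)
# The recovered centers land near the true (never-revealed) group means.
```

No label ever enters the loop, yet the algorithm recovers the two underlying "concepts" on its own; that is the essence, in miniature, of what the cat experiment demonstrated at scale.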

The Stanford/Google study demonstrates that exposure to large data sets directly enhances the performance of machine learning algorithms.  I do not know whether the researchers formally or informally considered the following point, but these data sets need not be of the spoon-fed variety apparently involved in this research.  As I have written here in the past, the operational environment in which a cyber(netic) entity is deployed could itself serve as the data set.  That translates into a virtually endless, fluid, and incredibly rich supply of data.  We can then begin to appreciate the analytical prowess an algorithm could develop from such exposure.  It teases us with the prospect that the performance of the cyber(netic) algorithm would continue to grow, perhaps exponentially, and the possibilities from there know no boundaries.

In practical terms, these cyber(netic) entities could learn, in an unsupervised manner, about different types and classes of IP.  Of course, one of the looming questions remains:  to what end?  Not to be flippant, but to any end, limited only by human imagination.  That said, I venture that a cyber(netic) entity with knowledge that vast could, for example, engage in forensic analysis and detect instances of IP infringement far more quickly and accurately than humans.  In that capacity it could detect online copyright infringement or evidence of patent infringement, again depending on the needs of its human deployers.  Where such infringement is the work of other cyber(netic) entities, our friendly variant would likewise be markedly swifter and more accurate than humans.
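To make the forensic-analysis idea concrete, here is a deliberately simple sketch of one building block such an entity might use: comparing character n-gram fingerprints of two texts to flag near-verbatim copying. The sample texts, the functions, and the similarity scores are my own illustrative assumptions; real infringement detection is of course far more involved (and legal infringement turns on much more than textual similarity):

```python
from collections import Counter
import math

def shingles(text, n=3):
    # Character n-grams ("shingles") as a crude content fingerprint.
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    # Cosine similarity between two shingle-count vectors: 1.0 = identical.
    num = sum(a[g] * b[g] for g in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

original  = "The quick brown fox jumps over the lazy dog."
suspect   = "The quick brown fox leaps over the lazy dog."  # one word changed
unrelated = "Colorless green ideas sleep furiously."

sim_suspect   = cosine(shingles(original), shingles(suspect))
sim_unrelated = cosine(shingles(original), shingles(unrelated))
# sim_suspect is high (near-copy); sim_unrelated is low.
```

A deployed system would scale this idea up with indexing and learned representations, but the principle, scoring how much of one work survives in another, is the same.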

While the distance between abstract conceptualization and Peter Weyland’s fictional David android series is by all measures vast, it remains intriguing to ask: how far are we from seeing such cyber(netic) entities? Part of the answer can be found in Ray Kurzweil’s book “The Singularity Is Near.”  Kurzweil observes that “most long-range forecasts of what is technically feasible in future time periods dramatically underestimate the power of future developments…” Thus the answer is: perhaps not as far as we might initially estimate, at least not with respect to the IP model discussed above.

*****

Update: May 8, 2017 – In addition to the very cool portrayal of the “birth” of the cybernetic AI “Walter” is the equally cool portrayal of AMD as the AI supplier to the fictional Weyland-Yutani. This is an excellent example of blending fiction with fact, so effective that it is actually predictive. AMD is a well-known graphics processing unit (GPU) manufacturer. Together with NVIDIA (whose GPUs are used by tech giants such as Alibaba, Amazon, Baidu, Google, Facebook, Microsoft, and Tencent to develop AI applications), the two companies currently represent the most significant source of AI computing power.

Update: July 15, 2015 – The post above was written nearly three years ago. Today, deep artificial neural networks, such as those developed by Google’s DeepMind, are rapidly gaining public and investor attention. This signals the early stages of a new era, one in which the capabilities I described three years ago edge closer to becoming real.
