Do AI Entities Need Rights?

The following is an excerpt from several presentations and interviews I have given on the subject of AI rights. Fundamentally, it is important to frame the policy dimension properly so we can understand what sort of legal framework we are likely to end up with.

  • As a matter of policy, we know that legal rights are granted in order to serve a particular public policy. For example, the policy driving copyright law rests on the view that promoting the progress of science and the useful arts is a worthwhile endeavor that we, as a society, want to encourage. Once we determine that something is a “good” thing, we establish laws around it to promote it. The logical link required for doing this with AI is currently missing.
  • A key challenge in thinking about rights and incentives for AI is that human-centric incentives cannot be easily translated, if at all, into a non-human framework.
  • When we ask whether AI entities need rights, we find that we are pushing the inquiry into an overly abstract dimension, one well-suited for science fiction but for little else. The rights inquiry fails to yield useful results so long as we have not resolved the question of “why” AI entities need rights in the first place.
  • But there is a possible link nonetheless. We know that the policy behind creating the corporate entity was designed to promote economic growth, and the corporate entity was and remains a valuable business framework. Similar policy principles could drive using AI entities the way we use corporations, granting them limited but necessary mission-centric rights. One example would be using AI as an avatar-like entity that helps shield an individual from their online environment. (This is the subject of a paper I presented at the 2010 Spring Symposium of the Association for the Advancement of Artificial Intelligence.) The concept entails using an AI avatar, essentially a legal entity, to shield the owner’s private information. This AI entity becomes the user’s “veiled” identity, providing protections similar to those corporate shareholders enjoy, all without degrading the flow of information vital to innovation and new value generation. Of course, while this AI entity could have rights, those rights remain inextricably linked to a human.

***Postscript***

April 27, 2019: AI cannot be a named inventor. The USPTO today denied a patent application that listed a machine as the inventor. The machine, called DABUS, was created by Dr. Stephen Thaler and is classified as a “Creativity Machine.” This type of machine comprises a series of neural nets: one neural net creates content in response to certain stimuli, and another monitors and appraises the creations. This “critic” net’s responses to the creator net also serve as stimuli that help the creator net generate optimal content. The USPTO’s decision is legally sound. There is no compelling reason (at this time) to change the law.

December 9, 2019: Section C(4) of the European Commission’s expert report “Liability for Artificial Intelligence and other emerging technologies” is devoted to a discussion of “legal personality.” The report says the experts believe there is “currently no need to give a legal personality” to AI because, even for “fully autonomous technologies,” there is a clear line of sight between the AI harm and the natural person to whom it can be attributed. This view is only partially correct. It fails to account for Level D AI applications, a class of applications that I first described at SLS in 2012 and that may not always have a liability line of sight. For these applications, an iterative liability approach makes sense.

November 6, 2019: Imagine an AI owns a patent. Now what? Even if there is a way to police the patent (who coded that capability?), how is it enforced? The AI would also need legal standing and the capacity to sue the infringer, be it a natural person or a corporation. Ok. So what is the venue? What is the remedy? Awarding the AI monetary damages is beyond an excessive stretch of the imagination at this point and really just toys with absurdity. So we are left with injunctive relief. But how do we communicate the award/outcome to the AI? This is but a taste of the issues around AI IP ownership. The overarching point is this: the AI IP rights discourse is purely academic and will remain so for a very long time. How long? At the very earliest, it becomes maybe, just maybe, slightly more relevant when we reach maturity-scale in artificial general intelligence applications (the average estimate is 81 years from now), and even then only as a matter of its technical dimension and capabilities. The more important normative and legal dimensions may not even mature by then.

October 1, 2019: AI owning rights in IP it ‘creates’? Not so fast. A hybrid of the work-made-for-hire principle is a more appropriate conceptual, operational, and legal framework, with default ownership vesting in the creator/owner of the AI. I am working on a post that will address this in more detail. Stay tuned.