AI Inventorship and the Threshold “Line of Sight” Principle

Section C(4) of the European Commission’s expert group report “Liability for Artificial Intelligence and other emerging technologies” is devoted to a discussion of “legal personality.” The report says “experts believe there is currently no need to give a legal personality” to AI because, even with “fully autonomous technologies,” there is a clear line of sight between the harm caused by the AI and the natural person to whom that harm can be attributed. Highlight that term, “line of sight”; it’s important.

Now consider that view in the context of granting inventor status to an AI system. Setting aside the novelty and cool sci-fi appeal, the (current) rationale for doing so seems too tenuous. Proponents argue that it is necessary because there may be machine-made inventions that would otherwise qualify for a patent but would be refused protection because the inventor is not a natural person, and this would, in turn, depress innovation, and so on. Essentially, this position takes the view that there are inventions with no human line of sight. Really?

Maybe AI can realize the “inventive step,” but to say that this accomplishment is devoid of a human line of sight seems, again, tenuous. With Level D AI applications, yes, that is where the human line of sight begins to attenuate, pushing the AI closer to warranting inventorship status. But we are far, very far, from achieving Level D.

So there is a threshold question here that remains to be decided: how far removed does the human need to be from the center, the AI application’s core algorithm (i.e., the first iteration), before the invention can reasonably be said to have no human line of sight? And even then, without a human line of sight, the rationale for AI inventorship status remains weak. Merely declaring that the absence of such status will stifle innovation has a whiff of, well, alarmism.

***Postscript***

June 29, 2022: AI is not yet sufficiently advanced to invent without a human. That was the conclusion the UK Intellectual Property Office reached in the recently published outcome of its “consultation.” The consultation, essentially a request for public comment, was issued to determine what changes, if any, are needed to patent and copyright law in light of the growing use of AI. None, for the time being. The outcome suggests that once AI becomes more advanced, changes (specifically to inventor status) may be required, which is consistent with my thoughts above.

March 17, 2022: Inventorship is a threshold question. It depends on utility (do we need it?) and on the operational environment in which the AI is used. The metaverse is likely where this question will be realized, in the sense that the utility and environment variables coalesce there. And it is not limited to patents: IP ownership, and the even more complex question of infringement, are also tied to this. More on this coming soon.

December 22, 2021: In September 2021, the UK Court of Appeal ruled in Stephen Thaler v Comptroller General of Patents Trade Marks and Designs [2021] EWCA Civ 1374 that an AI cannot be named as the inventor of a patent. The European Patent Office Legal Board of Appeal has now reached the same conclusion in a parallel proceeding: it was announced yesterday that the Board is dismissing Dr. Thaler’s appeal to name his AI (DABUS) as an inventor. Though it looks like the UK dispute will make its way to the UK Supreme Court, I doubt a different result will be forthcoming, and that makes sense. Given the human line of sight principle I discuss in the post above, and absent a solid rationale to the contrary (which does not exist), there is no compelling reason for the courts to recognize an AI as an inventor.