The Writers’ Strike Four Months In: Stanford’s Paul Goldstein on Artificial Intelligence and the Creative Process

On May 2, a key gear of America’s entertainment industry ground to a halt as some 11,500 members of the Writers Guild of America (WGA) put down their pens and went on strike. The strike highlights growing concern in the creative community about Artificial Intelligence. What if content can be generated by AI, produced from a computational scrubbing of the internet that spits out an aggregate of other people’s stories, reformulated into something new? Is AI-generated entertainment original content? Who owns the copyright? Here, Stanford Law Professor Paul Goldstein, a leading copyright law scholar and author, discusses the WGA strike, the growing portent of AI-produced scripts, how AI is challenging the creative process, including in video game production, and how the law is developing in this nascent area.

Stanford Law Professor Paul Goldstein

Can you explain how AI is a threat to writers in the entertainment industry who are striking?

In part, the perceived threat is economic and is in this sense akin to the writers’ concerns over streaming services’ displacement of revenues from the more traditional broadcast, cable, theatrical, and DVD distribution of filmed content—writers’ rooms employing fewer writers over shorter periods, for example, and the decline of residual payments. In the case of generative AI, writers fear that the new technology will reduce their employment opportunities to the occasional rewrite of machine-produced scripts.

But I would be surprised if authorial self-esteem isn’t at work here as well. When movable type and the printing press first threatened the livelihood of fifteenth-century scriveners, I doubt they took it personally. Today’s writers, though, understandably believe that their contributions to a work are unique and inimitable. The thought that their creative spark can be replaced by a few lines of computer code can be morally crushing.

How close is Hollywood to generating the script for a feature-length film through AI?

It will be a long time, if ever, before a film producer can visit an AI platform, type in “Write me a script for a superhero movie,” and get back a full-fledged, producible script. Even the most modest work, if it is to have any popular appeal, can require many thousands of human prompts and creative human editing of the digital responses. That said, I think video game production, where AI is already extensively used, offers an insight into where filmed entertainment may go. The costs of video game production, which can run into tens or even hundreds of millions of dollars, can be reduced by orders of magnitude through AI contributions to character, animation, dialogue, music, and strategies for level of play. And it’s not a long hop from video games to superhero movies.

Where generative AI seems likely to stumble, at least in the near term, is not in scale, but in the nuance, intentionality, and moral sensibility that animate the most enduring cultural works. But, from the viewpoint of popular culture, I think it’s easy to overstate the extent to which today’s or tomorrow’s entertainment consumer attaches a positive value to these qualities.

What about copyright and AI? I understand there was a recent case about this.

This past August, a federal district court in Washington, D.C. upheld the Copyright Office’s rejection of registration for artwork that, according to the applicant, had been produced entirely autonomously, without human intervention. The court agreed with the Office that copyright law has, as a matter of principle, historically required the presence of a “guiding human hand” for protection to attach.

But that was essentially a test case, and the usual AI production will almost inevitably have had a guiding human hand. That’s why the Copyright Office erred when it rejected parts of another application, this one for a comic book, which had in fact entailed countless prompts and repeated editing by a human author. The Office’s rationale was that the results of the prompts were unpredictable. But ask a room full of writers if they can predict what combinations of words and ideas will ultimately land on their computer screen, and I promise you they will all shake their heads. Unpredictability is at the very heart of artistic creation.

I understand some artists, such as Sarah Silverman, have also filed lawsuits seeking to prevent AI companies from using their material to train AI systems. Why might that be necessary? And is there a component of that in the strike?

Yes, there have been a number of such lawsuits and, yes, one of the WGA positions in the current dispute is that writers’ past scripts not be used in the massive datasets employed to train the large language models that generate new content. It’s too early to predict the results in these cases, but the plaintiffs will face a number of hurdles: in some cases, the training technology will read, but not copy, the content in the dataset, and reading alone isn’t copyright infringement; in other cases, a fair use defense may be interposed against the charge of copyright infringement; and in still other cases, the content at issue may have been posted to the internet under terms of use that allow such copying. And there’s always the Copyright Act’s three-year statute of limitations to contend with.

But it is in the nature of copyright disputes that, going forward—and this is the direction in which the WGA demand points—many if not most of these kinds of training uses will become the subject of negotiated licenses with copyright owners.

Isn’t there an international element to this too?

If you want an example of legislative foresight—or improvidence—consider the UK: when it amended its intellectual property laws in 1988, Parliament included a provision effectively granting copyright to computer-generated works. More recently, the EU’s 2019 Copyright in the Digital Single Market Directive addressed the problem of dataset training by carving out from copyright liability two limited exceptions for text and data mining. China and Japan, among other countries, have also staked out positions on the issue.

Is the U.S. Congress planning to legislate in this area?

The U.S. Congress has not yet directly addressed any of these questions, and in my view that’s a good thing, at least so long as the delay indicates an inclination to deliberation, and not a forecast of industry stalemate. This past August, the U.S. Copyright Office issued a Notice of Inquiry seeking public comment on the full range of copyright questions raised by generative AI (if any of your readers are interested, the deadline for comments is 18 October). The Office’s reports and proposals based on such investigations have historically been both thoughtful and politically practicable, and if that is again the case here, we can expect congressional discussions of AI and copyright to be well-informed. However, I wouldn’t begin to look for definitive legislation until three to five years after the Office’s report, at the earliest.

A globally recognized expert on intellectual property law, Professor Paul Goldstein is the author of an influential five-volume treatise on U.S. copyright law and a one-volume treatise on international copyright law. He has authored eleven books, including five novels. Some of his other works include Copyright’s Highway: From Gutenberg to the Celestial Jukebox, a widely acclaimed book on the history and future of copyright, and Intellectual Property: The Tough New Realities That Could Make or Break Your Business. Havana Requiem, his third novel, won the 2013 Harper Lee Prize for Legal Fiction.