Is a “Rogue AI” Catastrophe Coming? Stanford’s Mark Lemley on California’s AI Safeguards Bill
The California legislature recently passed SB 1047, a bill focused on AI safeguards. Approved by overwhelming majorities in the California State Senate and Assembly, the bill now sits with Governor Newsom, who has until Monday, September 30, to sign or veto it. Pressure is building: supporters, including Hollywood A-listers and more than 100 current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI, have posted open letters urging Newsom to sign. He must now decide whether the bill, the first of its kind in the country, is a safeguard that might avert an AI catastrophe or an unnecessary burden. Here, Stanford Law’s Mark Lemley, an expert in law and technology, discusses the bill’s pros and cons.

First, can you tell us the main point of the bill?
SB 1047 is designed to reduce the risk of catastrophic “rogue AI” by requiring AI companies to evaluate their AI projects for the risk of enabling nuclear or chemical warfare or other catastrophes, and to build in an “off switch” that could shut down an AI that goes rogue.

Do you think the bill has merits—that it’s a good thing for California?
I think the risk of rogue AI is significantly overstated. A number of people have worried publicly about this, but for the most part I think they are confusing generative AI’s ability to SOUND intelligent with real intelligence. There are certainly AI projects that pose real risks, like autonomous weapons systems. But applying these safeguards to, say, an AI designed to write poetry seems like overkill. It is interesting that Hollywood has gotten involved in this debate, because SB 1047 has essentially nothing to do with the issues it cares about. I think that’s really a reflection of other frustrations people have with generative AI and a desire to regulate it in any way possible.

Why would the Governor decline to sign this bill into law? Some tech leaders have voiced strong opposition to the bill. What are those concerns?
Technology companies are worried that the bill will add a layer of unnecessary regulation to AI companies in California and will drive startups out of state or out of the country. While the Googles of the world can set up compliance and monitoring systems, it is much harder for startups to do so. And the bill will be particularly challenging for companies that release their code as open source, because it is essentially impossible for them to track what changes are made to the code and ensure that an “off switch” is effective. The bill’s authors made some changes to reduce the harm to the open source community, but compliance will still be a significant burden.

Shouldn’t this be dealt with at the federal level?
Regulation should be national (or ideally global), both because the problems are global and because of the risk of multiple, inconsistent laws. But Congress is in gridlock on almost all issues, and there is little prospect of that changing. I think Senator Wiener and the bill’s other sponsors view this as an effort to reduce the risk of AI. But I’m just not sure the benefits are worth the costs.

Mark Lemley is the William H. Neukom Professor of Law at Stanford Law School and the Director of the Stanford Program in Law, Science and Technology. Recent articles include “Where’s the Liability in Harmful AI Speech?” and “Freedom of Speech and AI Output.” He is also a Senior Fellow at the Stanford Institute for Economic Policy Research and affiliated faculty in the Symbolic Systems program. He teaches intellectual property, patent law, trademark law, antitrust, the law of robotics and AI, video game law, and remedies. He is the author of 11 books and 218 articles, including the two-volume treatise IP and Antitrust. His works have been cited 350 times by courts, including 19 times by the United States Supreme Court, and more than 45,000 times in books and academic articles, making him the most-cited scholar in IP law and one of the ten most cited legal scholars of all time.