Artificial Intelligence and the Law

Legal Scholars on the Potential for Innovation and Upheaval


Earlier this year, in Belgium, a young father of two ended his life after a conversation with an AI-powered chatbot. He had, apparently, been talking to the large language model regularly and had become emotionally dependent on it. When the system encouraged him to commit suicide, he did. “Without these conversations with the chatbot,” his widow told a Brussels newspaper, “my husband would still be here.”

A devastating tragedy, but one that experts predict could become a lot more common.

As the use of generative AI expands, so does the capacity of large language models to cause serious harm. Mark Lemley (BA ’88), the William H. Neukom Professor of Law, worries about a future in which AI provides advice on committing acts of terrorism, recipes for poisons or explosives, or disinformation that can ruin reputations or incite violence.

The question is: Who, if anybody, will be held accountable for these harms?

“We don’t have case law yet,” Lemley says. “The company that runs the AI is not doing anything deliberate. They don’t necessarily know what the AI is going to say in response to any given prompt.” So, who’s liable? “The correct answer, right now, might be nobody. And that’s something we will probably want to change.”

Generative AI is developing at a stunning speed, creating new and thorny problems in well-established legal areas, disrupting long-standing regimes of civil liability—and outpacing the necessary frameworks, both legal and regulatory, that can ensure the risks are anticipated and accounted for.

To keep up with the flood of new large language models like ChatGPT, judges and lawmakers will need to grapple, for the first time, with a host of complex questions. For starters, how should the law govern harmful speech that is not created by human beings with rights under the First Amendment? How must criminal statutes and prosecutions change to address the role of bots in the commission of crimes? As growing numbers of people seek legal advice from chatbots, what does that mean for the regulation of legal services? With large language models capable of authoring novels and AI video generators churning out movies, how can existing copyright law be made current?

Hanging over this urgent list of questions is yet another: Are politicians, administrators, judges, and lawyers ready for the upheaval AI has triggered?

 

ARTIFICIAL AGENTS, CRIMINAL INTENT

Did ChatGPT defame Professor Lemley?

In 2023, when Lemley asked the chatbot GPT-4 to provide information about himself, it said he had been accused of a crime: namely, the misappropriation of trade secrets. Lemley, director of the Stanford Program in Law, Science and Technology, had done no such thing. His area of research, it seems, had caused the chatbot to hallucinate criminal offenses.

More recently, while researching a paper on AI and liability, Lemley and his team asked Google for information on how to prevent seizures. The search engine responded with a link titled “Had a seizure, now what?” and Lemley clicked. Among the answers: “put something in someone’s mouth” and “hold the person down.” Something was very wrong. Google’s algorithm, it turned out, had sourced content from a webpage explaining precisely what not to do. The error could have caused serious injury. (This advice is no longer included in search results.)

Lemley says it is not clear AI companies will be held liable for errors like these. The law, he says, needs to evolve to plug the gaps. But Lemley is also concerned about an even broader problem: how to deal with AI models that cause harm but whose technical workings are locked inside a black box.


Take defamation. Establishing liability, Lemley explains, requires a plaintiff to prove mens rea: an intent to deceive. When the author of an allegedly defamatory statement is a chatbot, though, the question of intent becomes murky and will likely turn on the model’s technical details: how exactly it was trained and optimized.

To guard against possible exposure, Lemley fears, developers will make their models less transparent. Turning an AI into a black box, after all, makes it harder for plaintiffs to argue that it had the requisite “intent.” At the same time, it makes models more difficult to regulate.

How, then, should we change the law? What’s needed, says Lemley, is a legal framework that encourages developers to focus less on avoiding liability and more on building systems that reflect our preferences. We’d like systems to be open and comprehensible, he says. We’d prefer AIs that do not lie and do not cause harm. But that doesn’t mean they should only say nice things about people simply to avoid liability. We expect them to be genuinely informative.

In light of these competing interests, judges and policymakers should take a fine-grained approach to AI cases, asking what, exactly, we should be seeking to incentivize. As a starting point, suggests Lemley, we should dump the mens rea requirement in AI defamation cases now that we’ve entered an era when dangerous content can so easily be generated by machines that lack intent.

Lemley’s point extends to AI speech that contributes to criminal conduct. Imagine, he says, a chatbot generating a list of instructions for becoming a hit man or making a deadly toxin. There is precedent for finding human beings liable for these things. But when it comes to AI, once again accountability is made difficult by the machine’s lack of intent.

“We want AI to avoid persuading people to hurt themselves, facilitating crimes, and telling falsehoods about people,” Lemley writes in “Where’s the Liability in Harmful AI Speech?” So instead of resting liability on intent, which AIs lack, Lemley suggests that an AI company should be held liable for harms when its system was designed without standard risk-mitigation measures.

“It is deploying AI to help prosecutors make decisions that are not conditioned on race. Because that’s what the law requires.”

Julian Nyarko, associate professor of law, on the algorithm he developed

At the same time, Lemley worries that holding AI companies liable when ordinary humans wouldn’t be may inappropriately discourage development of the technology. He and his co-authors argue that we need a set of best practices for safe AI. Companies that follow those best practices would be immune from suit for harms that result from their technology, while companies that ignore them would be held responsible when their AIs are found to have contributed to a harm.

 

HELPING TO CLOSE THE ACCESS TO JUSTICE GAP 

As AI threatens to disrupt criminal law, lawyers themselves are facing major disruptions. The technology has empowered individuals who cannot find or pay an attorney to turn to AI-powered legal help. In a civil justice system awash in unmet legal need, that could be a game changer.

From left: Jessica Shin, JD ’25, and Isabelle Anzabi (BA ’24), students in the Legal Design Lab practicum with its Director Margaret Hagan, JD ’13

“It’s hard to believe,” says David Freeman Engstrom, JD ’02, Stanford’s LSVF Professor in Law and co-director of the Deborah L. Rhode Center on the Legal Profession, “but the majority of civil cases in the American legal system—that’s millions of cases each year—are debt collections, evictions, or family law matters.” Most pit a represented institutional plaintiff (a bank, landlord, or government agency) against an unrepresented individual. AI-powered legal help could profoundly shift the legal services marketplace while opening courthouse doors wider for all.

“Up until now,” says Engstrom, “my view was that AI wasn’t powerful enough to move the dial on access to justice.” That view was front and center in a book Engstrom published earlier this year, Legal Tech and the Future of Civil Justice. Then ChatGPT roared onto the scene—a “lightning-bolt moment,” as he puts it. The technology has advanced so fast that Engstrom now sees rich potential for large language models to translate back and forth between plain language and legalese, parsing an individual’s description of a problem and responding with clear legal options and actions.

“We need to make more room for new tools to serve people who currently don’t have lawyers,” says Engstrom, whose Rhode Center has worked with multiple state supreme courts on how to responsibly relax their unauthorized practice of law and related rules. As part of that work, a groundbreaking Rhode Center study offered the first rigorous evidence on legal innovation in Utah and Arizona, the first two states to implement significant reforms.

But there are signs of trouble on the horizon. This summer, a New York judge sanctioned an attorney for filing a motion that cited phantom precedents. The lawyer, it turns out, relied on ChatGPT for legal research, never imagining the chatbot might hallucinate fake law.

How worried should we be about AI-powered legal tech leading lay people—or even attorneys—astray? Margaret Hagan, JD ’13, lecturer in law, is trying to walk a fine line between techno-optimism and pessimism.

“I can see the point of view of both camps,” says Hagan, who is also the executive director of the Legal Design Lab, which is researching how AI can increase access to justice, as well as designing and evaluating new tools. “The lab tries to steer between those two viewpoints and not be guided by either optimistic anecdotes or scary stories.”


To that end, Hagan is studying how individuals are using AI tools to solve legal problems. Beginning in June, she gave volunteers fictional legal scenarios, such as receiving an eviction notice, and watched as they consulted Google Bard. “People were asking, ‘Do I have any rights if my landlord sends me a notice?’ and ‘Can I really be evicted if I pay my rent on time?’” says Hagan.

Bard “provided them with very clear and seemingly authoritative information,” she says, including correct statutes and ordinances. It also offered up imaginary case law and phone numbers of nonexistent legal aid groups.

In her policy lab class, AI for Legal Help, which began last autumn, Hagan’s students are continuing that work by interviewing members of the public about how they might use AI to help them with legal problems. As a future lawyer, Jessica Shin, JD ’25, a participant in Hagan’s class, is concerned about vulnerable people placing too much faith in these tools.

“I’m worried that if a chatbot isn’t dotting the i’s and crossing the t’s, key things can and will be missed—like statute-of-limitations deadlines or other procedural steps that will make or break their cases,” she says.

“Government cannot govern AI, if government doesn’t understand AI.”

Daniel Ho, William Benjamin Scott and Luna M. Scott Professor of Law

Given all this promise and peril, courts need guidance, and SLS is providing it. Engstrom was just tapped by the American Law Institute to lead a multiyear project to advise courts on “high-volume” dockets, including debt, eviction, and family cases. Technology will be a pivotal part, as will examining how courts can leverage AI. Two years ago, Engstrom and Hagan teamed up with Mark Chandler, JD ’81, former Cisco chief legal officer now at the Rhode Center, to launch the Filing Fairness Project. They’ve partnered with courts in seven states, from Alaska to Texas, to make it easier for tech providers to serve litigants using AI-based tools. Their latest collaboration will work with the Los Angeles Superior Court, the nation’s largest, to design new digital pathways that better serve court users.

 

CAN MACHINES PROMOTE COMPLIANCE WITH THE LAW?

The hope that AI can be harnessed to help foster fairness and efficiency extends to the work of government too. Take criminal justice. It’s supposed to be blind, but the system all too often can be discriminatory—especially when it comes to race. When deciding whether to charge or dismiss a case, a prosecutor is prohibited by the Constitution from taking a suspect’s race into account. There is real concern, though, that these decisions might be shaped by racial bias—whether implicit or explicit.

Enter AI. Julian Nyarko, associate professor of law, has developed an algorithm to mask race-related information from felony reports. He then implemented the algorithm in a district attorney’s office, erasing racially identifying details before the reports reached the prosecutor’s desk. Nyarko believes his algorithm will help ensure lawful prosecutorial decisions.

“The work uses AI tools to increase compliance with the law,” he says. “It is deploying AI to help prosecutors make decisions that are not conditioned on race. Because that’s what the law requires.”
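The article describes the masking only at a high level. As a purely illustrative sketch, the core step might resemble the following Python fragment, which redacts explicit racial descriptors and hypothetical proxy terms from a report narrative before it reaches a prosecutor. The term lists, function name, and regex-based approach are assumptions made for illustration; they are not Nyarko’s actual algorithm or the system deployed in the district attorney’s office.

```python
# Hypothetical illustration only: a simple race-masking pass over a report
# narrative. The deployed algorithm is not described in detail in the article,
# so the term lists and regex approach below are illustrative assumptions.
import re

RACE_TERMS = ["white", "black", "hispanic", "latino", "asian"]   # explicit descriptors
PROXY_TERMS = ["eastside", "westside"]                           # hypothetical neighborhood proxies

def mask_report(text: str) -> str:
    """Replace race descriptors and known proxy terms with a neutral tag."""
    pattern = re.compile(
        r"\b(" + "|".join(RACE_TERMS + PROXY_TERMS) + r")\b",
        flags=re.IGNORECASE,
    )
    return pattern.sub("[REDACTED]", text)

if __name__ == "__main__":
    narrative = "Officers stopped a Black male near the Eastside market."
    print(mask_report(narrative))
    # Officers stopped a [REDACTED] male near the [REDACTED] market.
```

A production system would need far richer sources of race-correlated information (names, neighborhoods, physical descriptors) and would likely rely on a trained model rather than fixed word lists.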

 

GOVERNING AI

While the legal profession evaluates how it might integrate this new technology, the government has been catching up on how to grapple with the AI revolution. According to Daniel Ho, the William Benjamin Scott and Luna M. Scott Professor of Law and a senior fellow at Stanford’s Institute for Human-Centered AI, one of the core challenges for the public sector is a dearth of expertise.

Very few specialists in AI choose to work in the public sector. According to a recent survey, less than 1 percent of recent AI PhD graduates took positions in government—compared with some 60 percent who chose industry jobs. That shortage of the right people, combined with an ailing government digital infrastructure, means the public sector lacks the expertise both to craft law and policy and to use these tools effectively to improve governance. “Government cannot govern AI,” says Ho, “if government doesn’t understand AI.”

From left: Professor Daniel Ho, Professor Julian Nyarko, and Neel Guha, JD/PhD ’24 (BA ’18)

Ho, who also advises the White House as an appointed member of the National AI Advisory Committee (NAIAC), is concerned policymakers and administrators lack sufficient knowledge to separate speculative from concrete risks posed by the technology.

Evelyn Douek, a Stanford Law assistant professor, agrees. There is a lack of available information about how commonly used AI tools work—information the government could use to guide its regulatory approach, she says. The outcome? An epidemic of what Douek calls “magical thinking” on the part of the public sector about what is possible.

The information gap between the public and private sectors motivated a large research team from Stanford Law School’s Regulation, Evaluation, and Governance Lab (RegLab) to assess the feasibility of recent proposals for AI regulation. The team, which included Tino Cuéllar (MA ’96, PhD ’00), former SLS professor and president of the Carnegie Endowment for International Peace; Colleen Honigsberg, professor of law; and Ho, concluded that one important step is for the government to collect and investigate events in which AI systems seriously malfunction or cause harm, such as incidents involving bioweapons risk.

“If you look at other complex products, like cars and pharmaceuticals, the government has a database of information that details the factors that led to accidents and harms,” says Neel Guha, JD/PhD ’24 (BA ’18), a PhD student in computer science and co-author of a forthcoming paper that explores this topic. The NAIAC formally adopted this recommendation for such a reporting system in November.

“Our full understanding of how these systems are being used and where they might fail is still in flux,” says Guha. “An adverse-event-reporting system is a necessary prerequisite for more effective governance.”
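Guha’s analogy to crash and drug-safety databases hints at what a single entry in such a reporting system might capture. The sketch below is a hypothetical record format; the field names and severity labels are assumptions for illustration, not a schema proposed by the RegLab team or adopted by the NAIAC.

```python
# Hypothetical sketch of one record in an AI adverse-event reporting system,
# loosely analogous to crash or drug-safety reports. Field names and severity
# labels are illustrative assumptions, not a recommended or adopted schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIAdverseEvent:
    system_name: str                      # model or product involved
    deployer: str                         # organization operating the system
    event_date: date                      # when the harm occurred
    harm_description: str                 # what went wrong and who was affected
    severity: str                         # e.g. "minor", "serious", "catastrophic"
    contributing_factors: list[str] = field(default_factory=list)
    mitigations_taken: list[str] = field(default_factory=list)

if __name__ == "__main__":
    event = AIAdverseEvent(
        system_name="example-llm-1",
        deployer="Example Health Co.",
        event_date=date(2023, 6, 1),
        harm_description="Chatbot returned unsafe first-aid instructions.",
        severity="serious",
        contributing_factors=["retrieval from an unvetted web source"],
        mitigations_taken=["source filtering", "human review of health queries"],
    )
    print(event)
```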

MODERNIZING GOVERNMENT

While the latest AI models demand new regulatory tools and frameworks, they also require that we rethink existing ones—a challenge when the various stakeholders often operate in separate silos.

“Policymakers might propose something that is technically impossible. Engineers might propose a technical solution that is flatly illegal,” Ho says. “What you need are people with an understanding of both dimensions.”

Last year, in an article, Ho, Christie Lawrence, JD ’24, and Isaac Cui, JD ’25, documented the extensive challenges the federal government has faced in implementing AI legal requirements. That work led Ho to testify before the U.S. Senate on a range of reforms, and it is driving change. The landmark White House executive order on AI adopted these recommendations, and the proposed AI Leadership to Enable Accountable Deployment (AI LEAD) Act would further codify them, including the creation of a chief AI officer, agency AI governance boards, and agency strategic planning. These requirements would help ensure the government is able to properly use and govern the technology.

“If generative AI technologies continue on their present trajectory, it seems likely that they will upend many of our assumptions about a copyright system.”

Paul Goldstein, Stella W. and Ira S. Lillick Professor of Law

Ho, as faculty director of RegLab, is also building bridges with local and federal agencies to develop high-impact demonstration projects of machine learning and data science in the public sector.

The RegLab is working with the Internal Revenue Service to modernize the tax-collection system with AI. It is collaborating with the Environmental Protection Agency to develop machine-learning technology to improve environmental compliance. And during the pandemic, it partnered with Santa Clara County to improve the public health department’s wide range of pandemic response programs.

“AI has real potential to transform parts of the public sector,” says Ho. “Our demonstration projects with government agencies help to envision an affirmative view of responsible technology to serve Americans.”

In a sign of an encouraging shift, Ho has observed growing numbers of computer scientists gravitating toward public policy, eager to help shape laws and policies in response to rapidly advancing AI, as well as law students with deep interests in technology. Alumni of the RegLab have been snapped up to serve at the IRS and the U.S. Digital Service, the technical arm of the executive branch. Ho himself serves as senior advisor on responsible AI to the U.S. Department of Labor. And the law school and the RegLab are front and center in training a new generation of lawyers and technologists to shape this future.

 

AI GOES TO HOLLYWOOD 

Scores of books and movies have been made about humans threatened by artificial intelligence, but what happens when the technology becomes a menace to the entertainment industry itself? It’s still early days for AI-generated novels, films, and other content, but it’s beginning to look like Hollywood has been cast in its own science fiction tale—and the law has a role to play.

“If generative AI technologies continue on their present trajectory,” says the Stella W. and Ira S. Lillick Professor of Law Paul Goldstein, “it seems likely that they will upend many of our assumptions about a copyright system.”

AI is on track to disrupt two main assumptions behind intellectual property law. The first concerns creators: from feature films and video games with multimillion-dollar budgets to a book whose author took five years to complete, the presumption has been that copyright law is necessary to incentivize costly investments. Now AI is upending that logic.

“When a video game that today requires a $100 million investment can be produced by generative AI at a cost that is one or two orders of magnitude lower,” says Goldstein, “the argument for copyright as an incentive to investment will weaken significantly across popular culture.”

The second assumption, resting on the consumer side of the equation, is no more stable. Copyright, a system designed in part to protect the creators of original works, has also long been justified as maximizing consumer choice. However, in an era of AI-powered recommendation engines, individual choice becomes less and less important, and the argument will only weaken as streaming services “get a lot better at figuring out what suits your tastes and making decisions for you,” says Goldstein.

If these bedrock assumptions behind copyright are both going to be rendered “increasingly irrelevant” by AI, what then is the necessary response? Goldstein says we need to find legal frameworks that will better safeguard human authors.

“I believe that authorship and autonomy are independent values that deserve to be protected,” he says. Goldstein foresees a framework in which AI-produced works are clearly labeled as such to guarantee consumers have accurate information.

The labeling approach may have the advantage of simplicity, but on its own it is not enough. At a moment of unprecedented disruption, Goldstein argues, lawmakers should be looking for additional ways to support human creators who will find themselves competing with AIs that can generate works faster and for a fraction of the cost. The solution, he suggests, might involve looking to practices in countries that have traditionally given greater thought to supporting artists, such as those in Europe.

“There will always be an appetite for authenticity, a taste for the real thing,” Goldstein says. “How else do you explain why someone will pay $2,000 to watch Taylor Swift from a distant balcony, when they could stream the same songs in their living room for pennies?” In the case of intellectual property law, catching up with the technology may mean heeding our human impulse—and taking the necessary steps to facilitate the deeply rooted urge to make and share authentic works of art.  SL