Law, Disrupted

How Powerful AI Tools Are Transforming Legal Practice and Education


Is it possible to download a senior law firm partner’s experience and activate it in an AI agent? What if that rich trove of knowledge—gathered over a career—could be put into a program and used as a tool to hone legal practice and train new associates?

A team at Stanford Law School is doing just that. The “personas project” is one of several efforts by the Legal Innovation through Frontier Technology Lab, or liftlab, exploring how AI and other emerging technologies can help attorneys better understand, teach, and deliver the practice of law. Megan Ma, liftlab’s executive director, describes the personas research as “downloading their brains,” a process she carries out during an intensive session.

Liftlab, launched in 2025, is among the first academic initiatives in legal AI to unite research, prototyping, and real-time collaboration with law firms and industry developers. The nimble team already has several projects launched, in user testing, or in development. And it is just one of the efforts at Stanford Law School that bring law firms and legal tech companies together. Here, faculty and program teams are leading the way in AI research through interdisciplinary exploration and industry collaboration, examining the smart and practical application of this powerful new technology in legal practice, developing tools, and training students to use them.

Stanford Law scholars have been engaged in explorations into the impact of AI tools on many areas of law, including intellectual property, First Amendment, corporate governance, and the delivery of health care. Teams at the Regulation, Evaluation, and Governance Lab (RegLab), the Legal Design Lab, and the Deborah L. Rhode Center on the Legal Profession, for example, are researching at the intersection of AI and law on a range of issues, including access to justice, policy development, and advancing AI in legal institutions with a human-centered design. And Stanford’s CodeX, the Center for Legal Informatics, has been on the cutting edge of computational law, legal tech, and innovation for decades, and its FutureLaw conference has gathered leaders in the field since 2005. These various programs, among others, are now supported through Stanford Law’s new AI Initiative—a hub that will help to align and amplify their work.

The dramatic uptick in Stanford Law faculty studying AI—and using AI for their research—is mirrored in legal practice, where investment in and use of AI is accelerating.

“I use AI every day. It’s definitely making me more effective as an attorney,” says Emily Kapur, JD ’15 (PhD ’17, BA ’08), a partner at Quinn Emanuel. “I also think it creates opportunities for junior associates to charge up the learning curve.”


The incorporation of AI in legal practice, while occurring at a variable pace across different firms and organizations, received its first big impetus when ChatGPT took the world by storm in late 2022. The release of new versions of Claude Code in 2025, which greatly facilitated the building of legal AI agents, has accelerated this transformation.

“If you resist technology, you make the mistake of falling behind quickly,” says Jeff Karpf, JD ’94, managing partner at Cleary Gottlieb—a law firm that supported liftlab early on.

While the full ramifications of legal AI are yet to be determined, the pace of change is astonishing, with AI already embedded in workspaces and improving quickly. Legal scholars at Stanford are keeping their sights set on the data, with research teams diving into big-picture questions and building solutions as they look at the challenges and opportunities that AI presents to the practice of law—and how legal education is changing to meet both.


“Stanford Law School sits at the intersection of higher education, the legal profession, and applied policy across a full range of domains. As a result, we incorporated AI early on in what and how we teach and research. Since then, AI technology has hit several inflection points that have spurred us to make significant investments and accelerate this transformation,” says George Triantis, JSD ’89, Richard E. Lang Professor of Law and Dean of Stanford Law School. “To lead in this process, we must also provide fora for exchanges of ideas and techniques with the profession, as well as ensure that our graduates will be proficient in the responsible use of AI technologies.”

Here, we offer a few examples of Stanford Law faculty and student collaboration with industry and law firms to study how this new technology is changing law firm practice and education.

Legal judgment and AI

Julian Nyarko is a professor of law, co-chair of Stanford Law’s AI Initiative, a senior fellow at Stanford’s Human-Centered Artificial Intelligence (HAI), and the faculty director of liftlab—where he and Ma have put together an interdisciplinary team of lawyers, computer scientists, and linguists. It’s a small but highly productive group, with projects including Legal Personas, various evaluations of AI models, AI-driven simulation training, and contract risk assessment.

Megan Ma, executive director of liftlab; Professor Nathaniel Persily, JD ’98, co-chair of Stanford Law AI Initiative; and Professor Julian Nyarko, co-chair of Stanford Law AI Initiative and faculty director of liftlab

Nyarko is a contract law expert and self-taught in computer programming and big data research—skills he picked up at Berkeley while working on his PhD in jurisprudence and policy. Since joining the Stanford Law faculty in 2019, he has dug deep into large language models and new computational methods to study questions of legal and social scientific importance.

Nyarko’s early scholarship delved into the practice of contract design and drafting and, more recently, algorithmic biases. His 2024 article “What’s in a Name? Auditing Large Language Models for Race and Gender Bias” revealed troubling patterns in how LLMs respond to prompts containing names associated with different races and genders. In a more recent computational study, “Breaking Down Bias,” Nyarko and his co-authors found that racial and other biases exhibited by LLMs can be pruned away, but because the biases are highly context-specific, there are limits to holding AI model developers liable for harmful outputs.

Some of Nyarko’s work with liftlab extends his research on contracts. He describes an early paper on the quality of expert annotations on legal contracts that highlights the challenges for AI application in the legal profession, where subjectivity is the norm. Nyarko and Ma asked 12 experienced attorneys to review and identify problems with a dozen contracts. There was perfect overlap, with all 12 describing the contracts as bad, but there was no overlap as to why.

Nyarko explains that for simple tasks, like spam detection, there is a relatively clear understanding about the problem to be solved. “But if you develop an AI tool to write a contract, it turns out that every lawyer thinks about contracts differently,” he says. “That has implications for how you develop the AI system.”


Contracts aside, most medium- to high-level legal work is bespoke and its value largely subjective.

“To what extent should AI systems pool knowledge from different legal practitioners and to what extent should they be customized toward the individual? That’s one interesting aspect of our research—looking at subjectivity and shared standards of quality in law,” says Nyarko.

Benchmarking AI

As the legal profession tests and adapts AI tools, questions about the reliability and efficacy of the technology have come into sharp focus. “At this time, there is no doubt about the need for humans to be in the loop for AI-generated legal services. The critical questions concern the appropriate human role today and how that will change in the next few years as AI agents improve,” remarks Triantis.

RegLab fellows Emily Robitschek (l) and Ally Casasola (BS/MS’25) (r) with Professor Daniel Ho, RegLab faculty director

Daniel Ho, William Benjamin Scott and Luna M. Scott Professor of Law, has lent his expertise to various arms of the government, including by serving on the Intelligence Advisory Committee and advising the White House on AI policy, and he was special advisor to the ABA Task Force on Law and Artificial Intelligence. As the founding director of RegLab, which partners with government agencies to explore how AI can improve policy, services, and legal processes, Ho has focused on identifying AI risks and assessing the quality of tools. According to Ho and Nyarko, the profession is largely flying blind in this area, with law firms investing in the new technology without independent, third-party review.

“The big open question is where lawyers can utilize these tools to garner the gains and reduce the risks,” says Ho.

That there aren’t definitive answers has spurred new research by Ho, Nyarko, Neel Guha, JD ’25 (PhD ’27, BS ’18), and several co-authors that points to the urgent need for a clear-eyed assessment of this powerful technology.

The team first looked at the lack of systematic performance evaluation, or benchmarking, for legal AI tools—particularly in comparison with other AI adopters, such as those in medicine and software engineering.

“Benchmarking is necessary because legal AI today lacks legibility. That is to say, there is little public information about the performance of commonly used legal AI systems. Legal AI’s illegibility threatens responsible deployment, stymies legal education efforts, slows innovation, and results in poor governance of the technology,” they write in their forthcoming article “There’s No Free Benchmark: An Institutional View of Legal AI Benchmarking.”

Because benchmarking legal AI requires an attorney to evaluate and validate how well a tool performs, it is also expensive. “That all takes time and judgment that is not currently automated in the same way as benchmarking tasks have been in more conventional domains,” says Ho.

In addition to the high cost of benchmarking, the lack of transparency in the tools presents a formidable challenge. “They often don’t publish the underlying data, so it’s hard for law firms to figure out which tool to adopt if they don’t have an independent audit to look at,” says Nyarko.

While Ho and Nyarko believe that benchmarking legal AI tools is a shared responsibility, they see a vital role for profit-neutral university research teams.

“We can play a role because we’re incentivized differently,” says Nyarko. “Academic integrity is the most valuable thing we have. But we wouldn’t want to do this in isolation. These are tools being used in practice, so you’d want the end users in the evaluation process.”

A bot for that

Three Crowns is one of several law firms that have been working with liftlab. “There appeared to be an absolute tsunami of legal tech coming by 2023-24, but we couldn’t find tools to facilitate training of associates and law students,” says Three Crowns CEO Hugh Carlson. He and Ma were on a CodeX panel back in 2022 and began whiteboarding ideas for developing an LLM tool to train associates.


That early connection has led to continued collaboration. The end products were developed with open-source code and publicly available data, so they can be made available to law schools.

“One tool is like a flight simulator, which can take law students or new associates through various legal processes,” says Ma.

Stanford Law School programs and centers with an AI focus are grounded in academic research, though some of the prototypes developed from that research can lead to practical outcomes. And collaboration is essential for the scholarship to advance. Some of these tools are now in testing with law firms. The feedback has been positive—reinforced when one project, an AI-powered cross-examination simulator called Atelier, co-developed with Three Crowns, was recognized in December at the 2025 Financial Times awards ceremony celebrating legal innovation.

Policy, regulation, and who can practice

Nora Freeman Engstrom, JD ’02, Ernest W. McFarland Professor of Law, and David Freeman Engstrom, JD ’02, LSVF Professor in Law (photo by Lavette Studios)

The audience at this year’s National ABS (Alternative Business Structures) Law Firm Association conference in Arizona—where David Freeman Engstrom, JD ’02, LSVF Professor in Law, gave a keynote speech—was unusual. It was largely made up of private equity and venture capital investors—not lawyers. It raised the question: Why would they want to brave the September heat for what is typically a meeting of law nerds? It didn’t take Engstrom long to come up with an answer.

Engstrom co-directs the Rhode Center with Nora Freeman Engstrom, JD ’02, Ernest W. McFarland Professor of Law. His wide-ranging scholarship includes law and lawyering in the age of AI, with projects that span legal tech used by lawyers, direct-to-consumer AI tools to assist those without lawyers, and a unique collaboration with the Los Angeles Superior Court, the nation’s largest, to incorporate AI into court operations. He has published numerous articles on these issues and is the editor of two recent books, Legal Tech and the Future of Civil Justice and, with Nora, Rethinking the Lawyer’s Monopoly: Access to Justice and Future of Legal Services. He also co-founded the Filing Fairness Project, an ambitious collaboration with six states and technology providers to simplify filing systems and eliminate access barriers. And a series of reports, including the recent “Legal Innovation After Reform: Five Years of Data on Regulatory Change,” looks at efforts to relax regulation of the practice of law in Arizona and Utah to make more room for new human- and software-based delivery models.

“Of the 350 people in attendance in Phoenix, 50 were owners of law firms, and 300 were there because they’re thinking about buying into the business of law,” he says.

In addition to embracing new financing models, some law firms are setting their businesses apart by embracing AI—often through bespoke, in-house tools.

Cleary Gottlieb acquired a legal AI company to develop internal solutions, which are now opening new avenues of business for the firm. “Our clients see what we’ve done with AI, how we’re using it, and they ask us to help them do the same,” says Karpf.


Others are pushing the model further. Ryan Daniels, JD ’20, is co-founder and CEO of Crosby, which launched in 2024 as the “agentic law firm built for execution.” This hybrid AI law firm is embracing legal AI not as a supporting tool but as a central driver of speed and efficiency.

Crosby is actually two companies: Crosby Legal PLLC, a traditional law firm staffed with licensed attorneys, and Crosby Legal Inc., a parallel legal AI company that runs the firm’s technology platform. “We combine the speed and intelligence of AI with the safety of lawyers-in-the-loop to review contracts in under an hour,” says Daniels. “The idea is to simulate and predict the entire exchange between legal parties—at least for now in straightforward commercial transactions like software licensing agreements. But, as models improve, we hope to do this for more complex agreements that currently take months or even years to negotiate.”

These developments sit at the intersection of two conversations that have long proceeded on separate tracks: the push to expand access to legal services and the introduction of AI into legal work. At the center of that overlap are long-standing regulatory constraints—particularly the rules governing who may provide legal advice and who may own (and thus capitalize) law firms.

“The industrial organization of law is poised for major change,” Engstrom says. “Once you allow capital into the system and pair it with AI, you open the door to legal services being organized very differently from the traditional model of lawyers selling time inside a partnership. All of this raises a host of knotty questions. Just how far will judges and regulators permit new AI-based delivery models to cross into UPL territory? How reliably will law firms and other entities be able to extract lawyer knowledge and embed it in automated workflows and platform-based solutions? And who—as between haves and have-nots—will benefit most?”

These are some of the issues that the Rhode Center, joining with liftlab and others at Stanford Law, will focus on in more depth with the launch of a new Future of the Law Firm project. The project will use a mix of interdisciplinary research, strategic partnerships, and industry engagement to illuminate the convergence of capital, technology, and talent that is remaking the legal services landscape.

In addition to her work co-directing the Rhode Center, Nora Freeman Engstrom is a nationally renowned torts scholar who is an advisor to the American Law Institute’s project addressing civil liability for artificial intelligence. She sees strict regulations as exacerbating this inequity. “The same unauthorized practice of law rules that stunt human non-lawyers from providing legal assistance make it a crime for a tech tool to provide legal assistance,” she says. “If we are going to really harness the power of technology to address the access-to-justice crisis, we’re going to have to relax the rules that govern the profession.” SL