A Square Peg, a Round Hole, and the Future of Antidiscrimination Law
A job applicant is screened out by an automated hiring system. A prosecutor relies on an AI tool to assess a defendant’s likelihood of recidivism. A public agency adopts a system meant to make decisions more fairly, only to discover that the tool may be replicating old patterns of discrimination in new ways.

Those were just a few of the scenarios that animated Antidiscrimination Law x AI, a recent two-day convening at Stanford Law School. The conference brought interdisciplinary insights to some of the thorniest questions facing antidiscrimination law, chief among them: How should law and public policy respond when discrimination increasingly emerges not from human actors with intent, but from data, models, and systems that act with some degree of autonomy?
The conference was co-presented by the Stanford Center for Racial Justice; Harvard Law School’s Charles Hamilton Houston Institute for Race & Justice, led by Professor Guy-Uriel Charles; and GW Law’s Multiracial Democracy Project, led by Professor Spencer Overton. Conducted under the Chatham House Rule to encourage forthright dialogue, it brought together legal scholars, computer scientists, technology industry leaders, civil rights advocates, and policymakers.
The participants and panelists repeatedly returned to the fact that antidiscrimination law and AI are moving along divergent trajectories. As one commentator put it: “The mandate is to try to fit the square peg of AI into the round hole of antidiscrimination law.”

At issue is a fundamental challenge to some of antidiscrimination law’s core assumptions. The traditional questions—who made a certain decision, what was their intent, was a protected group treated differently?—are increasingly hard to answer when an action or “decision” is the product of AI. Where does intent reside? How does a person denied a job, a loan, or housing prove disparate impact—a longstanding pillar of antidiscrimination doctrine that targets facially neutral practices with unjustified discriminatory effects—when the system producing the outcome may operate through layers of data that are difficult to untangle?
A number of Stanford Law faculty members helped shape the conversation, among them Ralph Richard Banks, the Jackson Eli Reynolds Professor of Law and the faculty director of the Stanford Center for Racial Justice, who was a key organizer of the convening; Professor Julian Nyarko, faculty co-director of the Stanford Law AI Initiative and faculty director of the Legal Innovation through Frontier Technology Lab (liftlab), who helped bridge conversations between computer science and antidiscrimination law; and Nathaniel Persily, the James B. McClatchy Professor of Law and faculty co-director of the Stanford Law AI Initiative, who moderated the opening panel.
The Stanford Law Review is preparing a special online issue focused on topics related to the conference.
“The conference was a triumph in that it got the different communities to talk to each other, something that rarely happens. Indeed, I felt my worlds colliding, as my friends in the civil rights community were sparring with my friends who work for the AI companies. We need more forums like these.”
– Nathaniel Persily, James B. McClatchy Professor of Law and faculty co-director of the Stanford Law AI Initiative
Here, Banks, Nyarko, and Persily comment on some of the issues raised during the event.
Why was this an important moment to bring people together around antidiscrimination law and AI?
Ralph Richard Banks: One impetus was the recognition that this is not a problem lawyers and policymakers can solve by themselves, and it is not a problem technologists can solve by themselves either. AI governance is increasingly being understood, correctly, as something that requires technical expertise, legal judgment, institutional knowledge, and attention to civil rights all at once. That is especially true when these systems are used in areas like employment, housing, education, public safety, and criminal justice, where the stakes are high and the harms may be difficult to detect or articulate in conventional legal terms.
From the perspective of the Stanford Center for Racial Justice, that made this a particularly important time to convene an interdisciplinary group. The center is focused on how structures and institutions can address inequality, and AI is an important part of that story. We wanted to create a space where people could think together about those risks now, before the gap between technical development and legal understanding grows even wider.
Professor Persily, much of your work in the realm of AI has focused on democracy and elections. How has thinking about AI at that level shaped the way you see its implications for antidiscrimination law?
Every question related to discrimination and AI is also a democracy question. In both contexts, we worry about how this powerful and uncertain technology will affect the equal status of persons under the law. Questions related to AI bias ultimately come down to whether we should trust the technology (and the companies that develop it) to perform certain tasks in a way that will not unduly disadvantage discrete groups of people. The technology is inherently biased, in that it reflects training data that offers a warped depiction of the human community. Those biases are features, not bugs. The task for lawyers is to promote systems that account for these biases and build in technical, legal, and governance guardrails that counteract these inescapable features of the technology. In this way, lawyers and governance experts can ensure that AI is a technology that bolsters, rather than undermines, democracy.
Professor Nyarko, your research has shown how large language models can absorb and reproduce racial and gender disparities in surprisingly subtle ways, including through something as seemingly innocuous as a person’s name. What does that kind of finding tell us about the challenge antidiscrimination law faces when bias is embedded in the underlying logic of an AI system?
One lesson is that bias in AI often operates through association rather than instruction. These systems are trained on enormous amounts of human-generated data, and that data reflects all kinds of disparities in the world. So even when a model is not explicitly told to treat people differently by race or gender, it can still learn patterns that reproduce unequal outcomes. A name may look like a neutral input to a user, but to the model it can function as a proxy for a whole set of social and economic associations. That creates a real challenge for antidiscrimination law, because the law has traditionally been more comfortable identifying discrimination when there is a human actor, a discernible motive, and a relatively legible decision.
With AI systems, the problem is often more diffuse. The disparities may come from the training data, the structure of the model, the way the system is fine-tuned and aligned with human preferences, the context in which it is deployed, or some combination of all of those things. In that setting, it becomes much harder to ask the conventional legal question, “Who intended what?” But that does not make the resulting disparities any less important. In some ways it makes them more important, because they can be scaled very quickly and embedded in systems people use every day without recognizing what is happening underneath.
That is one reason I think auditing is so important. If we cannot always open up a system and inspect every layer, we need rigorous ways of testing what kinds of outputs it produces under controlled conditions. That is what audit studies allow us to do. They do not solve the normative question of what fairness should mean in every context, but they can help us identify when a model is systematically treating similarly situated people differently. For law and policy, that kind of evidence can be very valuable.
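To make the audit idea concrete, here is a minimal sketch of a paired (“correspondence”) audit of the kind Professor Nyarko describes, not any particular study’s methodology. Everything in it is an assumption for illustration: `query_model` is a hypothetical placeholder for whatever system is under test, and the resume template and name pairs are invented.

```python
# A minimal sketch of a paired ("correspondence") audit, assuming a
# generic scoring system under test. `query_model` is a hypothetical
# placeholder, not a real API: swap in a call to whatever model or
# tool is being audited.

import statistics

RESUME_TEMPLATE = (
    "Applicant: {name}\n"
    "Experience: 5 years as a staff accountant.\n"
    "Education: B.S. in Accounting.\n"
    "Rate this applicant from 0 (reject) to 1 (strong hire)."
)

# Pairs of names chosen so that, within each pair, everything is held
# constant except the demographic association of the name.
NAME_PAIRS = [
    ("Emily Walsh", "Lakisha Washington"),
    ("Greg Baker", "Jamal Robinson"),
]

def query_model(prompt: str) -> float:
    """Hypothetical placeholder for the system being audited.
    Replace with a real call that returns a numeric score."""
    raise NotImplementedError

def paired_audit(query=query_model, pairs=NAME_PAIRS,
                 template=RESUME_TEMPLATE):
    """Score two otherwise-identical inputs per pair and record the
    within-pair gap; systematically nonzero gaps suggest the name is
    functioning as a proxy for group membership."""
    gaps = []
    for name_a, name_b in pairs:
        score_a = query(template.format(name=name_a))
        score_b = query(template.format(name=name_b))
        gaps.append(score_a - score_b)
    return statistics.mean(gaps), gaps
```

In a real audit, the pairs would number in the hundreds or thousands, and the gaps would be subjected to a formal statistical test before drawing any conclusion about systematic differential treatment.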
Where do you see disparate impact law headed, especially in light of the Trump administration’s moves against it?
Ralph Richard Banks: Disparate impact is still the law, and that is the first point worth making clearly. No executive order can, by itself, change that. The current administration is hostile to disparate impact and may decline to enforce the law vigorously, but that is different from saying the doctrine has disappeared. The doctrine remains a central part of American antidiscrimination law.
I see disparate impact becoming more important in the future, as the advance of AI leaves the other pillars of antidiscrimination law—the ideas of intent and formal classification—increasingly inapplicable.
AI systems typically do not rely on prohibited classifications, and they lack any intent to discriminate. Yet they may operate in ways that are functionally equivalent to formally discriminatory systems. Disparate impact is one of the few legal tools we have that is capable of seeing that kind of harm. It asks whether a practice that looks neutral is nonetheless imposing unjustified burdens on protected groups, and whether there are less discriminatory alternatives that would serve the same legitimate end. That is exactly the kind of inquiry AI ought to trigger.
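One concrete, if simplified, version of that outcome-oriented inquiry is the EEOC’s long-standing “four-fifths rule,” which treats a group’s selection rate below 80 percent of the most-favored group’s rate as preliminary evidence of adverse impact. The sketch below shows that arithmetic only; the group labels and numbers are invented, and the ratio is a screening heuristic, not the full legal analysis.

```python
# A minimal sketch of the selection-rate comparison at the heart of
# adverse impact screening. The figures are invented for illustration;
# the four-fifths ratio is a screening heuristic, not the full legal
# analysis.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """`outcomes` maps group -> (number selected, number of applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the most-favored
    group's rate; values below 0.8 flag potential adverse impact
    under the four-fifths rule."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Illustrative numbers: group_a is selected at 60%, group_b at 40%.
example = {"group_a": (48, 80), "group_b": (24, 60)}
print(impact_ratios(example))
# {'group_a': 1.0, 'group_b': 0.666...}  -> 0.67 < 0.8 flags group_b
```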
This is not to say that we can simply or easily apply existing disparate impact doctrine to AI. Rather, in light of AI, we need to develop new and more nuanced ways of thinking about impact- or outcome-oriented understandings of discrimination. Those approaches will need to consider both the idea of bias (which has long pervaded antidiscrimination law) and the idea of predictive accuracy (which has featured much less prominently in the law).
I am unsettled and frankly worried about the societal dislocations that AI will bring. Yet I am also thrilled about the possibilities created during this period of transformative societal change.
Professor Persily, the conference brought together lawyers, technologists, policymakers, and civil rights scholars who do not often share the same vocabulary. Where do you see the biggest gaps in understanding between those worlds? Conversely, where do you see the biggest overlaps, and what gives you hope going forward?
The conference was a triumph in that it got the different communities to talk to each other, something that rarely happens. Indeed, I felt my worlds colliding, as my friends in the civil rights community were sparring with my friends who work for the AI companies. We need more forums like these. In addition to the siloing of expertise, I would say that one of the main challenges obstructing cross-pollination between these two communities is the speed of technological development. Anyone working at the intersection of law and AI struggles to keep up with the latest innovations from the AI developers, let alone the legal implications of those developments. The greatest impact of civil rights lawyers might not be felt in offering traditional legal advice to developers or suing them for deploying discriminatory products. Rather, we need civil rights lawyers “in the room” with the engineers as the technology develops, so that the last century of antidiscrimination thinking gets baked into products rather than playing catch-up once the tech is unleashed on the world.
About Stanford Law School
Stanford Law School is one of the world’s leading institutions for legal scholarship and education. Its alumni are among the most influential decision makers in law, politics, business, and high technology. Faculty members argue before the Supreme Court, testify before Congress, produce outstanding legal scholarship and empirical analysis, and contribute regularly to the nation’s press as legal and policy experts. Stanford Law School has established a model for legal education that provides rigorous interdisciplinary training, hands-on experience, global perspective, and a focus on public service.