Exploring Government Use of AI

Federal administrative agencies across the United States employ machine learning and artificial intelligence to make decisions. But what happens when agencies can’t explain how those algorithms work? Students in the Law and Policy Lab’s Administering by Algorithm: Artificial Intelligence in the Regulatory State, a policy practicum held last spring, explored this question and what it means for the future when law and computers intersect.

Stanford Law professors David Engstrom, Daniel Ho, and California Supreme Court Justice Mariano-Florentino Cuéllar, along with NYU School of Law professor Catherine Sharkey, brought together 25 burgeoning lawyers, computer scientists, and engineers to probe the technologies government agencies develop and deploy. Findings from the lab will appear in a report due to be published in December 2019 and submitted to the Administrative Conference of the United States (ACUS), which puts forward guidelines outlining how such government agencies should operate.

“We want to understand what is happening now and we also want to get inside agencies and really understand what might be coming down the pike in the next five or ten years,” says Engstrom, associate dean for strategic initiatives and Bernard D. Bergreen Faculty Scholar.

Illustration by Mark Smith

Engstrom sees the policy practicum as a model for a new type of interdisciplinary work that harnesses Stanford’s unique mix of legal and technical expertise. As artificial intelligence and machine learning become more sophisticated, laws will need to adapt to accommodate developing technology. But in some cases, federal agencies can’t understand the “black box” systems they implement to do government work, from allocating benefits to prosecuting violations. Computer scientists themselves may not fully comprehend why AI makes the decisions it does.

“We have a collision between a body of law that says we want agencies to explain why they’re doing what they’re doing and agencies using tools that, by their very structure, are not fully explainable,” says Engstrom.

The course evolved as a way of addressing this clash. “Some of the most interesting conversations have required both a technical grasp and a legal understanding of a problem,” says Ho, the William Benjamin Scott and Luna M. Scott Professor of Law. “Observing that conversation play out among the students has been really rewarding.”

Students in the policy practicum were divided into teams—each a mix of law and computer science students—and given two tasks. First, the teams fanned out to probe the 100 most important federal administrative agencies, including the Environmental Protection Agency, U.S. Securities and Exchange Commission, and Social Security Administration. When they found examples of algorithms involved in decision making, the students worked together to evaluate the technology and judge what category it fell into: Was it AI, machine learning, or something far more basic?

Next, they engaged with the agencies themselves to examine specific applications and understand where the technology might be headed. Students brainstormed how new technological advances might intersect with the law—and how to navigate those collisions. Their results, Engstrom says, will appear in the ACUS report, which they hope will influence future policies governing agencies.

“It is the most collaborative and interdisciplinary class that I’ve been in at Stanford,” says Cristina Ceballos, JD ’19, who is also pursuing a PhD in philosophy. She adds that without the computer science students on her team, she wouldn’t know what questions to ask when speaking with agency representatives. “I think that if you are going to regulate how agencies are going to use AI, you have to have some sense of what the AI is actually doing.”

Urvashi Khandelwal, a computer science student (PhD ’21), says it’s important for people in her field to explore how to deploy AI and machine learning in the real world. “I’ve heard a lot about what machine learning researchers are talking about, but I did not have much perspective on the legal side or the policy side.”

Engstrom, Ho, and Cuéllar hope that their students finished the course with a deeper appreciation of how interdisciplinary work leads to better solutions.

“We have a collision between a body of law that says we want agencies to explain why they’re doing what they’re doing and agencies using tools that, by their very structure, are not fully explainable.”

David Freeman Engstrom, Professor of Law and Bernard D. Bergreen Faculty Scholar

Derin McLeod, JD ’20, says that he appreciates the value of having two disciplines together in the same room when thinking about complex issues. “Going back and forth tracks the challenges that we are trying to grapple with. It’s not just a technical problem of one kind or another; it’s explaining it to other audiences.”

For computer science students, Engstrom hopes they will have a “greater sense of the promise and peril of the tools they develop.”

Indeed, Sandhini Agarwal, a senior majoring in symbolic systems and philosophy and the only undergraduate in the class, recognizes that developing AI and machine learning could have significant consequences. “I’m learning how to ground some of the ideas we build in CS classes and seeing when they are actually being used in the real world and what are some challenges that we face,” Agarwal says.

“The coolest thing about the class is the back and forth between the CS and law students,” Agarwal adds. “I’m excited for more collaborations to take place.”

Erin I. Garcia de Jesus is a science journalist who has written for the Stanford Report.