RegLab Students Make an Impact with Award-Winning Scholarship on Artificial Intelligence and Public Policy

Two research papers co-authored by Stanford Law School students in the Regulation, Evaluation, and Governance Lab (RegLab) received “best paper” awards at summer 2023 conferences focused on artificial intelligence, trustworthy technology, and public policy.

Daniel Ho, director of SLS’s RegLab and member of the National Artificial Intelligence Advisory Committee.

“These awards illuminate the broad range of AI scholarship, inquiry, and policy recommendations we are engaged in at RegLab and across the law school,” says Daniel Ho, the William Benjamin Scott and Luna M. Scott Professor of Law and director of RegLab. “The law and policy implications of AI are enormous, and the debates around its regulation are fierce. Both of these papers touch on critical areas for the federal government and others to consider.” Ho is a member of the National Artificial Intelligence Advisory Committee, which advises President Biden and the National AI Initiative Office.

The RegLab is an interdisciplinary impact lab that partners with government agencies to modernize government using data science and machine learning.  

In August 2023, Christie Lawrence, JD ’24, and Isaac Cui, JD ’25, received a “best paper” award for “The Bureaucratic Challenge to AI Governance: An Empirical Assessment of Implementation at U.S. Federal Agencies” at the Sixth AAAI/ACM Conference on AI, Ethics, and Society (AIES), held in Montreal, Canada. Ho advised the students and served as a co-author on the paper. “This paper has already had a substantial impact in building government capacity for addressing technology,” Ho says. FedScoop, a publication focused on government technology news, has run a series of investigative pieces based on the work.

In June, Victor Wu, JD ’25, and fellow RegLab co-authors Arushi Gupta and Helen Webley-Brown were honored for “The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government” at the ACM Conference on Fairness, Accountability, and Transparency (FAccT), a cross-disciplinary conference that has become the leading venue for work on algorithmic fairness. Jennifer King, a privacy and data policy fellow at Stanford’s Institute for Human-Centered AI, advised the students and served as a co-author on the paper. Gupta is pursuing a B.A. in political science and an M.S. in computer science at Stanford, and Webley-Brown is a graduate student fellow with RegLab and a PhD student at MIT.

Assessing the National AI Policy

“Christie and Isaac led a Herculean effort to track what was actually happening with regard to how well our national AI policy has worked so far,” Ho says. “The results were concerning, showing wide inconsistencies in basic transparency measures where AI is used in government, and their paper has sparked serious attention and calls for changes so that the government can get its own AI house in order.”

The idea for the paper started with two simple questions: What can the government realistically do to keep up with AI technology, and is it actually doing it?


“Federal agencies have a variety of mandated actions that they are supposed to take on AI, including compiling inventories of how they use AI and publishing a strategic plan about how they intend to regulate AI,” co-author Cui says. Lawrence and Cui catalogued the legal requirements placed on federal government entities and assessed whether the publicly verifiable actions those requirements call for had actually been taken.

“We wanted to show that if agencies were systematically unable to implement these requirements when no one was watching, then it might reflect constraints on the agencies, whether that’s a failure to strategically prioritize AI or a lack of resources necessary to address AI at the agency level,” Cui says. The results, the authors say, point to a lack of senior-level leadership and capacity, at both the agencies and the White House, to implement existing AI obligations, which does not bode well for more ambitious AI policies.

Lawrence, who is concurrently pursuing a master’s degree in public policy at the Harvard Kennedy School, says the paper prompted an almost immediate response from members of Congress, the White House, and federal agencies such as the Office of Management and Budget (OMB), all of which acknowledged the need for more action and transparency. Congress took up the paper’s findings, with senators and House committees inquiring into the status of the requirements. Implementation of the AI use case inventories also improved: more agencies publicly posted them, and OMB revised its guidance while acknowledging challenges with reporting the inventories.

“At a most basic level, our research shows that Congress needs to provide greater resources for agencies to help them get the technical expertise, capacity, and guidance that will be necessary to effectively adopt and regulate AI,” Lawrence says. “This includes the need for more senior leadership at the agency level, such as the creation of Chief AI Officer roles for the relevant agencies,” a proposal included in the bipartisan AI LEAD Act, which moved to full Senate consideration this summer.

The Privacy-Bias Tradeoff: Balancing Data Collection and Privacy

Government agencies must collect demographic data in order to identify and rectify institutional biases, explains Wu, co-author of “The Privacy-Bias Tradeoff.” “For instance, President Biden’s racial justice Executive Order 13985 mandated that federal agencies conduct equity impact assessments of federal programs. To conduct these assessments and diagnose disparities, agencies require demographic data.”

But there’s a sharp conflict with that data collection mandate: “For nearly five decades, privacy concerns have driven the federal government to minimize demographic data collection by creating various legal, data infrastructure, and bureaucratic barriers, like the Paperwork Reduction Act and the Privacy Act of 1974,” Wu says. “Our paper discusses these barriers and then proposes solutions that strike a balance between individual privacy and the need to address systemic bias in government programs.”

One solution the authors propose is to interpret the Privacy Act as permitting inter-agency record linkage for bias assessments while preserving the Act’s privacy protections. This interpretation recognizes that part of an agency’s mandate to serve the public is an obligation to do so equitably, without perpetuating either systemic or individualized inequities.

The paper suggests this can be accomplished by maintaining a firewall between the unit conducting the equity assessment and the unit administering the program. “While the protection of individual privacy is paramount, it should not come at the detriment of both equity assessments and an equitable delivery of public services,” says King.