Policy Experts from SLS’s RegLab Provide Input to Congress and OMB on Proposed Artificial Intelligence Policies

President Biden recently signed the executive order “Safe, Secure, and Trustworthy Artificial Intelligence,” which sets new standards for AI safety and security and aims to position the United States as a leader in the responsible use and development of AI in the federal government. In response, on November 1, 2023, the Office of Management and Budget (OMB) issued for comment a draft policy titled Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The draft policy provides direction to federal agencies on how to strengthen AI governance, innovation, and risk management. 

Stanford Law School’s Dan Ho, the William Benjamin Scott and Luna M. Scott Professor of Law and director of the Stanford Regulation, Evaluation, and Governance Lab (RegLab), recently co-drafted two letters to the OMB noting how critical the moment is for getting technology policy right and commending the OMB for its “thoughtful approach to balancing the benefits of AI innovation with responsible safeguards,” as stated in one of the letters.

On December 6, Ho also testified before the U.S. House Subcommittee on Cybersecurity, Information Technology, and Government Innovation on matters relating to President Biden’s AI executive order and the OMB draft policy. Ho recommended six actions Congress should take to achieve a robust government AI policy that “protects Americans from bad actors and leverages AI to make lives better.” Among his recommendations: Congress must support policies that give agencies’ Chief AI Officers the flexibility and resources to “not just put out fires, but craft long term strategic plans.” Additionally, Ho said, the government must enable policies, including public-private partnerships, that will allow it to attract, train, and retain AI talent and provide pathways into public service for people with advanced degrees in AI. His full testimony can be viewed here.

Ho serves on the National Artificial Intelligence Advisory Committee (NAIAC) and as a Senior Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He and others at RegLab have worked extensively with government agencies on technology and data science.

Urging ‘Agility and Flexibility’ in AI Policy

The November 30 letter to the OMB applauds the proposed guidance to create Chief AI Officer roles that provide AI leadership in federal agencies, increase technical hiring, conduct real-world AI testing, and allocate resources via the budget process. The letter also outlines why some of the draft policy’s one-size-fits-all “minimum” procedures and practices, applied to all “government benefits or services” programs, may have negative unintended consequences.

“Without further clarification from OMB and a clear mandate to tailor procedures to risks, agencies could find themselves tied up in red tape when trying to take advantage of non-controversial and increasingly commodity uses of AI, further widening the gap between public and private sector capabilities,” the letter authors wrote.

Kit Rodolfa, a research director at RegLab and former director of digital analytics at the White House Office of Digital Strategy, joined Ho on the letter, along with other prominent law and technology leaders, including former SLS professor Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace and former justice of the Supreme Court of California, and Jennifer Pahlka, a former U.S. deputy chief technology officer who helped co-found the U.S. Digital Service and is the author of Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better.

The letter offers a range of suggestions for adjusting the OMB guidance to reduce the burden on government agencies while maintaining AI safety. It calls on the government to meet the needs of responsible AI with “agility and flexibility,” to distinguish between different types of AI, and to let the risks and rewards of particular uses guide their regulatory treatment, and it notes the need to modernize many existing systems.

“We admire the accomplishments of the AI Executive Order and strongly support the core tenets of the draft OMB Memo,” the co-authors said. “Because the memo will be central to how hundreds of government agencies pursue technology modernization, and, in turn, the potential use of a wide range of AI and machine learning approaches, we think it is critical that the framework promotes responsible innovation, while paying attention to the breathtaking variety of government programs and AI.”

Underscoring the Need for an Open-Source Approach

A second letter to the OMB, sent on December 4, focused specifically on government policies relating to open source, a type of software whose source code is publicly available for individuals to view, use, modify, and distribute. Ho co-drafted the letter with Rodolfa and Pahlka, along with Todd Park, former chief technology officer of the United States; DJ Patil, former U.S. chief data scientist; other technology and government services leaders; and Percy Liang, associate professor of computer science at Stanford.

Citing “long-recognized benefits to open source approaches, including the reusability and robustness of code, the enhancement of digital services and federal programs, and the ability for government to develop collaborative approaches with the private sector,” the letter to the OMB underscores the need for an open-source approach to developing AI policy.  

“The OMB policy memo should expressly draw a connection to established federal policy around the benefits of using open source,” the letter writers note, urging the OMB to be clear that government agencies should default to open source when developing or acquiring code.