AI In The Regulatory State: Stanford Project Maps The Use Of Machine Learning And Other AI Technologies In Federal Agencies

Details

Publish Date: June 20, 2019
Source: Thomson Reuters

Summary

The vast US federal administrative apparatus, consisting of hundreds of agencies and sub-agencies making thousands of decisions each day, sits atop a formidable pile of data. Some of those agencies are putting machine learning to work to make sense of that data and support their decisions.

Within a new policy lab at Stanford University, called Administering by Algorithm: Artificial Intelligence in the Regulatory State, four professors and a multidisciplinary group of 25 students are taking a deep dive into the scope, prospects, and limitations of using various forms of AI in public administration.

The client for the project is the Administrative Conference of the United States (ACUS), an independent agency charged with recommending improvements to administrative process and procedure. The lab will document its findings in a report to ACUS. “Our hope is that the report will land on the desks of agency heads and agency general counsels and help them think about how to deploy these potentially transformative tools,” says David Freeman Engstrom, Associate Dean for Strategic Planning and Professor of Law at Stanford, one of the project co-instructors.

For example, the Social Security Administration has developed tools to address problems that many adjudicatory agencies face, including backlogs and inter-judge disparities in decision making. On the enforcement side, the US Securities and Exchange Commission (SEC) and the Internal Revenue Service (IRS) are using tools to look for patterns of violations in their data. “These tools help agencies target their resources by conducting what we call predictive targeting to carry out enforcement mandates,” says Daniel Ho, a Stanford law professor and another co-instructor for the lab.
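To make the idea concrete, here is a minimal sketch of what predictive targeting can look like: train a classifier on the outcomes of past cases, then rank incoming cases by estimated risk so investigators review the likeliest violations first. The data, feature names, and model below are hypothetical illustrations; the article does not describe any agency's actual system.

```python
# Minimal sketch of "predictive targeting": score incoming cases with a
# model trained on past enforcement outcomes, then review the riskiest
# cases first. All data and features here are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical features for past cases, e.g. filing anomalies, prior
# complaint counts, deviation from peer-group norms.
n_cases = 5000
X = rng.normal(size=(n_cases, 3))
# Synthetic ground truth: violations correlate with the first two features.
logits = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 1.0
y = (rng.random(n_cases) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Score new, unreviewed cases and surface the top of the queue.
new_cases = rng.normal(size=(100, 3))
risk = model.predict_proba(new_cases)[:, 1]
priority_order = np.argsort(risk)[::-1]  # highest estimated risk first
print("first 10 cases to review:", priority_order[:10])
```

A score like this only prioritizes the review queue; the enforcement decision itself still rests with human investigators.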

How far along are federal agencies in adopting AI? “It depends on the agency,” Engstrom says. “Well-resourced agencies like the SEC have several tools that are already fully deployed; other agencies have none, and many others have projects in the pipeline that aren’t deployed just yet. Overall penetration is still modest, but some of the use cases are substantial, and there’s no question that these tools will significantly alter the way the federal government does its work in the coming years.”

Some agencies are doing state-of-the-art work, while a fair number face resource challenges in recruiting top technologists. Ho also notes that several initiatives were led by entrepreneurs and first movers who pushed their agencies to consider adopting these kinds of techniques.

Applying machine learning and algorithms to support high-stakes decisions creates tensions and trade-offs. “Government use of AI creates a profound collision,” says Engstrom. “On the one hand, administrative law is grounded in transparency, accountability, and reason-giving. When government takes actions that affect our rights, it has to explain why. On the other hand, the AI tools that many agencies use are not, by their structure, fully explainable.”
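To illustrate the collision, compare the opaque ensemble sketched above with a deliberately simple model whose every decision can be printed as an explicit rule. This is a toy contrast, not a description of any agency tool, and the feature names are again hypothetical:

```python
# Sketch of the transparency trade-off: a shallow decision tree can be
# printed as human-readable threshold rules an agency could cite when
# giving reasons, whereas the gradient-boosted model above cannot be
# summarized this simply. Data and feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=3, n_informative=2,
                           n_redundant=0, random_state=0)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every prediction traces to an explicit, citable rule.
print(export_text(tree, feature_names=["filing_anomaly", "prior_complaints",
                                       "peer_deviation"]))
```

The catch is that such transparent models are often less accurate than their black-box counterparts, which is precisely the tension Engstrom describes.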

Both Engstrom and Ho are especially grateful for the mix of technologists and lawyers on the student teams. “This is a truly rewarding teaching model because the lawyers can draw on engineers to develop a deeper understanding of the technology and the legal questions it raises, while the engineers can comprehend how technical solutions can help address the legal challenges and constraints,” Ho says. “In addition, the technologists benefit from seeing how their toolkit can be of use in law and public policy and in the public sector. Seeing the complex social problems agencies like the SSA are grappling with is moving some of them to work on these kinds of problems beyond the course.”
