Advancing Science- and Evidence-Based AI Policy
Abstract
Policy-makers around the world are grappling with how to govern increasingly powerful artificial intelligence (AI) technology. Some jurisdictions, like the European Union (EU), have made substantial progress enacting regulations to promote responsible AI. Others, like the administration of US President Donald Trump, have prioritized “enhancing America’s dominance in AI.” Although these approaches appear to diverge in their fundamental values and objectives, they share a crucial commonality: Effectively steering outcomes for and through AI will require thoughtful, evidence-based policy development (1). Though it may seem self-evident that evidence should inform policy, this is far from inevitable in the inherently messy policy process. As a multidisciplinary group of experts on AI policy, we put forward a vision for evidence-based AI policy, aimed at addressing three core questions: (i) How should evidence inform AI policy? (ii) What is the current state of evidence? (iii) How can policy accelerate evidence generation?
AI policy should advance AI innovation by ensuring that its potential benefits are responsibly realized and widely shared. To achieve this, AI policy-making should place a premium on evidence: Scientific understanding and systematic analysis should inform policy, and policy should accelerate evidence generation. But policy outcomes also reflect institutional constraints, political dynamics, electoral pressures, stakeholder interests, the media environment, economic considerations, cultural contexts, and leadership perspectives. Adding to this complexity is the reality that the broad reach of AI may mean that evidence and policy are misaligned: Although some evidence and policy squarely address AI, much more only partially intersects with AI. Well-designed policy should integrate evidence that reflects scientific understanding rather than hype (2). A growing number of efforts address this problem, often by either (i) contributing research on the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks. This paper tackles the hard problem of how to optimize the relationship between evidence and policy (3) to address the opportunities and challenges of increasingly powerful AI.