No. 141: Beyond Accuracy: Regulating Hallucination Risks in Generative AI in the EU and US

Abstract

Hallucination in generative AI is often treated as a technical failure to produce factually correct output. Yet this framing underrepresents the broader significance of hallucinated content in language models, which may appear fluent, persuasive, and contextually appropriate while conveying distortions that escape conventional accuracy checks. This paper critically examines how regulatory and evaluation frameworks have inherited a narrow view of hallucination, one that prioritizes surface verifiability over deeper questions of meaning, influence, and impact. We propose a layered approach to understanding hallucination risks, encompassing epistemic instability, user misdirection, and social-scale effects. Drawing on interdisciplinary literature, this paper develops a taxonomy of hallucination and demonstrates the limitations of using accuracy alone as a benchmark. It then compares how the EU and the United States address these risks. In the EU, instruments such as the AI Act, GDPR, and DSA place accuracy at the heart of their compliance logic, incentivizing optimization of surface-level correctness while overlooking deeper epistemic and systemic harms. In the US, a fragmented patchwork of consumer protection, competition, and sectoral rules offers more flexibility but lacks coherent safeguards against hallucination-related harms. This comparative analysis shows how both approaches, though divergent in form, risk amplifying the same epistemic vulnerabilities: misplaced user trust, subtle manipulation, and epistemic convergence that undermines pluralism. Rather than focusing on factual precision alone, we argue for regulatory responses that account for language’s generative nature, the asymmetries between system and user, and the shifting boundaries between information, persuasion, and harm.

Details

Author(s):
Zihao Li, Weiwei Yi & Jiahong Chen
Publish Date:
October 29, 2025
Publication Title:
TTLF Working Papers
Publisher:
Stanford Law School
Format:
Working Paper
Citation(s):
  • Zihao Li, Weiwei Yi & Jiahong Chen, Beyond Accuracy: Regulating Hallucination Risks in Generative AI in the EU and US, TTLF Working Papers No. 141, Stanford-Vienna Transatlantic Technology Law Forum (2025).
Related Organization(s):
Stanford-Vienna Transatlantic Technology Law Forum