Sentencing the Brussels Effect: The Limits of the EU’s AI Rulebook
Abstract
The EU Artificial Intelligence Act (AIA) establishes a comprehensive framework for regulating AI, yet its application to criminal sentencing presents significant challenges. While the AIA may prohibit certain AI sentencing systems, such as those relying solely on profiling for criminal risk assessment, most will be classified as “high risk” and subject to extensive compliance obligations. An analysis of the AIA’s framework for prohibited and high-risk AI systems reveals considerable legal uncertainty and questions the act’s overall policy coherence in this sensitive domain. By exploring these regulatory challenges, the article’s authors highlight the inherent tension between broad, horizontal legislation and the nuanced requirements of specific sectors. Using AI in sentencing as a case study, the authors argue that while the AIA sets a crucial precedent, other jurisdictions should carefully evaluate its shortcomings before adopting a similar model.
Scholarly literature has extensively documented the challenges of bias and flawed human judgment in criminal sentencing. In the United States, efforts to standardize sentencing and mitigate these issues led to important reforms, most notably the Federal Sentencing Guidelines.2 The thoughtful and responsible use of AI presents an opportunity to further minimize sentencing disparities.3 However, poorly designed AI tools can perpetuate and entrench biases under the guise of “impartial” technology.4 Moreover, overreliance on AI in sentencing could undermine public trust in the judicial process by removing the crucial human dimension of justice.5 While this article will not exhaustively catalog the risks of using AI in sentencing, our analysis proceeds from the premise that AI, if properly regulated, can enhance the fairness and efficiency of sentencing. Therefore, regulators should aim to foster innovation while diligently guarding against potential harms.
This article scrutinizes the EU Artificial Intelligence Act (AIA),6 the most comprehensive AI regulation to date, to distill crucial lessons for U.S. practitioners and policymakers. While direct U.S. adoption of an AIA-style framework at the federal level or a pronounced “Brussels Effect”7 in this domain appears improbable, examining the AIA remains a valuable exercise. In particular, U.S. state legislators developing AI sentencing laws8 or broader AI statutes (some mirroring AIA principles)9 can glean useful insights by identifying potential conflicts between the AIA’s omnibus approach and the distinct priorities of U.S. sentencing systems. Criminal sentencing occupies a uniquely sensitive intersection of law, technology, and ethics. Our analysis of the AIA’s potential impact on sentencing tools highlights a fundamental challenge: A sweeping regulatory model, if not meticulously designed and soundly drafted, risks both impeding beneficial innovation and failing to safeguard against the nuanced harms inherent in specific, high-stakes applications, especially those concerning individual rights.