No. 115: The Regulation of Artificial Intelligence in the European Union Through the GDPR and the AI Act: Bias and Discrimination in AI-Based Decisions and Fundamental Rights
Abstract
This thesis examines the growing importance of discrimination and bias in AI-based decision-making and decision-support systems used by public authorities, and how the European Union's legal framework addresses this issue, with a focus on the GDPR and the AI Act. Because AI technologies are increasingly deployed in public administration and have the potential to influence outcomes in a discriminatory manner, this is a critical issue of our time. The research concentrates on the associated risks to fundamental rights, which can be affected by biased algorithms and opaque decision-making processes, while also acknowledging the considerable potential of these technologies. Different areas of application within the Union are analyzed in more detail: law enforcement, EU border control, social welfare, and the allocation of university places are examined on the basis of concrete use cases. These use cases are assessed against the legislation in focus, thereby identifying the scope of protection against discrimination by AI systems. While the provisions of the GDPR may provide a certain degree of preventive protection against discrimination, they do not offer comprehensive safeguards. Art. 22 GDPR, which generally prohibits automated decision-making, proves insufficient, especially because its scope of application is limited to solely automated processing. In part, the GDPR even hinders effective protection against discrimination because of its strict requirements. Likewise, the AI Act does not consistently provide adequate protection of fundamental rights due to its risk-based regulatory system. Although many of its provisions are aimed at combating discrimination, its design as a product safety law prioritizes system-level security over the protection of individuals, a discrepancy that contrasts with the individual-centric approach of the GDPR. Moreover, the most protective provisions apply only to high-risk AI systems, and the risk classification is largely self-assessed by providers, which further limits the level of protection. The study contributes to the understanding of AI-based decision-making and discrimination. It aligns with a number of critical positions regarding the inadequate protection provided by the legislation examined, but goes beyond them with a nuanced, case-based analysis.