Human Rights in the AI Supply Chain

Abstract

Artificial intelligence (AI) has taken the world by storm since the launch of ChatGPT in November 2022, with some heralding it as the most significant technology of the century. Counterbalancing excitement for AI’s revolutionary potential, experts from fields as diverse as computer science, sociology, and global health are increasingly expressing their concerns regarding the serious risks AI may pose. While much of this attention focuses on downstream harms associated with AI’s use, comparatively less scrutiny has been given to human rights violations and environmental harms arising from the upstream processes and materials necessary for AI models’ functioning. This Note delves into these upstream harms and, drawing on the concept of the AI supply chain, assesses the ability of existing supply chain due diligence (SCDD) laws to regulate AI companies. Analyzing over a dozen enacted and pending laws from around the world, it argues that while some existing SCDD legislation applies to AI companies, the global legal landscape contains notable gaps that may enable human rights violations to remain unaddressed. The Note concludes with a discussion of proposed solutions and their limitations. Among these, lawmakers should amend or enact legislation to more clearly regulate AI supply chains, while AI companies should proactively self-regulate.

Details

Publisher:
Stanford University, Stanford, California
Citation(s):
  • Jasper D.C. Johnston, Human Rights in the AI Supply Chain, 29 Stan. Tech. L. Rev. 108 (2025).