Accountability in Algorithmic Copyright Enforcement

Abstract

Recent years demonstrate a growing use of algorithmic law enforcement by online intermediaries. Because they facilitate the distribution of online content, online intermediaries offer a natural point of control for monitoring access to illegitimate content, which makes them ideal partners for performing civil and criminal enforcement. Copyright law has been at the forefront of algorithmic law enforcement since the late 1990s, when the Digital Millennium Copyright Act (DMCA) conferred safe harbor protection on online intermediaries that remove allegedly infringing content upon notice. Over the past two decades, the Notice and Takedown (N&TD) regime has become ubiquitous and embedded in the system design of all major intermediaries: major copyright owners increasingly deploy robots to send immense volumes of takedown requests, and major online intermediaries, in response, use algorithms to filter, block, and disable access to allegedly infringing content automatically, with little or no human intervention.

Algorithmic enforcement by online intermediaries reflects a fundamental shift in our traditional system of governance. It effectively converges law enforcement and adjudication powers in the hands of a small number of mega platforms, which are profit-maximizing, and possibly biased, private entities. Yet notwithstanding their critical role in shaping access to online content and facilitating public discourse, intermediaries are hardly held accountable for algorithmic enforcement. We simply do not know which allegedly infringing material triggers the algorithms, how decisions regarding content restrictions are made, who is making such decisions, and how targeted users might affect these decisions. Algorithmic copyright enforcement by online intermediaries offers a valuable case study for addressing these concerns. As we demonstrate, it lacks sufficient measures to assure accountability, namely, the extent to which decision makers are expected to justify their choices, are answerable for their actions, and are held responsible for their failures and wrongdoings.

This Article proposes a novel framework for analyzing accountability in algorithmic enforcement based on three factors: transparency, due process, and public oversight. It identifies the accountability deficiencies in algorithmic copyright enforcement and maps the barriers to enhancing accountability, including the technical barriers of non-transparency and machine learning, legal barriers that disrupt the development of algorithmic literacy, and practical barriers. Finally, the Article explores current and possible strategies for enhancing accountability by increasing public scrutiny and promoting transparency in algorithmic copyright enforcement.

Details

Publisher:
Stanford University, Stanford, California
Citation(s):
  • Maayan Perel and Niva Elkin-Koren, Accountability in Algorithmic Copyright Enforcement, 19 Stanford Technology Law Review 473 (2016).