Discussion (1L): Race, Technology, and Law
Past Offerings
Discussion (1L): Race, Technology, and Law (240T): There is a tendency to think of technology as value-neutral, as a set of essentially objective tools that people use, sometimes for good, sometimes for bad, particularly when questions of race and racial justice are involved. But are the technologies we develop and deploy really neutral? Or might they be shaped by historical prejudices, biases, and social inequalities? To what extent is technology perhaps no less biased and racist than the underlying society in which it exists? We will consider these questions and more in the context of a wide range of technologies, including risk assessment algorithms for bail, predictive policing, and other decisions in the criminal justice system; facial recognition systems; surveillance tools; algorithms for medical diagnosis and treatment decisions; targeted online housing ads that result in "digital redlining"; programs that determine entitlement to credit or public benefits and/or purport to detect fraud by recipients; algorithms used in recruiting and hiring; social-media targeting and disinformation; digital-divide access gaps; and more. We will seek to articulate a framework for understanding how bias in tech might occur and how it might be related to racism and discrimination more broadly in our society. Finally, we will explore how these problems might be addressed, including by regulators, legislators, and courts as well as by technology developers and educators. Elements used in grading: Full attendance, reading of assigned materials, and active participation.
Sections
- 2025-2026 Autumn
Discussion (1L): Race and Technology (240T): There is a tendency to think of technology as value-neutral, as essentially objective tools that can be used for good or evil, particularly when questions of race and racial justice are involved. But the technologies we develop and deploy are frequently shaped by historical prejudices, biases, and social inequalities and thus may be no less biased and racist than the underlying society in which they exist. In this discussion group, we will consider whether and how racial and other biases are present in a wide range of technologies, such as risk assessment algorithms for bail, predictive policing, and other decisions in the criminal justice system; facial recognition systems; surveillance tools; algorithms for medical diagnosis and treatment decisions; targeted online housing ads that result in "digital redlining"; programs that determine entitlement to credit or public benefits and/or purport to detect fraud by recipients; algorithms used in recruiting and hiring; social-media targeting and disinformation; digital-divide access gaps; and more. Building on these various categories and examples of anti-Black and other biases in technology, we will seek to articulate a framework for understanding how they occur in the broader context of racism and discrimination in our society. Finally, we will explore how these problems might be addressed, including by regulators, legislators, and courts as well as by significant changes in mindset and practical engagement by technology developers and educators. Elements used in grading: Full attendance, reading of assigned materials, and active participation.
Sections
- 2024-2025 Autumn
Discussion (1L): Race and Technology (240T): People like to think of technology as value-neutral, as essentially objective tools that can be used for good or evil, particularly when questions of race and racial justice are involved. But the technologies we develop and deploy are frequently shaped by historical prejudices, biases, and inequalities and thus may be no less biased and racist than the underlying society in which they exist. In this discussion group, we will consider whether and how racial and other biases are present in a wide range of technologies, particularly artificial intelligence tools like risk assessment algorithms for bail, sentencing, predictive policing, and other decisions in the criminal justice system; algorithms for medical diagnosis and treatment decisions; AI that screens tenant, credit, or job applications; facial recognition systems; surveillance tools; and many more. Building on these various case studies, we will seek to articulate a framework for recognizing both explicit and subtle anti-Black and other biases in technology and understanding them in the broader context of racism and inequality in our society. Finally, we will discuss how these problems might be addressed, including by regulators, legislators, and courts as well as by significant changes in mindset and practical engagement by technology developers, companies, and educators. Class meets 4:30 PM-6:30 PM on Sept. 21, Oct. 5, Oct. 19, Nov. 2, 2023. Elements used in grading: Full attendance, reading of assigned materials, and active participation.
Sections
- 2023-2024 Autumn