Past Offerings
Discussion (1L): Race and Technology (240T): There is a tendency to think of technology as value neutral, as a set of essentially objective tools that can be used for good or evil, particularly when questions of race and racial justice are involved. But the technologies we develop and deploy are frequently shaped by historical prejudices, biases, and social inequalities, and thus may be no less biased and racist than the underlying society in which they exist. In this discussion group, we will consider whether and how racial and other biases are present in a wide range of technologies, such as risk assessment algorithms for bail, predictive policing, and other decisions in the criminal justice system; facial recognition systems; surveillance tools; algorithms for medical diagnosis and treatment decisions; targeted online housing ads that result in "digital redlining"; programs that determine entitlement to credit or public benefits and/or purport to detect fraud by recipients; algorithms used in recruiting and hiring; social-media targeting and disinformation; access gaps created by the digital divide; and more. Building on these categories and examples of anti-black and other biases in technology, we will seek to articulate a framework for understanding how such biases arise in the broader context of racism and discrimination in our society. Finally, we will explore how these problems might be addressed, including by regulators, legislators, and courts, as well as by significant changes in mindset and practical engagement by technology developers and educators. Elements used in grading: Full attendance, reading of assigned materials, and active participation.
Sections
- 2024-2025 Autumn (Schedule No Longer Available)
Discussion (1L): Race and Technology (240T): People like to think of technology as value neutral, as a set of essentially objective tools that can be used for good or evil, particularly when questions of race and racial justice are involved. But the technologies we develop and deploy are frequently shaped by historical prejudices, biases, and inequalities, and thus may be no less biased and racist than the underlying society in which they exist. In this discussion group, we will consider whether and how racial and other biases are present in a wide range of technologies, particularly artificial intelligence tools such as risk assessment algorithms for bail, sentencing, predictive policing, and other decisions in the criminal justice system; algorithms for medical diagnosis and treatment decisions; AI that screens tenant, credit, or job applications; facial recognition systems; surveillance tools; and many more. Building on these case studies, we will seek to articulate a framework for recognizing both explicit and subtle anti-black and other biases in technology and for understanding them in the broader context of racism and inequality in our society. Finally, we will discuss how these problems might be addressed, including by regulators, legislators, and courts, as well as by significant changes in mindset and practical engagement by technology developers, companies, and educators. Class meets 4:30 PM-6:30 PM on Sept. 21, Oct. 5, Oct. 19, and Nov. 2, 2023. Elements used in grading: Full attendance, reading of assigned materials, and active participation.
Sections
- 2023-2024 Autumn (Schedule No Longer Available)
Discussion (1L): Race and Technology (240T): People tend to think of technology as value neutral, as a set of essentially objective tools that can be used for good or evil, particularly when questions of race and racial justice are involved. But the technologies we develop and deploy are frequently shaped by historical prejudices, biases, and inequalities, and thus may be no less biased and racist than the underlying society in which they exist. In this discussion group, we will consider whether and how racial and other biases are present in a wide range of technologies, such as "risk assessment" algorithms for bail, predictive policing, and other decisions in the criminal justice system; facial recognition systems; surveillance tools; algorithms for medical diagnosis and treatment decisions; online housing ads that result in "digital redlining"; programs that determine entitlement to credit or public benefits and/or purport to detect fraud by recipients; algorithms used in recruiting and hiring; access gaps created by the digital divide; and more. Building on these case studies, we will seek to articulate a framework for recognizing both explicit and subtle anti-black and other biases in technology and for understanding them in the broader context of racism and inequality in our society. Finally, we will discuss how these problems might be addressed, including by regulators, legislators, and courts, as well as by significant changes in mindset and practical engagement by technology developers and educators. Elements used in grading: Full attendance, reading of assigned materials, and active participation. Class meets 4:30 PM-6:00 PM on Sept. 29, Oct. 13, Oct. 27, and Nov. 10, 2022.
Sections
- 2022-2023 Autumn (Schedule No Longer Available)