Decision-making based on machine learning algorithms is becoming ever more prevalent in society, in such varied domains as consumer finance, housing, employment, health care, sentencing, and policing, among others. Such decisions can result in unintentional discrimination, due to the necessarily subjective choices that people make in system design. These choices—including how to define the goals of the problem, what training data to collect, and how the data is labeled, among many others—are often made with the best intentions, but with little awareness of their potentially harmful downstream effects.
As it turns out, current anti-discrimination law is inadequate to address these risks. In this talk, Andrew Selbst of the Data & Society Research Institute will discuss some of the technical aspects of machine learning models that lead to discriminatory results and why current anti-discrimination law cannot rectify the problems. He will also discuss a proposal for “algorithmic impact statements” that would require decision makers to consider the harmful downstream effects of algorithmic systems before introducing them into the world at large.