Machine learning has surged in success, with similar algorithmic approaches effectively solving a variety of human-defined tasks. Tasks in image perception and language understanding have exposed strong effects of different types of bias, such as selection bias and reporting bias. In this talk, I will sketch a brief history of addressing bias and fairness in algorithms, unpack some of the known bias and fairness issues, and explain some techniques for making machine learning systems more diverse, inclusive, and fair.
Margaret Mitchell is a Senior Research Scientist at Google AI and Tech Lead of Google’s ML Fairness effort. Her research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence toward positive goals. This includes research on helping computers communicate based on what they can process, as well as projects to create assistive and clinical technology from the state of the art in AI. Her recent work focuses on issues of diversity and representation in text and face images.