AI’s Racial Bias Claims Tested in Court as US Regulations Lag
Summary
Bias in algorithms can be identified by documenting each step of the modeling pipeline, from the input data to the choice of model to how its output is used, and noting how each step can mitigate or amplify disparities, said Daniel Ho, a Stanford law professor who advised the Biden White House on AI policy.
“With the same exact training data, you can have modeling choices that may lead you to sort of decisions that have higher disparities or lower disparities,” he said.
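Ho's point can be sketched in a few lines of Python: with identical training data (here, a fixed set of model scores), a single modeling choice such as the decision threshold changes the disparity in approval rates between groups. The groups, scores, and thresholds below are illustrative assumptions, not data from the article.

```python
# Hypothetical model scores for applicants in two groups, A and B.
# These are made-up values for illustration only.
scores = [
    ("A", 0.62), ("A", 0.55), ("A", 0.71), ("A", 0.48),
    ("B", 0.58), ("B", 0.45), ("B", 0.52), ("B", 0.66),
]

def approval_rate(group, threshold):
    """Share of a group's applicants whose score clears the threshold."""
    members = [s for g, s in scores if g == group]
    return sum(s >= threshold for s in members) / len(members)

def disparity(threshold):
    """Absolute gap in approval rates between groups A and B."""
    return abs(approval_rate("A", threshold) - approval_rate("B", threshold))

# Identical data, two different modeling choices (thresholds):
print(disparity(0.50))  # both groups approve 3 of 4, gap is 0.0
print(disparity(0.60))  # A approves 2 of 4, B approves 1 of 4, gap is 0.25
```

Nothing about the underlying data changed between the two calls; only the threshold did, yet one choice produces no disparity and the other a 25-point gap. Documenting each such choice, as Ho describes, is what makes the source of a disparity traceable.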