Presentation Date:
Friday, March 25, 2022
Delegating high-stakes decisions to AI exposes everyone to the risk of unequal treatment: seemingly impartial algorithms are the product of flawed data and practices that can amplify society's historical biases. Fairness requires vigilance and accountability from stakeholders at every stage of the AI lifecycle. We propose the AI Blindspot toolkit for advancing equity in AI systems.
Now learning from Hong Qu @hqu about his research, "A discovery process for spotting structural inequalities in AI systems." #neasist2022 pic.twitter.com/tqsdG76QSc
— NEASIST (@neasist) March 25, 2022