Tools for Combatting Bias in Datasets and Models

Presentation Date: 

Wednesday, February 3, 2021


Stanford, CA

New research has illustrated the many ways that racial and other biases are reproduced through AI, machine learning, and other new technologies, impacting everything from advertising to policing to hiring. Join us for a conversation with researchers who have developed tools to help identify and mitigate this bias in datasets and models, including datasheets, model cards, and FactSheets.

Timnit Gebru | leading researcher, advocate, and co-author of Datasheets for Datasets, Model Cards for Model Reporting, and Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.

Michael Hind | IBM Distinguished Researcher focusing on the fairness, explainability, transparency, and governance of AI systems and leading work on AI FactSheets and AI Fairness 360.

James Zou | Assistant Professor of Biomedical Data Science, and of Computer Science and Electrical Engineering (by courtesy), at Stanford University. Zou's work includes research on bias in word embeddings in machine learning.

Hong Qu (moderator) | CCSRE Race & Technology Practitioner Fellow, who has developed a bias discovery tool called AI Blindspot.

Event Sponsor: 

Stanford Center on Philanthropy and Civil Society, Center for Comparative Studies in Race and Ethnicity, Stanford Institute for Human-Centered Artificial Intelligence (HAI)