New research has illustrated the many ways that racial and other biases are reproduced through AI, machine learning, and other new technologies, impacting everything from advertising to policing to hiring. Join us for a conversation with researchers who have developed tools to help identify and mitigate this bias in datasets and models, including datasheets, model cards, and FactSheets.
Timnit Gebru | leading researcher and advocate; co-author of Datasheets for Datasets, Model Cards for Model Reporting, and Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.
James Zou | Assistant Professor of Biomedical Data Science, and Computer Science and Electrical Engineering (by courtesy) at Stanford University. Zou's work includes research on bias in word embeddings in machine learning.
Hong Qu (moderator) | CCSRE Race & Technology Practitioner Fellow, who has developed a bias discovery tool called AI Blindspot.
Co-sponsors: Stanford Center on Philanthropy and Civil Society, Center for Comparative Studies in Race and Ethnicity, Stanford Institute for Human-Centered Artificial Intelligence (HAI)