Annette Zimmermann is a political philosopher working on the ethics of artificial intelligence and machine learning.

Dr Zimmermann is a Technology & Human Rights Fellow at the Carr Center for Human Rights Policy at Harvard University, and a Lecturer (US equivalent: Assistant Professor) in Philosophy at the University of York.


Before joining the University of York and Harvard University, Dr Zimmermann was a postdoctoral fellow at Princeton University (2018 to 2020), with a joint appointment at the University Center for Human Values and the Center for Information Technology Policy. Prior to that, Dr Zimmermann was awarded a DPhil (PhD) from Nuffield College at the University of Oxford, for work focusing on contemporary analytic political and moral philosophy—in particular, democratic decision-making, justice, and risk.

Dr Zimmermann's recent research visitor positions include Yale University (2016), the Australian National University (2019), and Stanford University (2020). Her research has been published in Philosophy and Public Affairs, and her recent public writing has appeared in the New Statesman and the Boston Review. Her research has been generously supported by the United Kingdom's Economic and Social Research Council, the University of Oxford, and the German National Academic Foundation.


The Algorithmic is Political.

AI does not exist in a moral and political vacuum. AI systems interact dynamically with the social world, including larger-scale patterns of injustice.

How we deal with this problem is a moral and a political choice.

Annette Zimmermann's current research project and book manuscript (The Algorithmic is Political) explores questions like: what is algorithmic injustice, and how do its effects compound over time? What role do risk and uncertainty play in this context? What does it mean to trust AI? Whose voices should we prioritize in collective decisions about AI design and deployment—and whose voices are currently excluded? Whose rights are most at risk? How can we place AI under meaningful democratic control—and would that solve the problem of algorithmic injustice?