Artificial Intelligence

Artificial Intelligence's 'Black Box' Is Nothing to Fear
Vijay Pande. 1/25/2018. “Artificial Intelligence's 'Black Box' Is Nothing to Fear.” The New York Times.

Alongside the excitement and hype about our growing reliance on artificial intelligence, there’s fear about the way the technology works. A recent MIT Technology Review article titled “The Dark Secret at the Heart of AI” warned: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” Thanks to this uncertainty and lack of accountability, a report by the AI Now Institute recommended that public agencies responsible for criminal justice, health care, welfare and education shouldn’t use such technology.

Given these types of concerns, the unseeable space between where data goes in and answers come out is often referred to as a “black box” — seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of A.I., the term more broadly suggests an image of being in the “dark” about how the technology works: We provide the data, models and architectures, and then computers provide us answers while continuing to learn on their own, in a way that’s seemingly impossible — and certainly too complicated — for us to understand.
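
The mechanics described above can be made concrete with a small sketch. The example below is not from the article; it assumes scikit-learn and uses an arbitrary synthetic dataset and network size, purely to show that a fitted model's parameters, while fully visible, do not read as an explanation of any particular answer.

# A minimal sketch, assuming scikit-learn is installed. The data, the
# architecture and the numbers are arbitrary illustrations, not anything
# taken from the article.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# We provide data and an architecture...
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# ...and an answer comes out,
print(model.predict(X[:1]))

# but the "reasoning" behind it is spread across hundreds of learned weights.
print([w.shape for w in model.coefs_])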


Beyond prediction: Using big data for policy problems
Susan Athey. 2/3/2017. “Beyond prediction: Using big data for policy problems.” Science, 355, Pp. 483-485.
Machine-learning prediction methods have been extremely productive in applications ranging from medicine to allocating fire and health inspectors in cities. However, there are a number of gaps between making a prediction and making a decision, and underlying assumptions need to be understood in order to optimize data-driven decision-making.
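One of the gaps the abstract points to, between predicting an outcome and deciding what to do about it, can be illustrated with a toy calculation. The sketch below is not from the paper; the inspection costs and harms are hypothetical, chosen only to show that the same predicted probability can justify different actions once decision costs enter.

# A minimal sketch of a prediction-versus-decision gap. All names and
# numbers are hypothetical illustrations, not taken from the paper.

def decide_to_inspect(p_violation, inspection_cost, violation_harm):
    """Inspect only if the expected harm avoided exceeds the cost of inspecting."""
    expected_benefit = p_violation * violation_harm
    return expected_benefit > inspection_cost

# The same prediction (a 30% chance of a violation) leads to different
# decisions as the decision-maker's costs change.
print(decide_to_inspect(0.3, inspection_cost=100, violation_harm=500))  # True:  150 > 100
print(decide_to_inspect(0.3, inspection_cost=200, violation_harm=500))  # False: 150 < 200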
