Artificial Intelligence & Bias

By Jackson Grigsby, Harvard Class of 2020

On Thursday, February 16th, the JFK Jr. Forum at the Harvard Institute of Politics hosted a conversation on the past, present, and future of Artificial Intelligence with Harvard Kennedy School Professor of Public Policy Iris Bohnet, Harvard College Gordon McKay Professor of Computer Science Cynthia Dwork, and Massachusetts Institute of Technology Professor Alex “Sandy” Pentland.

Moderated by Sheila Jasanoff, Kennedy School Pforzheimer Professor of Science and Technology Studies, the conversation focused on the potential benefits of Artificial Intelligence (AI) as well as the major ethical dilemmas the experts foresee. While AI has the potential to eliminate inherent human bias in decision-making, the panel agreed that society and governments must confront ethical boundaries in the near future as the technology expands into medicine, governance, and even self-driving cars.

Some major takeaways from the event were:

1. Artificial Intelligence offers an incredible opportunity to eliminate human biases in decision-making

In the future, Artificial Intelligence could be used to eliminate the inherent human biases that often influence important decisions in employment, government policy, and even policing. Professor Iris Bohnet noted that every person carries biases that inform their decisions, and those biases can determine whether a job candidate is chosen. She suggested that employers could instead use algorithms to select the best candidates, letting AI focus on qualifications rather than on gender, race, age, or other variables. The panel also acknowledged, however, that algorithms themselves can be biased. The algorithm that matches medical students with residency hospitals, for example, can favor either the hospitals' preferences or the students', as the sketch below illustrates. It is up to humans to control the bias in the algorithms they use.
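The residency example echoes a well-known property of deferred-acceptance (Gale-Shapley) matching: whichever side does the proposing tends to get its preferred stable outcome. The sketch below is only an illustration of that idea, not the actual residency-match algorithm, and the student and hospital names and preference lists are hypothetical.

```python
# Minimal sketch of deferred-acceptance matching. The side that proposes
# is favored: with the same preferences, the result changes depending on
# whether students or hospitals propose. Toy data, for illustration only.

def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Match proposers to reviewers; proposers get their best stable outcome."""
    # rank[r][p] = how reviewer r ranks proposer p (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)                    # proposers still unmatched
    next_choice = {p: 0 for p in proposer_prefs}   # next reviewer each will try
    engaged = {}                                   # reviewer -> proposer

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]      # p's best reviewer not yet tried
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                         # reviewer tentatively accepts
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])                # reviewer trades up
            engaged[r] = p
        else:
            free.append(p)                         # rejected; p tries the next reviewer
    return {p: r for r, p in engaged.items()}

# Hypothetical 2x2 example where the two sides' preferences conflict.
students = {"s1": ["h1", "h2"], "s2": ["h2", "h1"]}
hospitals = {"h1": ["s2", "s1"], "h2": ["s1", "s2"]}

print(deferred_acceptance(students, hospitals))  # student-proposing: each student gets their first choice
print(deferred_acceptance(hospitals, students))  # hospital-proposing: each hospital gets its first choice
```

Both outcomes are stable, yet one systematically favors students and the other hospitals, which is the kind of algorithmic bias the panel was pointing to.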

2. Society must begin having conversations surrounding the ethics of Artificial Intelligence

As Artificial Intelligence becomes more widely used, society and governments must keep having conversations about its ethics. Professors Alex Pentland and Cynthia Dwork stated that as AI proliferates, moral conflicts can surface. Pentland emphasized that citizens must ask themselves, "Is this something that is performing in a way that we as a society want?" Our society, he noted, must continue a dialogue around ethics and determine what is right.

3. Although Artificial Intelligence is growing, there are still tasks that only humans should do

In the end, the experts agreed, there are decisions that only humans can make, and other tasks that machines could execute but that should ultimately remain with humans. Professor Bohnet underscored the point, concluding, "There are jobs that cannot be done by machines."
