Hong Qu launched AI Blindspot to help developers see where their work might create or perpetuate inequality. 

As one of YouTube’s earliest engineers, Hong Qu was assigned an important task: Make the comments section less negative. But as he and other engineers redesigned the notoriously snarky section of the popular video-sharing platform, he discovered that fostering constructive community conversation was much more challenging than a quick tech fix.

With that realization, Qu left the tech industry to work as a journalist and then as an instructor at the City University of New York and Harvard University. Drawing on these experiences, Qu and several collaborators created a card deck called AI Blindspot as part of the Berkman Klein Center’s 2019 Assembly Fellowship program. It is intended to help AI developers discover the risk that their work may perpetuate unconscious biases or structural inequality, so they can address these problems well before products are deployed.

Now, as a Stanford HAI and Center for Comparative Studies in Race & Ethnicity Fellow associated with Stanford’s Digital Civil Society Lab, Qu is developing AI Blindspot into a tool that nontechnical civil society organizations can use as a framework for investigating and auditing AI systems, and for organizing campaigns that challenge corporations and governments to deploy AI systems that are fair, equitable, accountable, and genuinely participatory.

Additionally, Qu is starting a PhD at Northeastern University studying network science. He wants to map out the various stakeholders in AI advocacy, policy, and industry, and then analyze the social movements that develop around this issue. 

Was there an experience in your own career that led you to a place where you felt the need for AI Blindspot? 

I have a very vivid memory of sitting in YouTube’s startup office as a product designer. It was very early in YouTube’s history; there were only 10 employees. I had designed many of the interfaces for YouTube’s key features, like video sharing and finding related videos. But the big challenge came when the founders came to me and asked if I could fix the YouTube comments, which were negative and snarky.

An engineer, a product manager, and I tried to redesign the comments section on YouTube, and we struggled not so much on the technical side, but on the “social engineering,” community-building side, to set the tone for comments. That was when I realized that, sure, the technology is challenging in that you have to make sure it is easy to use, provides instant gratification, and accommodates a massive scale of millions of users. But once you solve that problem, you encounter the even more challenging and more urgent problem of creating and fostering a constructive community. 

We ended up copying a product called Digg that allowed people to give a thumbs up or down on the comments. As optimists, a team of engineers who thought that in general people are good, we expected people to upvote the highest quality, most constructive, or most meaningful comments. But something different happened: The commenters would actually upvote the most obnoxious comments rather than the highest quality ones, and vote down the comments of anyone with whom they disagreed. And if you train the algorithm to continue prioritizing and highlighting the snarky comments, then you will end up with that kind of community.

I think YouTube comments are still notoriously negative and destructive, and the problem exists on Twitter and Facebook as well. It is a difficult general problem to solve: How do you foster deliberative, civil conversation online? You have to stop the negativity at the community guidelines, values, or norms level. It’s about setting the rules of engagement. 

So that’s a very specific example, and it lives on with me every time I describe my professional life working in Silicon Valley. It’s not as simple as finding a technical solution that will make us all better, more ethical people.

Do you agree with those who say tech can’t fix tech?

The notion of techno-solutionism, or even technological determinism, has been proven wrong again and again. You can’t solve a problem by throwing more tech at it. You need to understand social science, philosophy, political science, and power structures in society; and you need to appreciate and be mindful of who is being disadvantaged or harmed.

That’s one of the reasons I left the tech industry in 2010 and switched to journalism and then public policy. And now I’m really focused on ethics, morality, and social behavior. To really fix the problem, I’m trying to understand social movements, participatory deliberative governance, and grassroots civil society organizations.

And that nicely slots me into this fellowship, where I’m a fellow at both the Digital Civil Society Lab and HAI at Stanford. Civil society needs to play a big role in undoing this big mess we’re in with social media technology.

Is the tech industry open to using tools like AI Blindspot to regulate itself? 

When we tried to sell the developers and tech teams in Silicon Valley on the idea of AI Blindspot, it was an uphill battle.

There were two responses we typically got. One was: “We’re so low on the totem pole, we can’t impact the mission, the product, the business plan, or the process.” There was some willingness but a lack of process or reporting structures.

The second reaction was, “We have a team dedicated to this; go talk to them.” But teams that are responsible for AI ethics, who are already steeped in this field, often saw AI Blindspot as too rudimentary for their operation. At the same time, they have trouble diffusing their knowledge and best practices to other parts of their own organizations. In fact, one of the cofounders of AI Blindspot, Dan Taber, works at Indeed and is building a new AI ethics team there, so we have firsthand experience with this challenge.

So self-regulation and self-policing, from my vantage point, have been tried. There has been plenty of opportunity in the last 10 years. I feel that we’re at a dead end in terms of industry self-regulation. That’s a personal opinion, not based on empirical data or analysis, but I feel that very little movement has happened on the industry side. So civil society is a possible counterweight.

Why hasn’t good governance worked to rein in AI – at least so far?

The joke is that you need a permit to remodel a bathroom, but you don’t have to do anything to put an AI system out in the world.

But here’s the conundrum: If we want to regulate AI, how do we do that? How do we word the regulations and statutes? When I go to D.C., that’s what the congressional staffers ask me, and I actually have no idea how to use a scalpel instead of a hammer. I used to wonder why they couldn’t do their jobs and move as fast as the tech companies. But creating legislation is harder than creating the code.

And even adopting a set of general democratic principles to regulate AI is problematic because these principles are hard to translate into an action plan for a tech company that’s trying to rush products out the door.

So we tried to fill the gap with AI Blindspot, and we found a lot of inertia and resistance within these organizations, even with this very concrete approach. That’s one of the reasons AI Blindspot is resorting to civil society pressure. Appealing to human dignity, social justice, equity, and fairness is one of the best routes for exposing the problems with AI.

What do you plan to do as part of your HAI fellowship?

The main purpose of this fellowship is to take AI Blindspot and reconceptualize it as a tool for civil society organizations to identify, expose, and uncover the unfairness of AI systems and to organize campaigns to fix them. We are intentionally pivoting toward advocacy and away from our initial plan of asking the tech industry to use AI Blindspot themselves. We think advocacy can convey the dangers of AI, along with the possibility of improving it and making it safer and more equitable.

So, we will take the original AI Blindspot cards, which were a general set of explanations, and upgrade them to become a framework for uncovering injustice and advocating for a more grassroots, participatory, equal footing for the communities that are impacted by AI.

This project is for activists and nonprofits who are advocating for such things as racial justice, health care, labor justice, gender justice, or global human rights. To make AI Blindspot accessible to these groups, the new design concept uses the metaphor that AI is like a child that is growing, learning, and gaining power to influence society, but it is also like a puppet controlled by the people making it: the AI encodes human values, and the engineers might not even realize it, as was the case when I designed YouTube comments. The new design also includes an intuitive visual icon for each of the 11 AI Blindspots, which include things like abusability, privacy, representative data, transparency, and oversight.

We’re doing it this way so that we can reach people who are completely nontechnical and might be mystified by the lingo of AI and explanations of different evaluation metrics of fairness. We’ve boiled it down to what they need to understand to challenge or contest the deployment of AI.

My hope, as an engineer, is to close the feedback loop. If you buy a product like a webcam and it breaks down, you can complain, or you can get a warranty and hold the manufacturer accountable for a faulty product. With most AI today, that’s not a real option. For example, if someone is rejected for a loan because of an AI screening tool, they have no recourse to appeal the decision or seek redress for harms, because the system is such an opaque black box. AI Blindspot brings this problem to the front of people’s minds in order to address it and close the loop.

Can you describe a scenario in which a civil society organization or activist organization might want to use the AI Blindspot discovery process?

In Detroit, the city council keeps funding facial recognition systems for the police department to use. The Detroit Community Technology Project, a grassroots organization, protests the deployment of these systems, but the city council continues to fund them, even after The New York Times exposed the program for identifying the wrong person.

We’re now piloting a project with Tawana Petty, director of the Data Justice Program at the Detroit Community Technology Project, who is also a fellow in the Stanford Digital Civil Society Lab, and one of the sharpest people I know working in AI. Our hope is to apply our new AI Blindspot model to this campaign challenging the facial recognition system the city council keeps approving and putting millions of dollars into.

This is our first attempt to use AI Blindspot on the ground, as close as possible to campaigns and advocacy there in Detroit. 

And we want people to work together. That’s one of the reasons we call it AI Blindspot. We aren’t saying it’s intentional on the part of police departments or software engineers. It’s a blindspot that we have to uncover and discover through this process. Creating accountable AI systems is a shared responsibility, not so much us against them.

Are you optimistic or pessimistic about AI generally?

I’m pretty optimistic that AI in general will be tremendously beneficial. It removes a lot of inefficiencies. But we have to be watchful to make sure that every community has a strong voice for challenging AI systems, to create a feedback loop for the suppliers of AI to be held accountable. 

I’m still generally optimistic that biased AI can be fixed, whereas human imperfections are idiosyncratic and difficult to fix.
