Publications

2023
The Ends of Artificial Intelligence
Hong Qu and Seth Rudy. 6/1/2023. “The Ends of Artificial Intelligence.” In The Ends of Knowledge: Outcomes and Endpoints Across the Arts and Sciences, edited by Rachael Scarborough King, Pp. 139–150. London: Bloomsbury. Publisher's Version
The powers and possibilities of artificial intelligence (AI) are boundless, as computer scientists and philosophers formalize novel cognitive and moral capabilities, progressively expanding the boundaries of our imaginations. This monumental technological revolution will disrupt the global order and social hierarchies, and generate winners and losers among those who steer the telos of AI to either protect vested interests or uphold social justice. Our greatest challenge as a species in wielding — or yielding to — these new intelligent systems will be to stipulate the establishment of ethical governance structures which prioritize and protect freedom and fairness. What’s at stake is nothing less than the future of humanity — one in which software code and ethical codes fight for supremacy to shape our lives.
Emily Olson. 4/15/2023. “How the Boston Marathon bombings changed Twitter, media and how we process tragedy.” NPR. Publisher's Version
2021
10/18/2021. “Interdisciplinary learning through accessible, intentional technology.” Into Practice. Publisher's Version

Hong Qu, Adjunct Lecturer in Public Policy, taught Data Visualization virtually last spring to over 70 students from different Harvard Schools, levels of experience, and corners of the world. To foster a close-knit community among students from diverse backgrounds, Qu intentionally curated a set of online tools and learning exercises to generate an “ambient telepresence.” For instance, he assigned group data visualization projects to promote peer learning and used VoiceThread for assigned peer critiques. During synchronous class time, students were invited to sketch with Qu using Jamboard on the shared screen—a novel form of participation to draw out the inner artist/designer in every student. “I wanted to give them a sense that we’re spending time with each other in this very challenging period to learn as a community, to work together on group projects, and to achieve organic connections and authentic relationships between all our unique places during this pandemic."

https://vpal.harvard.edu/interdisciplinary-learning-through-accessible-intentional-technology

Résumé-Writing Tips to Help You Get Past the A.I. Gatekeepers
Julie Weed. 3/19/2021. “Résumé-Writing Tips to Help You Get Past the A.I. Gatekeepers.” NY Times. Publisher's Version

...The artificial intelligence used by hiring systems can generate unintended harmful consequences, said Hong Qu, a race and technology fellow at Stanford. He is a creator of AI Blindspot, a set of practices that help software development teams recognize unconscious biases and structural inequalities that could affect their software’s decision-making.

“The systems can still have their own forms of biases and may screen out qualified applicants,” Mr. Qu said.

Patricia Wei and Jared Klegar. 2/3/2021. “‘Even if you can do it, should you?’ Researchers talk combating bias in artificial intelligence.” Stanford Daily. Publisher's Version

As artificial intelligence becomes increasingly common in several areas of public life — from policing to hiring to healthcare — AI researchers Timnit Gebru, Michael Hind, James Zou and Hong Qu came together to criticize Silicon Valley’s lack of transparency and advocate for greater diversity and inclusion in decision making.

The event, titled “Race, Tech & Civil Society: Tools for Combating Bias in Datasets and Models,” was sponsored by the Stanford Center on Philanthropy and Civil Society, the Center for Comparative Studies in Race and Ethnicity and the Stanford Institute for Human-Centered Artificial Intelligence.

Qu, the moderator of the panel and a CCSRE Race & Technology Practitioner Fellow who developed a tool called AI Blindspot that discovers bias in AI systems, opened the conversation by distinguishing two definitions of combating bias: ensuring that algorithms are “de-biased” and striving for equity in a historical and cultural context.

Gebru ’08 M.S. ’10 Ph.D. ’15 said that she was introduced to these issues in machine learning after seeing a ProPublica article about recidivism algorithms, and a TED Talk by Joy Buolamwini, a graduate researcher at MIT, who discovered that open-source facial recognition software did not detect her face unless she wore a white mask.

As Gebru moved higher up in the tech world, she noticed a severe lack of representation — a pressing inequity she is working to redress. “The moment I went to grad school, I didn’t see any Black people at all,” Gebru said. 

“What’s important for me in my work is to make sure that these voices of these different groups of people come to the fore,” she added. 

Michael Hind is an IBM Distinguished Researcher who leads work on AI FactSheets and AI Fairness 360, which promote transparency in algorithms. Hind agreed with Gebru, noting the importance of “having multiple disciplines and multiple stakeholders at the table.” 

But not everyone has affirmed Gebru’s mission of inclusion. Last month, Google fired Gebru from her role as an AI ethics researcher after she had co-written a paper on the risks of large language models. Her paper touched on bias in AI and how sociolinguists, who study how language relates to power, culture, and society, were left out of the process.

Since the news of her firing broke, computer science educators and students have shown their support and solidarity for Gebru and her work. When Qu asked the panelists how to increase racial literacy, Gebru responded, “I tried to do it and I got fired.”

She cited the “high turnover” of employees tasked to spearhead diversity initiatives, whose efforts to enact institutional change were often dismissed.

“They don’t have any power,” Gebru said. “They’re miserable. They leave.” Despite Google’s many ethics committees, “there’s just no way that this will work if there’s no incentive to change,” she added.

Citing this culture of complacency as one of the reasons he left Silicon Valley, Qu said, “For me, I believe it’s more pernicious to be passively complicit than even to be intentionally malicious.”

In many cases, technology companies may even step beyond passive complicity, Gebru said. She cited Microsoft’s partnership with the New York Police Department to develop predictive policing algorithms. Because these technological tools rely on historical data, some researchers say they may reinforce existing racial biases.

“For any Black person in the United States who has had experiences with police,” Gebru said, “you would understand, you know, why predictive policing would be an issue.”

Zou, an assistant professor of biomedical data science at Stanford, remained optimistic about the use of AI for social good. He noted that because of the pandemic, the need for telehealth has exploded. Doctors are now relying on images from patients to help diagnose them, and computer vision technology can help patients take better photos, he said.

Even cases of algorithmic bias can present valuable learning opportunities, Zou added. Pointing to language models he worked on as a member of Microsoft Research, Zou said that the models’ gender biases offered a way to “quantify our stereotypes.”

However, Zou was reluctant to place too much faith in AI. 

“If it’s a scenario that affects someone’s health or safety, AI should not be the sole decision maker,” he said.

As the conversation ended, the researchers agreed on the need to keep questioning the role of AI in tools that affect our daily lives.

“A lot of features certainly should not be ones a data scientist or engineer should be deciding,” Hind said. 

He referred to how the virtual interviewing company HireVue used AI to analyze applicants’ videos to measure skills such as empathy and communication. 

“Even if you can do it, should you?” Gebru asked.
2020
Katharine Miller. 12/3/2020. “Shining a Headlight on AI Blindspots.” Stanford HAI Blog. Publisher's VersionAbstract
As one of YouTube’s earliest engineers, Hong Qu was assigned an important task: Make the comments section less negative. But as he and other engineers redesigned the notoriously snarky section of the popular video-sharing platform, he discovered that fostering constructive community conversation was much more challenging than a quick tech fix.
2019
Understanding Data Privacy Protections Across Industries
11/2019. Understanding Data Privacy Protections Across Industries. Publisher's Version

This report examines the nuances of privacy protection across different organizations and strategies. As part of the decision to facilitate an open discussion during the workshop, we integrate some of the discussion, questions, and ideas throughout this report without attribution.

Introducing AI Blindspot: A Call for Tech to Think Holistically and Spot Risks
10/22/2019. “Introducing AI Blindspot: A Call for Tech to Think Holistically and Spot Risks.” Berkman Klein Center on Medium.

How can teams prevent structural inequalities and their unconscious biases from affecting artificial intelligence systems?

AI Blindspot aims to stimulate discussion and deliberation about the pitfalls of AI systems. To do this, we created a website and a set of printed cards as both a provocation and a tool.

AI Blindspot
6/30/2019. “AI Blindspot.” A discovery process for spotting unconscious biases and structural inequalities in AI systems. Publisher's Version
AI Blindspots are oversights in a team’s workflow that can generate harmful unintended consequences. They can arise from our unconscious biases or structural inequalities embedded in society. Blindspots can occur at any point before, during, or after the development of a model. The consequences of blindspots are challenging to foresee, but they tend to have adverse effects on historically marginalized communities. Like any blindspot, AI blindspots are universal -- nobody is immune to them -- but harm can be mitigated if we intentionally take action to guard against them.
Privacy Design Forecast 2019
5/5/2019. “Privacy Design Forecast 2019.” A collection of conceptual ideas on privacy by design and meaningful informed consent. Publisher's Version

Bridging privacy policy with product design

The Shorenstein Center on Media, Politics and Public Policy reached out to academics, design experts, and consumer advocates who are leading thinkers and practitioners in promoting privacy enhancing technologies. This website is a collection of their ideas on ways to build privacy into products and systems.

How might organizations and legislators collaborate to better translate aspirational privacy principles to product design and development?

There is a gap in translating privacy regulation into the technology created by engineers and designers. Conversely, a limited understanding of how products work and how data is collected, maintained, and safeguarded leads to the creation of ineffective public policy. This project aims to build bridges connecting these communities of practice.

2016
Collaborative streaming of video content
Hong Qu, Yu Pan, Ches Wajda, and Maryrose Dunton. 3/8/2016. “Collaborative streaming of video content.” United States of America US9282068B1 (U.S. Patent and Trademark Office). Publisher's Version
A system, method, and various user interfaces enable visually browsing multiple groups of video recommendations. A video stream includes a group of videos to be viewed and commented on by users who join the stream. Users who join a stream form a stream community. In a stream community, community members can add videos to the stream and interact collaboratively with other community members, such as chatting in real time with each other while viewing a video. With streams, a user can create a virtual room in an online video content distribution environment to watch videos of the streams and interact with others while sharing videos simultaneously. Consequently, users have an enhanced video viewing and sharing experience.
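
Below is a minimal, hypothetical sketch of the “stream community” idea described in the abstract: a shared stream that users join, add videos to, and chat in while watching together. The class and method names are illustrative only; they are not drawn from the patent or from any YouTube implementation.

```python
# Hypothetical sketch of a collaborative video stream ("stream community").
from dataclasses import dataclass, field


@dataclass
class Stream:
    name: str
    members: set = field(default_factory=set)
    videos: list = field(default_factory=list)   # ordered queue of video IDs
    chat: list = field(default_factory=list)     # (user, message) tuples

    def join(self, user: str) -> None:
        """A user who joins the stream becomes part of its community."""
        self.members.add(user)

    def add_video(self, user: str, video_id: str) -> None:
        """Any community member can contribute videos to the shared stream."""
        if user in self.members:
            self.videos.append(video_id)

    def post_chat(self, user: str, message: str) -> None:
        """Members interact in real time while a video plays."""
        if user in self.members:
            self.chat.append((user, message))


room = Stream("music-videos")
room.join("alice")
room.join("bob")
room.add_video("alice", "dQw4w9WgXcQ")
room.post_chat("bob", "great pick!")
print(room.videos, room.chat)
```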
2014
Real-time video commenting
Hong Qu, Steven Chen, Michael Powers, and Yu Pan. 8/26/2014. “Real-time video commenting.” United States of America US8819719B1 (U.S. Patent and Trademark Office). Publisher's Version
The present invention enables real-time video commenting by viewers of media content on a web site. The media content may be video, audio, text, still images, or other types of media content. When a content viewer indicates a desire to provide a real-time video comment, a content server causes a video input device at the content viewer's location to be activated. The content viewer's video comment is captured by the video input device and transmitted to the content server, where it is stored and associated with the video being commented upon. When the original video is subsequently presented to content viewers, indicia of the video comment, such as a thumbnail or description of the comment, are also presented, thus inviting content viewers to view the video comment in addition to the original video.
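
A rough sketch of the commenting flow the abstract describes, assuming a simple content server that stores each captured video comment, associates it with the original video, and later returns lightweight indicia (such as thumbnail descriptions) to show alongside that video. All names here are illustrative, not taken from the patent.

```python
# Hypothetical content server that associates video comments with videos.
from collections import defaultdict


class ContentServer:
    def __init__(self):
        # Maps an original video ID to the video comments attached to it.
        self.comments = defaultdict(list)

    def submit_video_comment(self, video_id: str, commenter: str, clip: bytes) -> None:
        """Store a captured video comment and associate it with the original video."""
        self.comments[video_id].append({"by": commenter, "clip": clip})

    def indicia_for(self, video_id: str) -> list:
        """Return lightweight descriptors (e.g., thumbnail labels) shown with the video."""
        return [f"video comment by {c['by']}" for c in self.comments[video_id]]


server = ContentServer()
server.submit_video_comment("abc123", "carol", b"...captured webcam bytes...")
print(server.indicia_for("abc123"))
```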
2013
Organize the Noise: Tweeting Live from the Boston Manhunt
Hong Qu and Seth Mnookin. 6/13/2013. “Organize the Noise: Tweeting Live from the Boston Manhunt.” Nieman Reports, 67, 1, Pp. 26-30. Publisher's Version

A reporter and a programmer on what social media coverage of the Boston bombings means for journalism

Reporting the Boston manhunt
4/22/2013. “Reporting the Boston manhunt.” Radio National, Australian Broadcasting Corporation.

The reporting of the Boston bombings and the subsequent police hunt was riddled with errors, both online and in the traditional media.

In the race to be first, a number of innocent people were wrongly named as bombing suspects.

The events have raised interesting questions about the future of policing and the reporting of breaking news.

Hong Qu. 4/19/2013. “Twitter, Credibility and The Watertown Manhunt.” Nieman Reports. Publisher's Version

Nieman Visiting Fellow Hong Qu analyzes what Twitter coverage of the Watertown manhunt tells us about how non-journalists establish credibility and emerge as ‘go to’ sources for mainstream media.

Social media and the Boston bombings: When citizens and journalists cover the same story
4/17/2013. “Social media and the Boston bombings: When citizens and journalists cover the same story.” Nieman Lab. Publisher's Version
Nieman Visiting Fellow Hong Qu analyzes the role social media played in breaking the news of the Boston Marathon attack.
2006
Hong Qu, Andrea La Pietra, and Sarah S. Poon. 2006. “Automated Blog Classification: Challenges and Pitfalls.” In AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, Pp. 184-186. Stanford, California: Association for the Advancement of Artificial Intelligence. Publisher's Version
Blogs are difficult to categorize by humans and machines alike, because they are written in a capricious style. In the early days of the web, directories maintained by humans could not keep up with the millions of websites; likewise, blog directories cannot keep up with the explosive growth of the blogosphere. This paper investigates the efficacy of using machine learning to categorize blogs. We design a text classification experiment to categorize one hundred and twenty blogs into four topics: personal diary, news, political, and sports. The baseline feature is unigrams weighted by TF-IDF, which yielded 84% accuracy. We analyze the corpus, features, and result data. Our analysis leads us to believe that blog taxonomies need to support polyhierarchy—a given blog may be correctly classified under more than one category.
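
For readers who want to reproduce a setup along these lines, here is a minimal sketch of the baseline described above: unigram features weighted by TF-IDF feeding a standard text classifier. The tiny corpus and the choice of Naive Bayes are stand-in assumptions; the actual 120-blog dataset and the classifier used in the paper are not specified here.

```python
# Minimal sketch: unigram TF-IDF features + a simple text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in corpus: one snippet of blog text per labeled blog.
blog_texts = [
    "Today I went hiking and thought about my week...",    # personal diary
    "The senate passed the appropriations bill today...",  # news
    "The candidate's platform on healthcare reform...",    # political
    "The home team clinched the series in overtime...",    # sports
]
labels = ["personal diary", "news", "political", "sports"]

# Unigram TF-IDF features feeding a Naive Bayes classifier (assumption).
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)), MultinomialNB())
model.fit(blog_texts, labels)

print(model.predict(["Last night's game went into double overtime."]))
```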
2005
Hong Qu, Shane Ahern, Simon King, and Marc Davis. 2005. “PhotoRouter: destination-centric mobile media messaging.” In 13th Annual ACM International Conference on Multimedia. Singapore: ACM. Publisher's Version
The number of people using cameraphones is growing by tens of millions every month. Yet the majority of cameraphone users have difficulty transferring photos off their phones and sharing them with others. PhotoRouter is a software application for cameraphones that makes the photo sharing process destination-centric by allowing users to focus on who the photo should go to, not how it needs to get there. We designed PhotoRouter to meet user needs better than current, technology-centric cameraphone photo sharing applications. In this paper we describe PhotoRouter's user interface innovations, which we will show in our technical demonstration.
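
A toy illustration of the destination-centric idea: the user chooses who should receive the photo, and the software decides how to deliver it based on what it knows about that recipient. The contact data and transport rules below are invented for illustration and are not drawn from the PhotoRouter implementation.

```python
# Hypothetical destination-centric routing: pick the channel from the recipient.
CONTACTS = {
    "mom": {"email": "mom@example.com"},
    "dan": {"phone": "+15551234567"},
}

def route_photo(photo_path: str, recipient: str) -> str:
    """Choose a delivery channel based on what we know about the recipient."""
    contact = CONTACTS.get(recipient, {})
    if "phone" in contact:
        return f"send {photo_path} via MMS to {contact['phone']}"
    if "email" in contact:
        return f"email {photo_path} to {contact['email']}"
    return f"upload {photo_path} to a shared album and share the link"

print(route_photo("beach.jpg", "mom"))
print(route_photo("beach.jpg", "dan"))
```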