Hong Qu, Adjunct Lecturer in Public Policy, taught Data Visualization virtually last spring to over 70 students from different Harvard Schools, levels of experience, and corners of the world. To foster a close-knit community among students from diverse backgrounds, Qu intentionally curated a set of online tools and learning exercises to generate an “ambient telepresence.” For instance, he assigned group data visualization projects to promote peer learning and used VoiceThread for assigned peer critiques. During synchronous class time, students were invited to sketch with Qu using Jamboard on the shared screen—a novel form of participation to draw out the inner artist/designer in every student. “I wanted to give them a sense that we’re spending time with each other in this very challenging period to learn as a community, to work together on group projects, and to achieve organic connections and authentic relationships between all our unique places during this pandemic.”
...The artificial intelligence used by hiring systems can generate unintended harmful consequences, said Hong Qu, a race and technology fellow at Stanford. He is a creator of AI Blindspot, a set of practices that help software development teams recognize unconscious biases and structural inequalities that could affect their software’s decision-making.
“The systems can still have their own forms of biases and may screen out qualified applicants,” Mr. Qu said.
As artificial intelligence becomes increasingly common in several areas of public life — from policing to hiring to healthcare — AI researchers Timnit Gebru, Michael Hind, James Zou and Hong Qu came together to criticize Silicon Valley’s lack of transparency and advocate for greater diversity and inclusion in decision making.
The event, titled “Race, Tech & Civil Society: Tools for Combating Bias in Datasets and Models,” was sponsored by the Stanford Center on Philanthropy and Civil Society, the Center for Comparative Studies in Race and Ethnicity and the Stanford Institute for Human-Centered Artificial Intelligence.
Qu, the moderator of the panel and a CCSRE Race & Technology Practitioner Fellow who developed a tool called AI Blindspot that discovers bias in AI systems, opened the conversation by discussing how there are two definitions of combating bias: ensuring that algorithms are “de-biased” and striving for equity in a historical and cultural context.
As Gebru moved higher up in the tech world, she noticed a severe lack of representation — a pressing inequity she is working to redress. “The moment I went to grad school, I didn’t see any Black people at all,” Gebru said.
“What’s important for me in my work is to make sure that these voices of these different groups of people come to the fore,” she added.
Michael Hind is an IBM Distinguished Researcher who leads work on AI FactSheets and AI Fairness 360, which promote transparency in algorithms. Hind agreed with Gebru, noting the importance of “having multiple disciplines and multiple stakeholders at the table.”
But not everyone has affirmed Gebru’s mission of inclusion. Last month, Google fired Gebru from her role as an AI ethics researcher after she had co-written a paper on the risks of large language models. Her paper touched on bias in AI and how sociolinguists, who study how language relates to power, culture, and society, were left out of the process.
Since the news of her firing broke, computer science educators and students showed their support and solidarity for Gebru and her work. When Qu asked the panelists how to increase racial literacy, Gebru responded, “I tried to do it and I got fired.”
She cited the “high turnover” of employees tasked to spearhead diversity initiatives, whose efforts to enact institutional change were often dismissed.
“They don’t have any power,” Gebru said. “They’re miserable. They leave.” Despite Google’s many ethics committees, “there’s just no way that this will work if there’s no incentive to change,” she added.
Citing this culture of complacency as one of the reasons he left Silicon Valley, Qu said, “For me, I believe it’s more pernicious to be passively complicit than even to be intentionally malicious.”
In many cases, technology companies may even step beyond passive complicity, Gebru said. She cited Microsoft’s partnership with the New York Police Department to develop predictive policing algorithms. Because these technological tools rely on historical data, some researchers say they may reinforce existing racial biases.
“For any Black person in the United States who has had experiences with police,” Gebru said, “you would understand, you know, why predictive policing would be an issue.”
Zou, an assistant professor of biomedical data science at Stanford, remained optimistic about the use of AI for social good. He noted that because of the pandemic, the need for telehealth has exploded. Doctors are now relying on images from patients to help diagnose them, and computer vision technology can help patients take better photos, he said.
Even cases of algorithmic bias can present valuable learning opportunities, Zou added. Pointing to language models he worked on as a member of Microsoft Research, Zou said that the models’ gender biases offered a way to “quantify our stereotypes.”
However, Zou was reluctant to place too much faith in AI.
“If it’s a scenario that affects someone’s health or safety, AI should not be the sole decision maker,” he said.
As the conversation ended, the researchers agreed on the need to continue questioning the role of AI in tools that affect our daily lives.
“A lot of features certainly should not be ones a data scientist or engineer should be deciding,” Hind said.
He referred to how the virtual interviewing company HireVue used AI to analyze applicants’ videos to measure skills such as empathy and communication.
As one of Youtube’s earliest engineers, Hong Qu was assigned an important task: Make the comments section less negative. But as he and other engineers redesigned the notoriously snarky section of the popular video-sharing platform, he discovered fostering constructive community conversation was much more challenging than a quick tech fix.
This report examines the nuances of privacy protection through different organizations and strategies. In keeping with the decision to facilitate an open discussion during the workshop, we integrate some of the discussion, questions, and ideas throughout this report without attribution.
AI Blindspots are oversights in a team’s workflow that can generate harmful unintended consequences. They can arise from our unconscious biases or structural inequalities embedded in society. Blindspots can occur at any point before, during, or after the development of a model. The consequences of blindspots are challenging to foresee, but they tend to have adverse effects on historically marginalized communities. Like any blindspot, AI blindspots are universal -- nobody is immune to them -- but harm can be mitigated if we intentionally take action to guard against them.
The Shorenstein Center on Media, Politics and Public Policy reached out to academics, design experts, and consumer advocates who are leading thinkers and practitioners in promoting privacy enhancing technologies. This website is a collection of their ideas on ways to build privacy into products and systems.
How might organizations and legislators collaborate to better translate aspirational privacy principles to product design and development?
There is a gap in translating privacy regulation into the technology created by engineers and designers. Conversely, a limited understanding of how products work and how data is collected, maintained, and safeguarded leads to the creation of ineffective public policy. This project aims to build bridges connecting these communities of practice.
A system, method and various user interfaces enable visually browsing multiple groups of video recommendations. A video stream includes a group of videos to be viewed and commented on by users who join the stream. Users who join a stream form a stream community. In a stream community, community members can add videos to the stream and interact collaboratively with other community members, such as chatting in real time with each other while viewing a video. With streams, a user can create a virtual room in an online video content distribution environment to watch videos of the streams and interact with others while sharing videos simultaneously. Consequently, users have an enhanced video viewing and sharing experience.
The present invention enables real-time video commenting by viewers of media content on a web site. The media content may be video, audio, text, still images or other types of media content. When a content viewer indicates a desire to provide a real-time video comment, a content server causes a video input device at the content viewer's location to be activated. The content viewer's video comment is captured by the video input device and transmitted to the content server, where it is stored and associated with the video being commented upon. When the original video is subsequently presented to content viewers, indicia of the video comment such as a thumbnail or description of the comment is also presented, thus inviting content viewers to view the video comment in addition to the original video.
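The core association the abstract describes — video comments stored on the server, linked to the video being commented upon, and surfaced as thumbnail indicia — can be sketched as a minimal data model. All class and field names below are hypothetical illustrations, not the patent's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VideoComment:
    comment_id: str
    thumbnail_url: str   # indicia shown alongside the original video
    media_url: str       # the recorded video comment itself

@dataclass
class Video:
    video_id: str
    comments: List[VideoComment] = field(default_factory=list)

    def add_comment(self, comment: VideoComment) -> None:
        # server side: store the captured comment and associate it
        # with the video being commented upon
        self.comments.append(comment)

    def comment_indicia(self) -> List[str]:
        # when the original video is presented, surface thumbnails
        # inviting viewers to watch the video comments as well
        return [c.thumbnail_url for c in self.comments]

video = Video("v123")
video.add_comment(VideoComment("c1", "thumb/c1.jpg", "media/c1.mp4"))
```

The capture step (activating the viewer's video input device and transmitting the recording to the content server) is omitted; the sketch covers only the storage-and-presentation association.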
Blogs are difficult for humans and machines alike to categorize because they are written in a capricious style. In the early days of the web, directories maintained by humans could not keep up with the millions of websites; likewise, blog directories cannot keep up with the explosive growth of the blogosphere. This paper investigates the efficacy of using machine learning to categorize blogs. We design a text classification experiment to categorize one hundred and twenty blogs into four topics: personal diary, news, political, and sports. The baseline feature is unigrams weighted by TF-IDF, which yielded 84% accuracy. We analyze the corpus, features, and result data. Our analysis leads us to believe that blog taxonomies need to support polyhierarchy—a given blog may be correctly classified under more than one category.
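The paper's baseline feature, unigrams weighted by TF-IDF, can be illustrated with a small stdlib-only sketch over a toy corpus. The corpus, tokenization, and weighting variant (raw term frequency times log inverse document frequency) are illustrative assumptions here, not the paper's actual experimental setup:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights per document.

    docs: list of token lists. Returns one {term: weight} dict per
    document, using tf * log(N / df) -- the classic scheme.
    """
    n = len(docs)
    df = Counter()                 # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weighted = []
    for doc in docs:
        tf = Counter(doc)          # raw term frequency
        weighted.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weighted

# toy corpus standing in for three "blogs" of different topics
corpus = [
    "today i wrote in my diary about my day".split(),
    "election news coverage of the senate vote".split(),
    "the team won the game in overtime".split(),
]
weights = tfidf(corpus)
```

Terms concentrated in one document (e.g. "diary") get high weights, while terms spread across documents are discounted — which is what makes these vectors usable as features for a topic classifier like the paper's.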
The number of people using cameraphones is growing by tens of millions every month. Yet the majority of cameraphone users have difficulty transferring photos off their phone and sharing them with others. PhotoRouter is a software application for cameraphones that makes the photo sharing process destination-centric by allowing users to focus on who the photo should go to, not how it needs to get there. We designed PhotoRouter to meet user needs better than current, technology-centric cameraphone photo sharing applications. In this paper we describe PhotoRouter's user interface innovations, which we will show in our technical demonstration.