Publications

2020
Josh Simons and Dipayan Ghosh. 8/2020. “Utilities for democracy: Why and how the algorithmic infrastructure of Facebook and Google must be regulated.” Brookings Institution.

In the four years since the last U.S. presidential election, pressure has continued to build on Silicon Valley’s biggest internet firms: the Cambridge Analytica revelations; a series of security and privacy missteps; a constant drip of stories about discriminatory algorithms; employee pressure, walkouts, and resignations; and legislative debates about privacy, content moderation, and competition policy. The nation — indeed, the world — is waking up to the manifold threats internet platforms pose to the public sphere and to democracy.

This paper provides a framework for understanding why internet platforms matter for democracy and how they should be regulated. We describe the two most powerful internet platforms, Facebook and Google, as new public utilities — utilities for democracy. Facebook and Google use algorithms to rank and order vast quantities of content and information, shaping how we consume news and access information, communicate with and feel about one another, debate fundamental questions of the common good, and make collective decisions. Facebook and Google are private companies whose algorithms have become part of the infrastructure of our public sphere.

We argue that Facebook and Google should be regulated as public utilities. Private powers that shape the fundamental terms of citizens’ common life should be held accountable to the public good. Online as well as offline, the infrastructure of the public sphere is a critical tool for communication and organization, political expression, and collective decisionmaking. By controlling how this infrastructure is designed and operated, Facebook and Google shape the content and character of our digital public sphere, concentrating not just economic power, but social and political power too. Leading American politicians from both sides of the aisle have begun to recognize this, whether Senator Elizabeth Warren or Representative David Cicilline, Senator Lindsey Graham or President Donald Trump.

Regulating Facebook and Google as public utilities would be a decisive assertion of public power that would strengthen and energize democracy. The public utility concept offers a dynamic and flexible set of regulatory tools to impose public oversight where corporations are affected with a public interest. We show how regulating Facebook and Google as public utilities would offer opportunities for regulatory innovation, experimenting with new mechanisms of decisionmaking that draw on the collective judgment of citizens, reforming sclerotic institutions of representation, and constructing new regulatory authorities to inform the governance of algorithms. Platform regulation is an opportunity to forge democratic unity by experimenting with different ways of asserting public power.

Facebook’s founder and CEO Mark Zuckerberg famously quipped that “in a lot of ways Facebook is more like a government than a traditional company.”[1] It is time we took this idea seriously. Internet platforms have understood for some time that their algorithmic infrastructure concentrates not only economic power, but social and political power too. The aim of regulating internet platforms as public utilities is to strengthen and energize democracy by reviving one of the most potent ideas of the United States’ founding: democracy requires diverse citizens to act with unity, and that, in turn, requires institutions that assert public control over private power. It is time we applied that idea to the governance of Facebook and Google.

utilities-for-democracy.pdf
Dipayan Ghosh. 6/2020. Terms of Disservice: How Silicon Valley Is Destructive by Design. Washington: Brookings Institution Press.

High technology presents a paradox. In just a few decades, it has transformed the world, making almost limitless quantities of information instantly available to billions of people and reshaping businesses, institutions, and even entire economies. But it also has come to rule our lives, addicting many of us to the march of megapixels across electronic screens both large and small.

Despite its undeniable value, technology is exacerbating deep social and political divisions in many societies. Elections influenced by fake news and unscrupulous hidden actors, the cyber-hacking of trusted national institutions, the vacuuming of private information by Silicon Valley behemoths, ongoing threats to vital infrastructure from terrorist groups and even foreign governments—all these concerns are now part of the daily news cycle and are certain to become increasingly serious into the future.

In this new world of endless technology, how can individuals, institutions, and governments harness its positive contributions while protecting each of us, no matter who or where we are?

In this book, a former Facebook public policy adviser who went on to assist President Obama in the White House offers practical ideas for using technology to create an open and accessible world that protects all consumers and civilians. As a computer scientist turned policymaker, Dipayan Ghosh answers the biggest questions about technology facing the world today. Providing clear and understandable explanations of complex issues, Terms of Disservice will guide industry leaders, policymakers, and the general public as we think about how to ensure that the internet works for everyone, not just Silicon Valley.

Adrien Abecassis, Dipayan Ghosh, and Jack Loveridge. 2020. “La crise du coronavirus ébranle aussi l’idée de démocratie et de liberté.” Le Monde.

The coronavirus crisis is putting pressure on our daily lives, our work, and our social relationships. It is also shaking the very idea of democracy and liberty: who would have thought that walking down the street could become, overnight, forbidden and punishable by a fine? More still, who would have believed that such a measure would be widely approved and even, for a considerable number of French citizens seeking authority, judged insufficiently strict? In a few hours, habits and beliefs thought to be deeply rooted were overturned.

Public health imperatives have collided with democratic principles as fundamental as the freedom of movement. Everything suggests they will also come into conflict with the protection of privacy. In Europe, the Czech Republic was the first to announce its intention to deploy a powerful, intrusive location-tracking tool that uses mobile phone location data to follow the movements of virus carriers, in order to trace the contacts those people have had and slow the spread of the pandemic. Germany, Italy, and the United Kingdom have expressed similar interest. Emmanuel Macron has just opened this discussion in France.

op-ed_le_monde_-_covid-19_digital_bill_of_rights.pdf
Dipayan Ghosh. 2020. “Overcoming Disinformation Operations.” The Day One Project.

Internet-based disinformation operations have infiltrated the universe of political communications in the United States. American politics and elections carry major implications for the national and global economy, as well as for diplomatic relations conducted by and with the United States. As a result, the United States is a major target for politically charged propaganda promulgated by both foreign and domestic actors. This paper presents a two-part approach to countering internet-based disinformation.

diminishing_russian_influence_overcoming_coordinated_disinformation_operations_through_federal_policy_ghosh.pdf
Dipayan Ghosh. 2020. “Social media and politics: Towards electoral resilience.” In Resilience. Washington, D.C.: New America.

Television was pivotal in bringing John Fitzgerald Kennedy to the American presidency in 1960. The candidate was charismatic, and his campaign was intelligent in how it exploited the nascent medium. But television did not merely amplify his existing characteristics; it markedly changed the tactics necessary to prevail in an election, and that change in tactics entailed a change in the qualities necessary to be a credible candidate. These changes imposed by the technology on the American electorate in turn quickly reformed the nature of government itself. The nation’s politics were so fundamentally shaped by television that it is now impossible to realistically imagine modern political life without it.[1]

We now have a new technological medium that joins television as a potent and central mechanism for the construction of social reality: the online communication and networking platforms we have come to call “social media.” The leading social media platforms exert influence both directly and in conjunction with television. “Television,” in the 1960s, comprised a small number of powerful companies: NBC, CBS, and ABC. So too “social media” today is both a technological schema and, at least with relevance to national politics, a particular and small set of services offered by an even smaller group of powerful corporations: Facebook, Google, and Twitter.[2]

social_media_and_politics_towards_electoral_resilience._new_america_resilience-_resilience.pdf
2019
Vijeth Iyengar, Dipayan Ghosh, Tyler Smith, and Frank Krueger. 2019. “Age-Related Changes in Interpersonal Trust Behavior: Can Neuroscience Inform Public Policy?” National Academy of Medicine.

In the years to come, there will be a significant global increase in the number of older adults. Some projections indicate that by 2030, there will be more adults age 60 or over than people between the ages of 10 and 24 [1]. It is critical to proactively address the novel challenges that societies will face with a shifting demography. In particular, understanding the neuropsychological changes that take place with advancing age and the effects these changes have on how older adults function and engage with their surroundings will become increasingly important in designing products, programs, and services to support the global population in the face of these inevitable new challenges.

In this paper, we link empirical findings from neuroscientific investigations of interpersonal trust behavior in older adults to incidences of financial exploitation, health care fraud, and digital deception—consumer harms for which older adults are preferentially targeted by bad actors.

Dipayan Ghosh. 2019. “Banning Micro-Targeted Political Ads Won’t End the Practice.” Wired.

A few weeks ago, we were talking about whether companies like Facebook and Twitter should ban all paid political advertising from their platforms. Now the debate has narrowed to a secondary question: Where political ads are allowed, should their micro-targeting be prohibited? Google is the first to make this restriction formal policy: The company announced on Wednesday that it will “stop allowing highly targeted political ads on its platform” and limit the steering of such messages only to large interest categories. On Thursday, news broke that Facebook, too, may soon prevent “campaigns from targeting only very small groups of people.” These policy changes are designed to curb the negative consequences of the disinformation problem and legitimate political advertising—but they may end up doing little good.

For Mark Zuckerberg in particular, one bombshell has chased the last. Last month, he proclaimed at Georgetown University that his company would take a general stance against censorship in the political context—including over any content, false or true, disseminated by politicians as advertisements over his platforms. That laissez-faire approach to content moderation was met with vilification by technology experts and critics alike. A few days later, Twitter chief Jack Dorsey made an equally strong announcement, stating that his company was shuttering all political advertising over the platform, effective later this month—essentially, the diametrical opposite of Facebook’s stated position. On November 4, Zuckerberg agreed to meet over dinner with civil rights advocates who had serious concerns about the potential for uncensored political advertising to undermine the interests of marginalized American communities.

banning_micro-targeted_political_ads_wont_end_the_practice_wired.pdf
Dipayan Ghosh. 2019. “The Commercialization of Decision-Making: Towards a Regulatory Framework to Address Machine Bias over the Internet.” The Hoover Institution.

The consumer internet has exacerbated the discrimination problem. The business model that sits behind the front end of the internet industry is one that focuses on the unchecked collection of personal information, the continual creation and refinement of behavioral profiles on the individual user, and the development of algorithms that curate content. These actions all perpetuate the new Pareto-optimal reality of the commercial logic underlying the modern digitalized media ecosystem: that every act executed by a firm, whether a transfer of data or an injection of content, is by its nature necessarily done in the commercial interests of the firm, because technological progress has enabled such granular profiteering. This novelty in the media markets has created a tension with the public motive for nondiscriminatory policies; where adequate transparency, public accountability, or regulatory engagement with industry practices is lacking, it is directly in the firm’s interest to discriminate should discriminatory economic policies suit its profit-maximizing motive. This paper discusses this technological development and offers policy responses to counteract these harms to the subjects of internet-based discrimination.

the_commercialization_of_decision-making.pdf
Dipayan Ghosh. 2019. “Facebook’s Oversight Board Is Not Enough.” Harvard Business Review.

Following Mark Zuckerberg’s stated commitment to improving his company’s public accountability measures nearly a year ago, Facebook announced detailed plans last month for its new Oversight Board. The body, which the company says will comprise 40 independent experts serving three-year terms, has been described by many as Facebook’s own Supreme Court, as it will adjudicate questions of content policy on the company’s platforms as they arise. The board is designed to have notable independence; in these judgments it can overrule Zuckerberg himself.

Dipayan Ghosh. 2019. “Hold tech companies accountable for fake news.” The Hindustan Times.

In the days after Donald Trump won the United States presidency, it became resoundingly clear that the Russians had engaged in disinformation operations to push millions of potential social media impressions at the American voting population — content that may have swung tens of thousands of critical votes in key swing states across the nation. But when questioned about the nefarious Russian activity by the American public, Facebook chief executive Mark Zuckerberg’s response was predictably defensive; he claimed only “a very small amount [of all the content on Facebook] is fake news and hoaxes”. He added, “The idea that fake news on Facebook…influenced the election in any way is a pretty crazy idea.” And, perhaps most critically, he suggested that, in any case, the firm doesn’t want to be an “arbiter of truth” — in other words, that he did not want to put Facebook in the position of having to determine whether certain forms of content, like targeted political lies, should be taken down from the firm’s platforms or not.

But the time for corporations to shirk this responsibility must come to an end. And it must be the government and the people who hold the corporate sector’s hand through the process — or, if need be, pull the industry by the ear.

hold_tech_companies_accountable_for_fake_news_-_analysis_-_hindustan_times.pdf
Dipayan Ghosh and Joshua A. Geltzer. 2019. “HUD's new lawsuit against Facebook is a dagger at the heart of the consumer internet.” CNN.

Last week, the US Department of Housing and Urban Development took Facebook and the broader internet industry by surprise and storm with a remarkable allegation: that the company has engaged in discriminatory practices that engendered and perpetuated bias against marginalized classes of the American population -- such as non-Christians, immigrants, and minorities -- by displaying housing ads only to selected audience segments in unfair ways.

This charge of housing discrimination might seem like something of a peripheral matter, given core concerns about social media that relate to interference with democracy, terrorist radicalization and social polarization. But HUD's charge takes direct aim at Facebook's fundamental business model: the company's digital advertising management platform and the algorithms underlying it, which collectively enable commercial entities to splice and select the demographic audience segments they wish to target with ads based on race, gender and politics, among other factors.

That makes HUD's lawsuit an important step toward reining in online practices that can otherwise betray decades of important legal and regulatory progress in defending civil rights.

huds_new_lawsuit_against_facebook_is_a_dagger_at_the_heart_of_the_consumer_internet_-_cnn.pdf
Dipayan Ghosh. 2019. “A New Digital Social Contract Is Coming for Silicon Valley.” Harvard Business Review.

Last month, the British parliament released a detailed and forceful report on the disinformation – or “fake news” – problem. The report examined what it described as overreaches of an internet industry composed of leading Silicon Valley firms that, in the committee’s view, is responsible for perpetrating tremendous harms against British citizens.

This report is only the latest incident marking a trajectory toward regulation and legislation that will constrain how these and other firms operate on the web, with regard not only to disinformation but also to broader social and economic concerns, including transparency, privacy, and competition. Businesses need to know what is coming, and what is at stake — and come to the table in Washington ready to contribute to these efforts in an honest negotiation.

a_new_digital_social_contract_is_coming_for_silicon_valley.pdf
Dipayan Ghosh. 2019. “A New Digital Social Contract to Encourage Internet Competition.” Antitrust Chronicle.

Over the past year, the common conception that the lion’s share of the digital advertising market would securely remain the dominion of Facebook and Google for the foreseeable future was turned on its head by the emergence of an entirely new player: Amazon. The company’s entry into digital advertising has raised the prospect that it might present a major challenge to the market power Facebook and Google have developed in the sector over many years.[2]

cpi_-_ghosh_-_final.pdf
Vijeth Iyengar and Dipayan Ghosh. 2019. “Older Adults Are Especially Prone to Social Media Bubbles.” Scientific American.

The past year has starkly illustrated how pervasive and deep-rooted the disinformation problem is in American society. We learned, for example, of the shocking revelation that the information associated with 87 million Facebook users had been illegitimately accessed by Cambridge Analytica. And we have been sequentially disheartened by news of data breach after data breach, each of which has discouraged any faith we might have had that Silicon Valley can effectively regulate itself to fight digital disinformation.

Centrally responsible for the stubbornness of the disinformation problem is the business model that sits at the heart of the internet itself—a business model that is premised on (1) the creation of borderline-addictive web-based services that enjoy a network effect; (2) the unchecked collection of personal data through those services to create behavioral profiles; and (3) the development and implementation of opaque algorithms that curate content in our social feeds and target ads at us.

These practices are as remarkably simple as they are exploitative of our individual autonomy, and they align well with the phenomenon of motivated cognition—the idea that the way in which individuals perceive, interact with, and operate in their environment is biased towards achieving an outcome most favorable to them. This phenomenon is manifested across a variety of societal contexts, including inflated self-appraisals for positive personality traits and the tendency to consume and endorse new information consistent with one’s prior belief system.

older_adults_are_especially_prone_to_social_media_bubbles_-_scientific_american_blog_network.pdf
2018
Dipayan Ghosh. 2018. “Beware of A.I. in Social Media Advertising.” The New York Times.

Nine days ago, we learned that Cambridge Analytica, the firm engaged by the Trump campaign to lead its digital strategy in the run-up to the 2016 United States presidential election, illegitimately gained access to the Facebook data of more than 50 million users, many of them American voters. This revelation came on the heels of the announcement made last month by the Justice Department special counsel Robert Mueller of the indictment of 13 Russians who worked for the Internet Research Agency, a “troll farm” tied to the Kremlin, charging that they wielded fake social media accounts to influence the 2016 presidential election.

But as Facebook, Google, Twitter and like companies now contritely cover their tracks and comply with the government’s requests, they simultaneously remain quiet about a critical trend that promises to subvert the nation’s political integrity yet again if left unaddressed: the systemic integration of artificial intelligence into the same digital marketing technologies that were exploited by both Cambridge Analytica and the Internet Research Agency.

According to the F.B.I.’s findings, the tactics used to date by Russia have, technologically speaking, not been particularly sophisticated. Those tactics have included the direct control of fake social media accounts and manual drafting of subversive messages. These were often timed for release with politically charged incidents in the real world — including, for instance, the suicide bombings in Brussels, the declaration of Donald Trump as the Republican nominee and Mr. Trump’s staging of a town hall in New Hampshire, each of which occurred weeks before election night in 2016. Further, according to various experts, Cambridge Analytica’s targeting efforts likely were tame and ineffective.

opinion_beware_of_a.i._in_social_media_advertising_-_the_new_york_times.pdf
Dipayan Ghosh and Ben Scott. 2018. “Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet.” New America; Shorenstein Center at the Harvard Kennedy School.

The crisis for democracy posed by digital disinformation demands a new social contract for the internet rooted in transparency, privacy and competition. This is the conclusion we have reached through careful study of the problem of digital disinformation and reflection on potential solutions. This study builds on our first report—Digital Deceit—which presents an analysis of how the structure and logic of the tracking-and-targeting data economy undermines the integrity of political communications. In the intervening months, the situation has only worsened—confirming our earlier hypotheses—and underlined the need for a robust public policy agenda.

Digital media platforms did not cause the fractured and irrational politics that plague modern societies. But the economic logic of digital markets too often serves to compound social division by feeding pre-existing biases, affirming false beliefs, and fragmenting media audiences. The companies that control this market are among the most powerful and valuable the world has ever seen. We cannot expect them to regulate themselves. As a democratic society, we must intervene to steer the power and promise of technology to benefit the many rather than the few.

We have developed here a broad policy framework to address the digital threat to democracy, building upon basic principles to recommend a set of specific proposals.

Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of dominant digital media platforms -- including:

  • Real-time and archived information about targeted political advertising;
  • Clear accountability for the social impact of automated decision-making;
  • Explicit indicators for the presence of non-human accounts in digital media.

Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetized -- especially when it comes to sensitive information that shapes political decision-making. A baseline data privacy law must include:

  • Consumer control over data through stronger rights to access and removal;
  • Transparency for users about the full extent of data usage, and meaningful consent;
  • Stronger enforcement with resources and authority for agency rule-making.

Competition: As consumers, we must have meaningful options to find, send and receive information over digital media. The rise of dominant digital platforms demonstrates how market structure influences social and political outcomes. A new competition policy agenda should include:

  • Stronger oversight of mergers and acquisitions;
  • Antitrust reform including new enforcement regimes, levies, and essential services regulation;
  • Robust data portability and interoperability between services.

There are no single-solution approaches to the problem of digital disinformation that are likely to change outcomes. Only a combination of public policies—all of which are necessary and none of which are sufficient by themselves—that truly address the nature of the business model underlying the internet will begin to show results over time. Despite the scope of the problem we face, there is reason for optimism. The Silicon Valley giants have begun to come to the table with policymakers and civil society leaders in an earnest attempt to take some responsibility. Most importantly, citizens are waking up to the reality that the incredible power of technology can change our lives for the better or for the worse. People are asking questions about whether constant engagement with digital media is healthy for democracy. Awareness and education are the first steps toward organizing and action to build a new social contract for digital democracy.

digital_deceit_2.pdf
Dipayan Ghosh. 2018. “Facebook Is Changing How Marketers Can Target Ads. What Does That Mean for Data Brokers?” Harvard Business Review.

Last month, Facebook announced in a brief statement that it will be shutting down Partner Categories, a feature that allows marketers to target ads on the company’s universe of platforms by using third-party data provided by data brokers. The move, which comes during a period of intense scrutiny over the social media giant’s privacy and security practices following the Cambridge Analytica revelations, marks a first-of-its-kind pivot among internet companies. This development could have major repercussions for internet companies and the broader digital advertising ecosystem if the firms at the center of this industry follow suit, collectively distancing themselves from data brokers and increasing transparency into their practices with personal data.

Dipayan Ghosh. 2018. “Facebook Isn’t Silicon Valley’s Only Problem.” The New York Times.

Over the past several months, there has been an onslaught of alarming news about Facebook’s collection and sharing of users’ personal data. The resulting public scrutiny of Facebook is well deserved. But as we scramble to understand the societal harms caused by one Silicon Valley behemoth, we mustn’t turn a blind eye to those instigated by the rest of the technology industry.

Facebook is deservedly the most visible public target right now. In March, the whistle-blower Christopher Wylie revealed that the British political advisory firm Cambridge Analytica had illegitimately gained access to more than 50 million people’s Facebook data through the efforts of a foreign academic, Aleksandr Kogan. The next month, it was reported that the real number was 87 million. Weeks later, Facebook confirmed that it had entered into many more data partnerships with questionable applications, around 200 of which the company suspended. And last month, it came to light that Facebook has had longstanding data-sharing arrangements with no fewer than 60 manufacturers of device technology, partnerships about which political leaders have expressed deep reservations.

But the reality is that the digital wilderness stretches far beyond Facebook, to a much larger tech ecosystem that deserves holistic examination and, potentially, regulation in the days ahead.

opinion_facebook_isnt_silicon_valleys_only_problem_-_the_new_york_times.pdf
Dipayan Ghosh and Ben Scott. 2018. “Facebook’s New Controversy Shows How Easily Online Political Ads Can Manipulate You.” TIME.

The questions surrounding the role of Facebook and other social media sites in the politics of our time have been coming at what feels like an accelerating pace. Reporting by the Observer, the Guardian, and the New York Times in recent days has revealed that Cambridge Analytica — the social media monitoring firm that bragged it helped put Trump in the White House — had gained access before the election to the data of 50 million Facebook users through highly questionable means. Cambridge Analytica used that data to create a tool of “psychological warfare” to manipulate American voters with targeted Facebook ads and social media campaigns. This news has painted the national discussion over social media’s impact on national politics in a stark new light. There was already a debate raging about how targeted digital ads and messages from campaigns, partisan propagandists and even Russian agents were sowing outrage and division in the U.S. electorate. Now it appears that Cambridge Analytica took it one step further, using highly sensitive personal data taken from Facebook users without their knowledge to manipulate them into supporting Donald Trump. This scandal raises major questions about how this could have happened, how it can be stopped and whether the connection between data-driven ads and democracy is fundamentally toxic.

The bombshells are dropping so fast in this story about social media and the 2016 election, it is hard to keep up. Recall that just last week, Washington was aflutter over allegations from Brad Parscale, head of digital media strategy for President Donald Trump’s 2016 presidential run and the man who led the partnership with Cambridge Analytica, who tweeted on February 24 that his boss’ campaign had a massive advantage using Facebook advertising to reach voters. Parscale, who is now chief of Trump’s 2020 efforts, said his candidate’s Facebook ads were 100 or 200 times more cost-effective than those placed by the Clinton campaign for the presidency. Facebook quickly shared proprietary data illustrating that the two campaigns paid roughly the same aggregate sums to reach voters — and that the Trump campaign actually paid more on average than the Clinton campaign.

Now in light of the Cambridge Analytica headlines, it is clear that the price of the advertising wasn’t the real story. The real story is about how personal data from social media is being used by companies to manipulate voters and distort democratic discourse. In this regard, it appears the Trump campaign had a decisive and ill-gotten advantage in the quest to exploit personal data to influence voters. And they used it to the hilt.

new_facebook_scandal_shows_how_political_ads_manipulate_you_time.pdf
Dipayan Ghosh. 2018. “How GDPR Will Transform Digital Marketing.” Harvard Business Review.

This month will see the enforcement of a sweeping new set of regulations that could change the face of digital marketing: the European Union’s General Data Protection Regulation, or GDPR. To protect consumers’ privacy and give them greater control over how their data is collected and used, GDPR requires marketers to secure explicit permission for data-use activities within the EU. With new and substantial constraints on what had been largely unregulated data-collection practices, marketers will have to find ways to target digital ads that depend less (or not at all) on hoovering up quantities of behavioral data.

Consumers have already seen a range of developments resulting from the forthcoming enforcement of GDPR. Among these are the dozens of messages from web-based companies from TaskRabbit to Twitter about privacy policy updates, as well as the recent reports about how major internet companies like Facebook and LinkedIn are moving the personal data associated with non-Europeans out of Europe and into other jurisdictions – the latter of which are largely moves designed to minimize legal liability. But while digital marketers are aware of the strict new regulatory regime, seemingly few have taken active steps to address how it will impact their day-to-day operations.

GDPR will force marketers to relinquish much of their dependence on behavioral data collection. Most critically, it will directly implicate several business practices that are core to current digital ad targeting. The stipulation that will perhaps cause the most angst is the new formulation for collecting an individual’s consent to data gathering and processing; GDPR requires that consent be active (as opposed to passive) and represent a genuine and meaningful choice. Digital marketers know that users of internet-based services like Snapchat, Facebook, and Google technically provide consent by agreeing to these companies’ terms of service when they sign up. But does this constitute an active and genuine choice? Does it indicate that the user is willing to have her personal data harvested across the digital and physical worlds, on- and off-platform, and have that data used to create a behavioral profile for digital marketing purposes? Almost certainly not.

how_gdpr_will_transform_digital_marketing.pdf
