“Disinformation.” “Computational propaganda.” “Information warfare.” “Digital deception.” In the wake of the 2016 election, terms like these have become far more prevalent in American discourse. That election cycle marked the first time there was widespread recognition of the problems posed by online political manipulation efforts. Despite greater public awareness, though, significant confusion still surrounds the issue. So what, exactly, is digital deception, and how do we combat it?
Digital deception refers to the proliferation of political bots, troll farms, fake social media accounts, networks of disinformation websites, and other methods that—in addition to paid digital advertising—seek to manufacture and manipulate public opinion around political events in the United States and around the world. In the U.S., this includes everything from Russian election interference efforts to the Cambridge Analytica scandal to nontransparent digital advertising run by domestic political campaigns. Such operations work to the detriment of democratic processes by attempting to influence voter behavior, skewing public discourse, and decreasing trust in public institutions.
Microtargeting is one weapon in the arsenal of digital disinformation efforts. Organizations—including domestic campaigns and outside spending groups, as well as foreign entities—can use online data to narrowly target political and other ads to consumers based on very specific lifestyle choices or preferences. They can even link this data to individual voter files. The Republican digital ad agency i360, for example, offers to target ads to individuals based on their age, religion, income, marital status, interests and hobbies, charitable donation preferences, views on specific political issues, social media habits, partisanship level, political persuadability, and propensity to vote. Social media “dark posts” are visible only to the advertiser and their target audience. This allows advertisers to show disparate, manipulative, and contradictory messages to specific groups, and it makes it nearly impossible to refute those messages or hold the advertisers accountable for them.
Of particular concern is the fact that online political manipulation campaigns often target minority groups and seek to inflame polarization around already divisive issues. During the 2016 and 2018 elections, Russian propaganda operations focused on infiltrating Black and Latinx online communities in order to sow political division and discourage political participation. And just days before the 2016 election, the Trump campaign admitted to running voter suppression efforts aimed at women and African Americans, with digital campaigning as a cornerstone of its strategy.
Meanwhile, digital astroturfing—utilizing (often paid) networks of bots or trolls to amplify certain narratives and misleadingly present them as grassroots—skews political discourse to extremes and falsifies public opinion. Social media algorithms that determine what news and topics are recommended to users as “trending” struggle to distinguish between real and fake support, making platforms easy targets for such tactics. The effects ripple out to the news media as well; journalists often turn to social media to identify trending issues on which to report and to find public opinion sources for their stories. In this new era of digital disinformation, however, those sources may not be what they seem: a study out of the University of Wisconsin-Madison found that 32 of 33 major American news outlets had embedded at least one tweet from top Internet Research Agency accounts between 2015 and 2017, mistaking Russian trolls for the American public. This distorts the political conversations that are fundamental to American democracy.
These activities undermine trust in democratic institutions. According to the 2018 Edelman Trust Barometer, there has been a “trust crash” in U.S. institutions. From 2017 to 2018, the American population’s trust in government fell from 47% to 33% and trust in the media fell from 47% to 42%. Among the “informed public” (avid news media consumers), the changes have been even more precipitous; trust in government dropped from 63% to 33% and trust in the media dropped from 64% to 42%. Edelman notes that “The demise of confidence in the Fourth Estate is driven primarily by a significant drop in trust in platforms, notably search engines and social media,” and that this drop is also associated with loss of trust in government leaders.
In order to preserve the democratic principles on which America was founded, we must curb digital deception. But how? In our new policy paper, we argue that allowing technology companies to self-regulate is not adequate. Government and civil society must take the lead. Unfortunately, no single policy can fully address the problem; a number of approaches will likely be needed in combination to combat digital disinformation, from campaign finance law to privacy protections to media literacy efforts. Key to all approaches will be increasing transparency and accountability online. Only by ensuring that the public knows who is trying to shape their opinions can we restore trust in the democratic process.
This post is part of a MapLight series looking at solutions to combat deceptive digital politics. To view other posts in this series, visit maplight.org/category/