The following opinion piece by Ann M. Ravel, Digital Deception Project Director at MapLight, originally appeared on Blavity.com.
No single group was targeted more than African Americans: that was the conclusion of a Senate Intelligence Committee report which looked at digital deception efforts in the lead-up to the 2016 election. Released early this month, the report has become the latest piece in a mounting pile of evidence that a large part of Russia’s strategy when it interfered in the 2016 election was to suppress the vote, and especially the black vote.
In 2017, when research into Russia’s role in the 2016 election began emerging, efforts to sow division in the American public through hyperpartisan posts were widely reported on. Much less covered was the Internet Research Agency’s (IRA) wide use of voter suppression messages, largely targeting communities of color, especially black and Latinx Americans. Although standard tools used to depress the vote and perpetuate racism in our democracy, such as voter identification laws, a scarcity of polling stations, and voter roll purges, have all been on the rise recently, encouraged by the 2013 Shelby County v. Holder Supreme Court decision to gut the Voting Rights Act, another, more subtle form of voter suppression has also emerged over the last few years: digital deception.
This new tactic has substantial reach. On Facebook alone, the IRA had an audience of 126 million Americans that we know of. One of the Facebook pages the IRA set up, Blacktivist, got 360,000 likes — more than the official Black Lives Matter page. In the run-up to the election, the Blacktivist page ran posts and ads claiming that the candidates didn’t care about black people, encouraging voters to stay home, to not vote, or to vote for third-party candidates.
It wasn’t just Russia either: the head of Trump’s digital campaign (now his 2020 campaign manager), Brad Parscale, has admitted to employing strategies meant to depress voter turnout. Analyses of propaganda surrounding the 2018 midterm elections have found similar instances of voters being told to vote on incorrect days, to vote online, or that their vote does not matter—either explicitly, or via messaging aiming to discourage and dissuade them from participating. More recently, Facebook has disabled IRA-linked fake accounts posing as swing state voters, one of which was using the #BlackLivesMatter hashtag to slam Democratic presidential candidate Joe Biden about his history on race issues.
At the end of October, thanks in large part to efforts by civil rights advocates, Facebook announced new measures meant to combat voter suppression on its platform. They include labeling state-sponsored media, labeling debunked posts, and banning advertisements that discourage voting. Twitter has taken by far the most comprehensive action of any of the platforms so far; in addition to stopping all political advertising globally, the company has stated that “malicious election content” will be removed. However, it has provided little information on what qualifies as malicious content, or how it is detected, meaning the platform remains vulnerable to disinformation and voter suppression efforts.
While the efforts by the platforms to address voter suppression are a small step in the right direction, they fall far short of fully addressing the issue. They do not, for example, address more insidious forms of voter suppression, including manipulative posts or ads by politicians containing lies and disinformation. Civil rights groups have called on Facebook to restore fact checking and labeling for paid and organic posts by politicians. Platforms’ efforts also do not address the pervasive problem they have with fake accounts. Truly combating this problem requires regulation.
A central issue that needs to be tackled is that of microtargeting. Digital voter suppression efforts are only possible because posts can be targeted to specific communities, making such posts harder to find, regulate, and counter. When communications are visible only to the sender and recipient, accountability becomes impossible. In order to address the issue, individuals targeted by political communications need to be able to see exactly who is targeting them and what targeting criteria are being used. Regulations ensuring this transparency — including disclosures on political communications, a comprehensive database of political communications that includes microtargeting criteria, and reporting requirements equal to those that exist for traditional media — are desperately needed.
Additionally, more needs to be done to prevent fraudulent accounts from being created on social media platforms. Despite the fact that foreign actors are banned from participating in American campaigns, loopholes in campaign finance law — including a dearth of information on who is actually paying for political communications and insufficient identity verification policies — make monitoring and enforcing this ban extremely difficult. To correct these problems, the loopholes need to be closed, automated accounts need to be identified as such, and the platforms need to implement systems that will allow for better flagging of ‘inauthentic behavior’.
The bedrock of a representative democracy is participation. When individuals are prevented from voting — whether through increased physical barriers or through digital deception — that undermines our entire political system. It is our job, as a country, to ensure that voter suppression in any form is stamped out. Anything less contravenes the very ideals and values on which the entire country was founded.
Ann M. Ravel is the Digital Deception Project Director at MapLight and previously served as chair of the Federal Election Commission.