Decoder Newsletter: Online Voter Suppression Amps Up

Margaret Sessa-Hawkins | August 31, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Black voters are being targeted by disinformation as the election nears. This past week, Twitter removed fake accounts purporting to belong to Black people who had abandoned the Democratic party, after a tweet from one of the accounts went viral. The fake accounts have been accompanied by an uptick in ‘copypasta,’ in which posts are copied verbatim and tweeted out across multiple accounts. In response, Twitter said it may limit the visibility of tweets that use this tactic, although it did not specify which tweets would be hidden or how. The efforts mirror those of 2016, when Russian operatives used disinformation in voter suppression campaigns that targeted Black Americans in particular.

  • Other instances of Black voters being targeted include FreedomWorks -- the dark money group best known for helping launch the tea party -- boosting a website called Protect My Vote, which used LeBron James’ image as part of a disinformation campaign about mail-in voting. Facebook deleted Protect My Vote’s page after James, who advocates for Black voting rights, tweeted about it; Google, however, chose to leave the group’s advertisements up. In Michigan, the Attorney General and Secretary of State are investigating robocalls that used “racially charged stereotypes” to deter residents from voting by mail. The Daily Beast found evidence that the calls also occurred in Pennsylvania.

  • Facebook, meanwhile, is still illegally allowing discriminatory ads, despite having supposedly banned the practice a year ago, a new investigation from The Markup finds. A May job ad posted by a healthcare company was targeted at users under 55 with an “African American multicultural affinity,” even though federal law prohibits employers from discriminating based on age or race. In a settlement last year, Facebook had already agreed to stop letting housing, job, and financial services advertisers target ads based on gender, age, or “multicultural affinities.”

  • MapLight has become a plaintiff in a lawsuit challenging the Trump administration’s executive order that sought to punish social media companies for fact-checking the President. Protect Democracy, along with the Electronic Frontier Foundation (EFF) and Cooley LLP, is challenging the ‘Executive Order on Preventing Online Censorship’ in federal court on behalf of MapLight, Common Cause, Free Press, Rock the Vote, and Voto Latino, on the grounds that it violates the First Amendment. The order is meant to intimidate social media companies into no longer moderating the President’s posts -- particularly their corrections of his false statements about the election and mail-in voting.
  • Facebook is facing criticism for failing to take down the page of a self-proclaimed militia group in Wisconsin. The Kenosha Guard page, which issued a ‘call to arms’ in advance of protests in Kenosha last week, was reported to Facebook at least 455 times, according to an internal report viewed by BuzzFeed News. Moderators reviewed the page twice, and both times decided it did not violate the platform’s policies. Facebook finally removed the page Wednesday morning, after two people were shot and killed at the protests Tuesday evening.

  • Facebook CEO Mark Zuckerberg has since acknowledged the site was wrong not to take down the page, blaming an ‘operational mistake’ for the failure. The company has now banned militia groups and pages that advocate violence, but Shirin Ghaffary at Recode reports that they still proliferate on the site. And while the platform has blocked searches for the name of the alleged shooter, Julia Carrie Wong reports in The Guardian that praise for him continues to spread despite Facebook’s supposed ban on posts supporting mass shooters.

  • Coronavirus misinformation is becoming an issue in some Latin American Christian communities. In First Draft, Jaime Longoria, Daniel Acosta Ramos, and Madelyn Webb examine how fake cures and conspiracy theories promoted by religious leaders have proliferated on social media. In one case, the page of a Mexican pastor promoting chlorine dioxide -- which can kill those who ingest it -- as a coronavirus cure was viewed 2.2 million times and shared 57,000 times.

  • Facebook finds itself caught in a controversy over hate speech in India. The controversy began when a BJP legislator in the state of Telangana posted derogatory comments about Rohingya refugees and Muslims on his Facebook page and Facebook declined to remove them. The company’s top public policy executive in India, Ankhi Das, then informed staff that hate speech rules should not be applied to BJP allies. The episode has raised questions about the company’s ties to the ruling party, especially after The Wall Street Journal reported that Das has made several internal posts over the years indicating her support for the BJP. Facebook executives will appear before an Indian parliamentary information technology committee this week to answer questions about the incident.

  • Facebook’s ‘kill switch’ would do little to address disinformation, argues Nina Jankowicz in Wired. Following reports last week that the company is considering a kill switch to shut down all political advertising in the event of a disputed presidential election result, Jankowicz points out that advertisements are not the main way disinformation spreads online, and that inconsistent enforcement already keeps Facebook’s policies from being fully effective.