Decoder Newsletter: Unceasing Electoral Disinformation

Margaret Sessa-Hawkins | November 23, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Job opportunity!: Before we begin our regularly scheduled Decoder, we’d like to let you know about an exciting job opportunity. MapLight is looking for a new Engineering Manager for Democracy. So, if you are someone who enjoys seeing how software development can be used to combat disinformation (and likes working with really awesome people) check out the full job posting.

  • Electoral disinformation… again: Nearly three weeks after the election, Trump continues to spread disinformation meant to undermine our democracy, going so far as to fire election security director Christopher Krebs. Social media platforms are enabling this behavior. A report found that YouTube videos espousing election fraud have been viewed over 137 million times. The warning labels Facebook has been putting on Trump’s posts also aren’t really working. Kevin Roose summed the problem up succinctly, pointing out that “false things are generally way more interesting than true things. If your system is built around an engagement ranked feed, you can label and fact-check all you want and it won't move the needle much.”

  • Covid disinformation… again: With coronavirus disinformation continuing to proliferate on social media, a new paper in HKS’ Misinformation Review looks at Americans’ propensity to believe different types of COVID-19 misinformation (and analyzes the policy implications). The Forum on Information and Democracy has published a report entitled ‘How to End Infodemics,’ which features 250 recommendations from international experts. To tackle vaccine misinformation, Facebook, Google, and Twitter are now working with a coalition of governments including the UK and Canada. And in a more unusual twist on health disinformation, David Zweig found that journalists weren’t as thorough as they should have been in their reporting of a nurse’s viral story.

  • Congressional hearings… again: Facebook and Twitter’s CEOs were in front of Congress testifying again, this time before the House Judiciary Committee with questions that focused mostly on content moderation. In MapLight, Viviana Padelli and Alec Saslow point out that as long as platforms’ business models remain unchanged, content moderation isn’t going to do much -- legislation is the only way to stem the flow of disinformation. For those interested in a recap of the testimony, The New York Times has an interesting breakdown of statistics, while in CNN, Brian Fung outlined the different tacks Zuckerberg and Dorsey took toward regulation.

  • Where’s YouTube?: In Wired, Evelyn Douek pointed out that one person was conspicuously absent from the hearing: YouTube’s Susan Wojcicki. When it comes to disinformation, YouTube’s strategy has generally been to ignore it. That strategy worked fairly well until attention turned to the electoral disinformation YouTube allowed to spread on its platform.

  • Facebook’s moderators: In an open letter, more than 200 Facebook moderators are objecting to their working conditions. Many moderators -- including some with high-risk household members -- have been brought back into the office in the midst of the pandemic (a decision Facebook defends). The moderators are requesting hazard pay and better medical coverage.

  • Speaking of moderators: In case you missed it among the pandemic and election news, Facebook released its latest Community Standards Enforcement Report Thursday. For the first time, the report appeared in a downloadable format and included data on hate speech. The company claims AI has helped it crack down on hate speech, but in Protocol, Issie Lapowsky argues the report actually supports the argument that human moderators are integral to the company.

  • Disinformation in Spanish: While Latinos were hit heavily by disinformation before the election, social media platforms did not do enough to address the problem, according to advocates. Preliminary data from civic organization Avaaz suggest that disinformation in Spanish is not labeled at the same rate as its English counterpart. 

  • Racism & disinformation: In a similar vein, researchers at the Center for Social Media Responsibility have found that disinformation about racism and the election is appearing on Facebook and Twitter nearly three times as often as coronavirus disinformation.

  • Live-streaming: There has been quite a bit of focus on social media platforms and their role in disinformation, but what about live-streaming? In a new report, the Election Integrity Project looks at the unique challenges live-streaming poses to content moderation. 

  • ‘Fleet’ problems: Twitter introduced ‘Fleets,’ tweets that disappear after 24 hours, on Wednesday. Their potential for use in online abuse was flagged pretty early on. So far, users have discovered that you can tag banned accounts in fleets, as well as spread banned URLs and disinformation.