Decoder Newsletter: The Coronavirus ‘Infodemic’ Hits Home

Margaret Sessa-Hawkins and Viviana Padelli | October 06, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Coronavirus disinformation: On Thursday evening, Donald Trump tweeted that he and the First Lady had tested positive for coronavirus. Unsurprisingly, the news led to a deluge of disinformation on social media, as the L.A. Times reported. In the wake of the news, social media companies were quick to reiterate that they have policies against posts that wish someone dead. In response, thousands of women and minority users pointed out that platforms (and specifically Twitter) were not nearly as conscientious when they were receiving death threats. Twitter responded in a thread, saying it agrees it must do better.

  • Super-spreader: By coincidence, a few hours before Trump tweeted out his diagnosis, researchers at Cornell had released a paper revealing that the President himself was the largest single driver of coronavirus misinformation. That paper was not the only investigation into Trump’s propagation of disinformation to come out this past week. A working paper released by Harvard’s Berkman Klein Center found that President Trump and Fox News are also the primary drivers of a disinformation campaign around mail-in voting, which could be used to call into question the legitimacy of the election, and to drive down voter turnout.

  • Voter ‘deterrence’ 2016: Speaking of driving down voter turnout, the UK’s Channel 4 News found that the Trump campaign in 2016 disproportionately targeted minorities -- especially Black voters -- on social media in an effort to get them to stay home on election day. Taking a deeper look at the issue as a whole, Vox released a video looking at what those messages look like, why digital voter suppression generally targets Black communities, and how social media contributes to the problem, while Business Insider looked at how Facebook's microtargeting in particular enables voter suppression.

  • Voter deterrence 2020: The New York Times Magazine has an in-depth piece on how the Trump campaign is using social media to build on a decades-long campaign of promoting false claims of voter fraud to disenfranchise Americans. It’s not just social media, though: the Columbia Journalism Review examined how some of the biggest media outlets are aiding in spreading the narrative of voter fraud. An example of the strategy can be seen in Project Veritas’ recent release of a video claiming proof of widespread voter fraud. Researchers concluded that it was part of a coordinated disinformation campaign, and Popular Info analyzed the video’s function as a diversion from the story of Trump’s tax returns.

  • Facebook’s role: The Biden campaign sent a letter to Mark Zuckerberg before the first Presidential debate, saying that Facebook was the largest propagator of voting disinformation, and urging the company to do more, especially when it comes to President Trump’s posts. At the same time, disinformation about Biden was spreading across the platform, a trend that continued well after the debate had concluded. And although Facebook has banned ads that delegitimize the outcome of the election, Media Matters For America found at least 80 ads active on the site that appear to violate the policy.  

  • Tech CEOs subpoenaed: The Senate Commerce Committee voted unanimously Thursday to subpoena the CEOs of Twitter, Facebook, and Google for a hearing which will cover Section 230 of the Communications Decency Act, as well as issues around data privacy and antitrust. Before the subpoenas could be issued, however, the CEOs agreed to testify before the committee on October 28. The move for testimony is part of a concerted Republican effort that could ultimately hinder platforms’ ability to moderate disinformation and hate speech.

  • Section 230: Another aspect of this effort is a recent deluge of legislation dealing with Section 230 of the Communications Decency Act. Slate took a look at the Department of Justice and Sen. Lindsey Graham’s (R-S.C.) proposed legislation, pointing out that while the proposals would enable the taking down of pornography and similar content, they would not protect moderating for hate speech or electoral disinformation. MapLight has joined more than a dozen civil society organizations in calling on members of the Senate Committee on the Judiciary to oppose the measures, which would hamper online platforms’ ability to counter disinformation. Meanwhile, Rebecca Kern of Bloomberg reported that Rep. Jan Schakowsky (D-Ill.) is circulating a draft bill that would hold digital platforms accountable for violating their online terms of service.

  • Democracies can counter disinformation together: As this election is demonstrating, technology companies currently wield an almost unimaginable amount of power over the flow of information. Around the world, individual democracies have been trying to work out models that balance checking this power with protecting free speech. In the MIT Technology Review, Marietje Schaake has proposed that, rather than acting alone, democracies should form a coalition that would agree on regulations and standards for technology companies. Last year, Ann Ravel, MapLight’s Digital Deception project director and former chair of the Federal Election Commission, made a similar point in the Guardian, arguing that the Americas should form an international working group to address the issues of digital deception.

  • Group harm: The QAnon conspiracy is seeping into local groups on Facebook, reports Wired. The spread of the conspiracy theory to these forums could potentially affect the election, especially as it plays out in swing states. Conspiracy theories and extremism are rife throughout groups, which is why MapLight has joined more than a dozen organizations in a campaign to get Facebook to stop group recommendations. Paul Waldman and Greg Sargent wrote an opinion piece in the Washington Post examining the harm of these recommendations. While Facebook has said it will limit content from groups linked to violence, it also recently unveiled a suite of new features meant to expand the public reach of groups, which could intensify existing issues.

  • UNFCK the internet: Mozilla has begun a new campaign titled (pardon our almost-French) UNFCK the internet. The campaign includes a Firefox extension that shares political ads on Facebook to a public database, an extension that prevents Facebook from tracking your web activity, and the recently launched RegretsReporter, which lets users flag harmful YouTube recommendations. (If you’re interested in helpful browser plug-ins, here is another reminder that MapLight has a new Election Deception Tracker which allows you to report deceptive election-related information on Facebook.)

  • Society confronting disinformation: Joan Donovan, Research Director of Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, has released a series of recommendations on how Civil Society Organizations (CSOs) can counter digital disinformation. The recommendations include specific suggestions for how CSOs can go about the three Ds (detecting, documenting, and debunking) of disinformation, with useful tips and examples provided for different situations. As Donovan points out in her introduction, there aren’t many documents of this type around, so this one is especially useful.