NEWS

Decoder Newsletter: Platforms, Presidents, and Protests

Hamsini Sridharan | June 22, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we track the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. If you're interested in receiving the Decoder in your inbox, sign up here.

  • Should social media platforms moderate posts by political leaders, such as President Trump, when they spread disinformation about voting or incite violence? In Axios, Sara Fischer offers a useful recap of Twitter, Facebook, and Snapchat’s responses to Trump’s recent problematic posts. (Worth noting: this morning, Trump yet again tweeted damaging and unsubstantiated claims about mail-in voting — with no warning label from Twitter as of this writing.) 

  • MapLight’s response: In TechCrunch, Ann Ravel, Margaret Sessa-Hawkins, and I call on platforms to enforce their policies against voting disinformation and incitement of violence, even when the offender is a political leader: “Trump’s social media posts are but the latest installment in a long, ugly history of voter suppression and violence against protestors, much of it targeting Black communities in the United States. Put together, the events of the past week bring into stark relief how social media has become a front in such attacks on democracy — and show how much more must be done to address digital disinformation.” 

  • More discussion: In The Hill, public policy expert Philip M. Napoli (Duke) argues that Trump’s executive order, issued in response to Twitter’s moderation of his posts, may harm the case for legitimate, necessary regulation of social media platforms. For Slate, Renee DiResta (Stanford Internet Observatory) clearly lays out the distinction between fact-checking the president and politically biased censorship. Also in Slate, political communications researcher Bridget Barrett (UNC) explores the weaknesses in platforms’ responses to voting disinformation that this incident illuminates. And in The Atlantic, Evelyn Douek (Harvard) discusses the need for greater transparency into why and how such moderation decisions are made and notes that “the debate about content moderation needs to move beyond taking things down versus leaving them up.”

  • Meanwhile, the backlash against Facebook’s refusal to moderate Trump’s problematic posts has been furious. The New York Times reports that employees staged a virtual walkout earlier this month, and former Facebook employees have spoken out as well. Per the Washington Post’s Nitasha Tiku, more than 140 scientists funded by the Chan Zuckerberg Initiative have also condemned Facebook’s inaction. Moreover, the Biden campaign has taken Facebook to task, as Cecilia Kang reports in The New York Times. (Check out MapLight’s statement on the Biden campaign’s stance.) And, as Lauren Feiner notes for CNBC, civil rights leaders have resoundingly expressed disappointment with Facebook.

  • Advocacy groups are channeling the outcry into action. Last week, a coalition of civil rights and tech policy groups, including the NAACP, the Anti-Defamation League, Color of Change, Free Press, Sleeping Giants, and Common Sense, launched a campaign urging advertisers to #StopHateForProfit by boycotting Facebook. The North Face and REI are already on board, according to Jonathan Roeder at Bloomberg. At Axios, Ina Fried reports that a new nonprofit called Accountable Tech has launched an ad campaign targeting Facebook employees, calling out the platform’s hypocrisy vis-à-vis Trump’s posts. And media watchdog Media Matters may also be gearing up for a campaign against Facebook, write Jessica Toonkel and Alex Heath at The Information.

  • Facebook’s move: To deflect attention from this backlash, Mark Zuckerberg last week announced a new voter information and registration initiative and pointed to the platform’s work to improve election integrity, while doubling down on the arguments underlying the platform’s lack of response to Trump’s dangerous posts (see Alex Hern and Julia Carrie Wong in The Guardian for further context). At the same time, in a sign of the byzantine intricacy of its moderation policies, Facebook did remove ads run by the Trump campaign that used Nazi iconography as part of online messaging about antifa, writes Isaac Stanley-Becker for the Washington Post. In an added twist, the platform has decided to let users opt out of seeing political ads, per Mike Isaac at The New York Times.

  • In MIT Technology Review, Joan Donovan (Harvard) discusses the dangerous overlap between groups spreading conspiracy theories about the Black Lives Matter protests and those perpetuating pandemic misinformation. As Brandy Zadrozny and Ben Collins observe in NBC News, despite rumors and fear-mongering from political leaders, there are few indications that “outsider” extremist groups are behind the civil unrest. But conspiracy theories persist; for The New York Times, Davey Alba documents three major narratives. The Atlantic Council’s DFR Lab delves into far-right social media efforts to falsely cast the protests as the work of antifa, while at Mashable, Matt Binder looks at how this narrative played out in Facebook Groups. At NBC News, Collins, Zadrozny, and Emmanuelle Saliba reveal that at least one purported antifa Twitter account was actually operated by white nationalists; meanwhile, Facebook deactivated 200 accounts linked to white supremacist groups trying to instigate violence at the protests, per Kim Lyons at The Verge.

  • How can platforms do content moderation well? Researchers at First Draft News offer a raft of evidence-backed tips for moderating manipulated and synthetic media. Meanwhile, Paul M. Barrett (NYU Stern Center for Business and Human Rights) dives into the plight of platforms’ human content moderators, with recommendations for expanding and improving the labor processes of moderation. And the Open Technology Institute’s Spandana Singh and K.J. Bagchi examine platforms’ responses to COVID-19 misinformation, with recommendations for amplifying authoritative information, reducing the spread of misinformation, altering and enforcing ad policies, and providing heightened transparency around COVID-19 content moderation efforts.