Decoder Newsletter: Social Media & Online Speech

Margaret Sessa-Hawkins and Viviana Padelli | October 19, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • That story: Questions of how much power social media companies have over online speech came to a head last week in the wake of platforms’ actions against a questionable New York Post story (which many members of the paper’s own staff objected to, and which has now been linked to a possible disinformation campaign orchestrated by Russian intelligence). Twitter blocked users from tweeting links to the story, only later explaining that the article violated its policies on hacked materials and private information. After backlash, Twitter announced it was changing the ‘hacked materials’ part of the policy, with Twitter CEO Jack Dorsey calling the company’s action ‘unacceptable’. Facebook, meanwhile, announced that the story was eligible for fact-checking and temporarily reduced its distribution on the platform.

  • Analysis: Predictably, the moves generated a fair amount of controversy. In Poynter, Cristina Tardáguila wrote that the platforms’ reaction caught the International Fact-Checking Network off guard. Casey Newton, though, pointed out that while there’s legitimate room for criticism, the platforms’ response represented a fairly quick reaction to a potential hack-and-leak operation. The MIT Technology Review looked at how the situation was a no-win for social media platforms, while in The Verge, Adi Robertson reminded everyone that this is just further confirmation that social media companies have an unnerving amount of power over online speech. (The New York Times also wondered where YouTube was in all of this.)

  • Politicians react: As expected, Republicans were not pleased with the platforms’ actions. On Tuesday the Senate Judiciary Committee will vote on subpoenaing Twitter CEO Jack Dorsey and Facebook CEO Mark Zuckerberg over their handling of the matter. (Zuckerberg, Dorsey, and Google CEO Sundar Pichai will separately testify before the Senate Commerce Committee on October 28 about Section 230 of the Communications Decency Act.) Sen. Josh Hawley has also asked the Federal Election Commission to investigate possible election law violations by the platforms. While affirming the need for “legislation that holds companies like Facebook and Twitter accountable,” Rep. Jan Schakowsky (D-Ill.) said recent Republican efforts against the platforms are “designed to silence fact checking and allow the spread of misinformation to continue.”

  • Decreased traffic: Raising more questions about social media companies’ handling of online speech, the Wall Street Journal reported that Facebook apparently “choked traffic” to left-leaning news sites like Mother Jones, a change reportedly approved personally by CEO Mark Zuckerberg. The change was meant to balance out an algorithm alteration that executives thought would hit right-leaning sites harder. On Twitter, Mother Jones CEO Monika Bauerlein posted a thread detailing the effects she believes the change has had.

  • FCC jumps in: In the wake of the Post incident, Federal Communications Commission chair Ajit Pai announced that the agency will seek to regulate social media companies as President Trump requested in an executive order earlier this year. In Techdirt, Mike Masnick has a strongly worded analysis of why Pai’s announcement -- which involves trying to strip back the immunity granted by Section 230 of the Communications Decency Act -- is full of hot air. Brian Fung of CNN also wondered whether Pai even had the votes to start a rulemaking on Section 230.

  • Misinterpreting Section 230?: Pai’s announcement came after Supreme Court Justice Clarence Thomas used the Court’s decision not to hear a Section 230 case (Malwarebytes Inc. v. Enigma Software Group) as an opportunity to issue a 10-page opinion about the legal shield. Although Justice Thomas argued that the Supreme Court was correct in passing on the case, he also said that courts have been interpreting Section 230 in a way that gives online platforms broader legal protections than the law requires, and that the Court should examine the issue when a better case presents itself. In Popular Information, Judd Legum analyzed the opinion and its legal reasoning.

  • Disinformation and Black communities: Twitter has suspended a network of more than two dozen fake accounts purporting to be Black Trump supporters. The so-called “digital blackface” accounts had generated more than 265,000 retweets in just a few days before Twitter suspended them, showing how quickly disinformation can circulate widely. Separately, April Glaser reports for NBC News that Black voters are being subjected to ‘voter depression’ campaigns on social media, which aim to convince them that voting is futile. On Friday, a group of Black scholars, activists, and communications specialists announced the launch of the “National Black Cultural Information Trust,” which will combat disinformation targeting Black communities.

  • Vaccine disinformation: Facebook announced Tuesday that it would be blocking any ads that discourage people from getting vaccinated. In Stat, Erin Brodwin points out that the policy does not address groups or pages -- the main vectors for the spread of vaccine misinformation. A new study examining susceptibility to coronavirus misinformation across countries finds that higher susceptibility to misinformation is associated with lower self-reported compliance with public health guidance about COVID-19, as well as decreased willingness to get vaccinated against the virus.

  • YouTube & Q: YouTube announced Thursday that it was joining the coterie of platforms cracking down on the QAnon conspiracy theory, saying it would remove “conspiracy theory content used to justify real-world violence.” At Platformer, Casey Newton pointed out that YouTube is frequently the last of the ‘big three’ social media platforms to make policy changes to control disinformation. TikTok also told NPR Sunday that it would be tightening its own policies on the conspiracy theory. For those wanting to keep tabs on platforms’ policy changes, Mozilla has debuted a policy tracker that looks at how Facebook, Instagram, Google, YouTube, Twitter, and TikTok are addressing disinformation.

  • Washington strikes again: Twitter has agreed to pay the state of Washington $100,000 after admitting it broke the state’s campaign finance laws, which require all political ad sellers to disclose details about an ad’s reach and who is paying for it. In August, a judge also denied Facebook’s attempt to dodge its second lawsuit in the state for violating the same campaign finance regulations. Facebook had banned all political advertising in the state after the first lawsuit -- but was unable to enforce the ban. That failure has Eli Sanders, in his Wild West newsletter, questioning whether the company will be able to enforce its nationwide political ad ban.

  • Advertising & disinformation: Looking to increase ad transparency, 15 researchers have proposed a new standard for advertising disclosures, arguing that the digital ad archives currently maintained by social media companies are inadequate (Facebook’s ad spending numbers don’t even add up). A detailed description of the data the researchers say should be made available is hosted at the Online Political Transparency Project. A new study of the U.S. (dis)information ecosystem by the Global Disinformation Index finds that sites propagating electoral disinformation rake in about $1 million in ad revenue each month, with Google accounting for 71% of those advertising dollars.