NEWS

Decoder Newsletter: Facebook’s Civil Rights Failures

Hamsini Sridharan | July 09, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. If you're interested in receiving the Decoder in your inbox, sign up here.

  • On June 26th, in response to mounting public pressure, Mark Zuckerberg announced several policy changes to how Facebook addresses online hate and voter suppression, including the decision to begin labeling content from politicians that violates its community standards.

  • These changes are not enough to address civil rights concerns. Today, the auditors conducting a two-year civil rights audit of Facebook released their final report. Responding to Facebook’s recent refusal to moderate posts by President Trump, they expressed serious concern about the company’s turn toward a “particular definition of free expression, even where that has meant allowing harmful and divisive rhetoric that amplifies hate speech and threatens civil rights.” The report covers Facebook’s accountability structure, elections and census policies, content moderation, diversity and inclusion, advertising practices, algorithmic bias, and privacy.

  • Responses: In The New York Times, Mike Isaac and Tiffany Hsu capture civil rights leaders’ disappointment with Facebook following a meeting yesterday to discuss the demands of the #StopHateforProfit campaign. For MapLight, I recap some of the ways that Zuckerberg’s recent announcements fail to address systemic issues with the platform. Steven Renderos, Executive Director of Media Justice, digs deeper into why such policy tweaks remain inadequate. Meanwhile, in Fast Company, Alex Pasternack contextualizes Facebook’s voting information campaign against the platform’s massive influence and prior experimentation on voters. Finally, in the Washington Post, Elizabeth Dwoskin, Craig Timberg, and Tony Romm dissect Facebook’s history of changing its policies to accommodate Trump, starting during his candidacy in 2015.

  • Hundreds of advertisers are officially on board with the #StopHateforProfit boycott, with several others separately boycotting Facebook. In the Wall Street Journal, Sahil Patel and Nat Ives review years of tension between the company and its advertisers. Protocol’s Sofie Kodner, meanwhile, looks at the changes that YouTube made in 2017 to appease advertisers similarly concerned about their brands appearing alongside hateful content. 

  • Banhammered: The past couple of weeks have seen social media platforms take on white supremacists and other extremists for hate speech and violence. Facebook announced that it had disrupted a network of accounts, pages, and groups affiliated with the boogaloo movement, which it officially designated as a “dangerous organization.” (But only after running boogaloo ads promoting civil war for months, as Ryan Mac and Caroline Haskins observe in BuzzFeed.) At The Verge, Julia Alexander reports that YouTube deplatformed channels belonging to Stefan Molyneux, David Duke, Richard Spencer, and others, while Casey Newton discusses Reddit’s ban of r/The_Donald and 2,000 other subreddits under its new rules against hate speech.

  • Facebook Groups: In WIRED, disinformation researchers Nina Jankowicz and Cindy Otis explore how Facebook’s turn to privacy has led to the promotion of Groups that spread political disinformation and extremism with little oversight. Jankowicz and Otis argue that Groups should be made more transparent, and that algorithmically generated Group and Page recommendations should be eliminated. In the Guardian, Julia Carrie Wong examines how QAnon conspiracies have taken root in Facebook Groups, while at CNet, Dara Kerr and Shara Tibken expose private groups using “Justice for George Floyd” as a cover for racist behavior. And of course, the boogaloo movement has also flourished in this medium, as Tonya Riley reports for the Washington Post.

  • How do we know all this? In a special report for Columbia Journalism Review, Jacob Silverman talks to journalists who have been on the Facebook beat for years, exposing the company’s PR obfuscation tactics. And in the Guardian, Wong writes about the painful harassment she has faced as a result of reporting on white nationalist organizations on Facebook — and how that stacks up against Facebook’s current messaging about hate speech.

  • In The Wall Street Journal, Emily Glazer and Michael C. Bender report that the Trump campaign is considering alternative social media platforms and a greater emphasis on its own app, out of concern over (slightly) more stringent moderation by major platforms. One such app is Parler, an alternative to Twitter that many conservative influencers are promoting due to its laxer content policies, writes Ari Levy for CNBC. Meanwhile, the Trump campaign has already invested heavily in the Trump 2020 app, which UT Austin researchers Jacob Gursky and Samuel Woolley note hoovers up user data while sharing “questionable” news.

  • Political activity on TikTok is in the limelight due to organizing by teens and K-pop fans to disrupt Trump’s Tulsa rally a few weeks ago; Taylor Lorenz, Kellen Browning, and Sheera Frenkel have the story for The New York Times. Also in the Times, John Herrman talks to researchers Ioana Literat and Neta Kligler-Vilenchik, who offer a more complex narrative of TikTok users’ political interests, while Frenkel and Cecilia Kang discuss the resurgence of the PizzaGate conspiracy theory on the platform. And for MIT Tech Review, Abby Ohlheiser offers a nuanced exploration of K-pop stan activism online.

  • In a new discussion paper for the Harvard Kennedy School Shorenstein Center, Color of Change’s Brandi Collins-Dexter breaks down COVID-19 conspiracy theories circulating in Black communities online, which further threaten the health of people who are already disproportionately affected by the pandemic. 

  • Digital media literacy: New research in PNAS from Andrew Guess et al. finds that providing people with tips on how to identify false news significantly improves their ability to distinguish between mainstream and false news headlines. In that vein, check out tips from MapLight’s Ann Ravel and Margaret Sessa-Hawkins on how to recognize and refute disinformation, and, importantly, to work towards reform.