Decoder Newsletter: Facebook Trying to Avoid Scrutiny

Margaret Sessa-Hawkins and Viviana Padelli | October 26, 2020

Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!

  • Facebook Threatens Ad Observatory: Facebook has demanded that academics at New York University stop work on a project researching the company’s political ads. The NYU Ad Observatory uses a browser plug-in to collect information about which ads are running on the platform, as well as how they are targeted. However, the company has said that because the plug-in “scrapes” data from Facebook, it violates the company’s terms of service. The Knight First Amendment Institute at Columbia is representing the researchers in their response to Facebook’s cease-and-desist letter. Alex Abdo of the institute has a good tweet thread summing up the situation and highlighting a few articles stemming from the Ad Observatory’s research (another interesting article is BuzzFeed’s report on Facebook’s failure to label political ads).

  • Election Disinformation Ads Still Running: Facebook is still failing to fully enforce its policies against voting disinformation. MapLight’s Frank Bass and Bergen Smith found that, despite having a policy against such content, Facebook is still allowing advertisements that cast doubt on the legitimacy of the election to run. For those interested in helping to track deceptive advertisements on Facebook, MapLight has a new (scrape-free!) tool called the Election Deception Tracker, which allows anyone to quickly and easily report deceptive or problematic election-related information on Facebook.

  • Foreign Interference: Both Iran and Russia have accessed U.S. voter registration information in an effort to interfere with the election, according to Director of National Intelligence John Ratcliffe. Iran used the information to send out a series of e-mails purporting to be from the far-right group the Proud Boys. Ratcliffe said the campaign was meant to intimidate voters and damage President Trump -- though he offered no proof of the latter claim, and Democrats strongly objected to it. Despite the efforts by Iran, many U.S. officials still view Russia -- which has hacked into U.S. computer systems and is most likely planning to interfere close to or after election day -- as the greater threat. Yochai Benkler, though, has your weekly reminder that domestic disinformation still probably poses the greatest threat to the election.

  • Google Antitrust: On Tuesday, the Justice Department sued Google on the grounds that it relied on anticompetitive tactics to gain an outsized market share of searches. The New York Times had earlier reported that the suit was potentially being rushed so it could be announced under the Trump administration. Former Google Chief Executive Eric Schmidt criticized the suit as ‘misguided’ but did say that social networks, broadly speaking, need to be regulated. The Washington Post reports that the suit will be a test of Washington’s ability to keep global tech giants in check, while Politico analyzes why European regulators have failed to curb Google’s dominance -- and why the U.S. could succeed. In the middle of a phenomenal special report on the impact of technological monopolies, Jessica Gonzalez and Timothy Karr of Free Press look at other actions (beyond antitrust) to rein in Google and Facebook.  

  • Disinformation Targeting Latinos: Multiple outlets have been reporting on the ‘deluge’ of disinformation confronting Latino voters as election day nears. In FiveThirtyEight, Kayleigh Rogers and Jaime Longoria profile a New York City gamer who started three sites aimed at Latino voters. In The New York Times, Patricia Mazzei and Jennifer Medina look at how disinformation is pitting Black and Latino voters against each other. In Fast Company, however, Michael Grothaus reports that LeBron James’ ‘More than a Vote’ initiative is teaming up with Win Black to challenge misinformation that tries to keep Black and Latino communities from voting.

  • Voter Protection Preclearance: MapLight has joined a dozen other organizations in calling for social media companies to more rigorously protect voters by implementing preclearance systems. Under the current system, electoral misinformation from high-profile accounts has often been seen millions of times before it is addressed. However, if a preclearance system were instituted for accounts with more than 250,000 followers and a history of sharing content that violates election integrity rules, the damaging spread of voting disinformation would be limited. You can show your support for the initiative here.

  • Section 230 (Democrats): Democrats have proposed a new bill amending Section 230 of the Communications Decency Act. The Protecting Americans from Dangerous Algorithms Act would strip immunity protections from companies that use ‘radicalizing’ algorithms, so that they can be sued in cases alleging civil rights violations. In a statement, lawmakers specifically cited a suit launched last month by four people involved in the Kenosha protests, which alleges Facebook denied their rights through its role in empowering militias. MapLight supports the bill, with Digital Deception Project Director Ann Ravel stating, “We applaud the leadership in Congress that recognizes the dangerous role of social media platforms in fostering both online and offline radicalization.”

  • Section 230 (Republicans): Republicans have also proposed a new change to Section 230. Sen. Kelly Loeffler (R-Ga.) has introduced the Stop Suppressing Speech Act of 2020, which would narrow protections for platforms that engage in content moderation. Meanwhile, the Senate Judiciary Committee has approved subpoenas for Twitter’s and Facebook’s CEOs to testify about allegations of anti-conservative bias in the wake of their handling of the now-infamous New York Post story. In Fast Company, Maelle Gavet argues that these efforts around Section 230 are fundamentally misguided, and that instead of using the immunity as a shield (as they have been), tech companies should use it as a sword that gives them the freedom to engage in content oversight without consequence.

  • The Media Manipulation Casebook: A team of researchers led by Joan Donovan of Harvard’s Shorenstein Center on Media, Politics and Public Policy has released a trove of case studies into dozens of media manipulation events. The ‘Media Manipulation Casebook’ features summaries of each campaign that look at which networks the disinformation spread on, who it targeted, what strategies were used to disseminate it, and how it was eventually mitigated. The team says the information is “for researchers, journalists, technologists, policymakers, educators, and civil society organizers who want to learn about detecting, documenting, describing, and debunking misinformation.”

  • Citizen Browser Project: The Markup has announced a new initiative called ‘The Citizen Browser Project’, which will attempt to measure how disinformation spreads across social media platforms. The project will pay 1,200 people to install a custom web browser on their desktops, which will audit what information social media platforms choose to amplify and suppress to a particular user, as well as which online communities they are encouraged to join. The Markup has teamed up with The New York Times to analyze the data and report on the findings.

  • Facebook Lobbying Increase: MapLight found that while donations from Facebook’s Political Action Committee have fallen sharply this year, the company’s lobbying has exploded. While Facebook’s PAC has doled out less than half as much this election cycle as it did in 2016, the company spent more than $10 million through June lobbying on almost three dozen measures, including the Honest Ads Act -- a measure to increase transparency in online political advertising.

  • Facebook’s ‘Supreme Court’: Facebook’s independent Oversight Board is now operational. The board announced Thursday that it was ready to hear appeals on content moderation decisions. Composed of academics, policy experts, and independent advisors, the board said its focus will be on “protecting human rights and free expression over the long term.” Notably, because the board will select its first cases within a few weeks, it will not have time to hear any appeals on election-related material before the election itself. Over in Platformer, Casey Newton looks at how the board was formed, and what its effect on Facebook could be going forward.

  • #DoYourJob: The “Real Facebook Oversight Board” (not to be confused with the actual Facebook Oversight Board that launched this week) has started a #DoYourJob campaign. The campaign demands the platform take specific steps to ensure it does not incite violence or undermine the election, including pausing group recommendations, banning content that incites violence, and demoting the top 100 misinformation-spreading groups. Similarly, Mozilla has published an open letter to Facebook and Twitter asking them to pause key recommendations that amplify misinformation. The concerns behind these campaigns are echoed by a majority of Americans, who, according to a recent poll by Accountable Tech and GQR, feel that social media could be used in the aftermath of the election to incite real-world violence.