Facebook, with its nearly three billion worldwide users and virtually endless data about the content they engage with and share, has rarely been a friend to transparency or the academic research community. This month, the company took its penchant for secrecy up a level, doubling down on a philosophy that treats independent researchers as enemies to be vanquished rather than partners who can ultimately help the company make its platform less destructive.
At the beginning of the month, Facebook disabled the accounts of several researchers associated with NYU’s Ad Observatory Project, effectively cutting off their ability to investigate how misinformation spreads on the platform. NYU’s Ad Observatory is a browser plug-in that allows Facebook users to voluntarily share limited, anonymous data about the political ads Facebook shows them, along with some basic demographics. Before the ban, this information was collected and made available in a public database, enabling journalists, watchdogs, and civil society to better hold the platform accountable for allowing political advertisers to target susceptible users with deceptive messages.
Under current law, digital political ads still don’t require on-ad disclaimers showing who paid for them, and online advertisers can target narrow subsets of voters with propaganda without having to disclose who is doing the targeting and how. Facebook has developed a searchable database of ads about social issues, elections, and politics, but the NYU researchers showed that this Ad Library routinely omits political ads from its archive and leaves out labels saying who paid for each ad. Furthermore, the Ad Library does not provide detailed information about targeting criteria, preventing consumers and watchdogs from seeing the full picture of how special interests are trying to influence the public and allowing harmful microtargeting to continue unchecked.
Although Facebook attempted to position the decision as a way to protect people’s privacy in line with a Federal Trade Commission order, the FTC quickly debunked the deceptive claim that NYU’s researchers were using unauthorized methods. In reality, the ban is just the latest installment in Facebook’s history of obfuscating its surveillance-based business model. From COVID-19 vaccine disinformation and racist stereotypes to voter fraud allegations, Facebook knows its algorithms amplify hate and conspiracy theories. Yet the company’s executives refuse to curb these problems for fear that doing so would hurt the bottom line.
Just this week, Facebook struck two new blows against independent researchers. First, the company attempted to refute findings from the Center for Countering Digital Hate (CCDH) that just twelve accounts were responsible for spreading nearly three-quarters of the COVID-19 disinformation on the platform. Yet Facebook provided no details about the methodology behind its rebuttal, nor any further insight into the amount of COVID-related misinformation and disinformation circulating on its platforms.
On the same day, the social media giant released a new quarterly report with data on what it says is the most viewed content on Facebook. The report is part of an effort to push back against a Twitter account, run by Kevin Roose of the New York Times and based on engagement data from Facebook-owned CrowdTangle, that routinely shows right-wing content among the most popular posts on the platform. But Facebook’s new report does more to muddy the waters than provide any real transparency. It fails to give a full picture of what content users actually saw, how they interacted with that content, or how it got into their feeds in the first place. And by aggregating data over a three-month span, the report denies researchers the granular data needed for any real insight.
That’s why it’s critical that our elected officials swiftly advance federal legislation establishing the conditions for researcher access to social media data, so that access no longer depends on the changing whims of tech executives. Two such bills are the Social Media DATA Act (H.R. 3451), introduced in May by Reps. Lori Trahan (D-MA) and Kathy Castor (D-FL), and the Algorithmic Justice and Online Platform Transparency Act (S. 1896/H.R. 3611), introduced by Sen. Ed Markey (D-MA) and Rep. Doris Matsui (D-CA).
Both bills represent a major step toward advancing scientific research, giving independent researchers valuable insight into platforms’ ad targeting and content moderation practices, and ultimately holding powerful social media companies like Facebook to account.