
Facebook’s Political Ads Ban Misses the Mark

Viviana Padelli | September 10, 2020

On September 3, Facebook announced new initiatives allegedly designed to safeguard the integrity of the 2020 U.S. election. The policy changes, unveiled in a post by CEO Mark Zuckerberg, include a ban on new political and issue ads in the final seven days of the campaign, a forwarding limit on Messenger to slow the spread of viral misinformation, and the removal of misleading information about voting and of claims “that people will get COVID-19 if they take part in voting.”

Facebook is also preparing for disputes over the election results. To this end, the platform will continue to crack down on groups “that could be used to organize violence or civil unrest in the period after the elections” (e.g., QAnon), and it will proactively connect users to authoritative sources through its Voting Information Center. Specific measures include educating users about the possibility that it might take days to get official results, partnering with Reuters and the National Election Pool, and adding information labels to content that aims to delegitimize the outcome of the election. Facebook will also label candidate or campaign posts that try to declare victory before final results are in, and redirect users to official sources.

While it’s helpful for Facebook to acknowledge it must change its policies to slow the spread of false and deceptive information that harms our democracy, these changes come too late and amount to little more than a drop in the ocean — especially when Facebook’s record on enforcement of existing policies is lackluster at best. Outlined below is MapLight's analysis of exactly where Facebook's new policy falls short of the changes we need to protect the election in November and the long-term future of our democracy.

Facebook's overall business model promotes the spread of misinformation — and tinkering around the edges won't change that. 

Because more screen time amounts to more ads seen (and hence more profits), Facebook’s algorithm amplifies content that maximizes user engagement. That business model inevitably rewards polarizing and outrageous information, which makes people click and share more. Furthermore, because Facebook’s News Feed tends to show users what they are already interested in, the company partitions us into pods of like-minded people. The result is digital disinformation running rampant, the radicalization of political discourse, and a generalized loss of trust in our democratic institutions.

None of the policy changes introduced by the platform meaningfully addresses the fundamental problem: Facebook's entire business model places it at odds with containing harmful and manipulative content. As stressed by Sam Woolley at the Center for Media Engagement at the University of Texas, “social media has horrendously exacerbated polarization and splintering because it has allowed people to become more siloed and less civil […] because they’re behind a wall of anonymity and because they don’t really see consequences for the things they do.” Banning political ads will not unwind the polarization that Facebook helped create, nor will measures that aim to slow the spread of viral political misinformation.

Limiting new political and issue ads one week out is useless at best — and potentially harmful.

Political disinformation on Facebook spreads through two channels: paid political ads, which are exempt from third-party fact-checking, and organic content. A ban on political and issue ads the week before Election Day will do nothing to contain the latter, nor will it stop political ads containing misinformation that are already up and running (or that will be introduced after November 3). If anything, it will strengthen fringe groups’ ability to dominate organically and make it harder for campaigns and groups to react in real time to voter suppression efforts or changing real-world election conditions. As highlighted by ProPublica, Facebook's ban also covers ads purchased by election administrators to inform the public about voting procedures. The ban thus deprives government officials of a key information channel at a moment when voting methods and locations are changing fast amid the COVID-19 pandemic.

Facebook's move will also come too late in the election cycle, as an unprecedented number of voters are expected to cast their ballots by mail before Election Day — and possibly before Facebook’s political ads cutoff date of October 27, 2020. Using data from the Election Administration and Voting Survey, MapLight found that in the 2016 presidential election, an absolute majority of voters in key swing states cast their ballots either by mail or through early voting. For example, in Arizona, one of the battlegrounds of the 2020 presidential race, only 26% of voters cast their ballot in person on Election Day, whereas a staggering 73% used by-mail absentee voting. A minority of voters opted for in-person voting on Election Day in Florida (31%), Nevada (31%), Texas (34%), North Carolina (35%), and Georgia (42%).

The chart below outlines 2016 methods of voting in swing states based on MapLight’s recent report, Election in Peril: Procedural Risks to the 2020 Presidential Election.

Method of Voting in the 2016 Election

Source: MapLight (2020). Election in Peril: Procedural Risks to the 2020 Presidential Election.

The harmful impact of microtargeting still goes unchecked. 

Not only does Facebook allow advertisers to spread political misinformation, but it also allows them to target platform users on the basis of personal information such as age, gender, education, income, multicultural affinity, ZIP code, or interests. While microtargeting can help NGOs, non-profits, and political challengers to reach their audiences in a cost-effective way, it can also enable foreign interference and voter suppression. 

The opt-out model announced by Facebook last January enables users to see fewer political and social issue ads and to stop seeing ads based on custom audience targeting. While a step forward, this model fails to provide comprehensive information to users about how targeting works and to prevent the granular targeting of marginalized communities.

Since Facebook has failed to make meaningful changes to ad targeting, campaigns will still be able to narrowly target groups and individuals with divisive messages without public scrutiny or transparency for voters. 

Facebook’s new rules do little to prevent disinformation about election results.

Given the increase in voting by mail, it is unlikely that we will have clear election results on Election Day itself. As underscored by the National Task Force on Election Crises, “media should help prepare the public for the possibility that it might take days (or longer) to know the winner of the general election,” as well as “ensure accurate coverage on election night and thereafter.”

With four in ten Americans getting news on Facebook, it is essential for the platform to follow the same recommendations that apply to the media and to put in place strong measures to prevent the spread of disinformation about election results.

Unfortunately, Facebook’s decision to provide links to unbiased information rather than remove false and misleading election-related posts — including posts that try to declare victory before final results are in — has the potential to generate confusion rather than dispel it. Furthermore, Facebook’s labeling strategy is applied inconsistently. For once, Zuckerberg should apply Facebook's moderation policies to politicians and be ready to stand behind his words when he claims that “we have a responsibility to protect our democracy.”

There are concrete steps both Facebook and Congress can take to address the problem.

Facebook’s latest announcement proves yet again the company is incapable of effective self-regulation. Congress, which answers to the people and not to shareholders, must step in to provide clear laws to create a healthier online ecosystem — but that is not going to happen before the election.

In the meantime, MapLight has signed onto the Election Integrity Roadmap released this week by Accountable Tech to demand that social platforms immediately implement bold policies to prevent the spread of misinformation leading up to the election. As we approach November, technology companies and social media platforms must act responsibly to ensure the outcome of the election is not determined by the spread of false and deceptive information.