Facebook, Google and Twitter’s political ad policies are bad for democracy

Ann M. Ravel | December 18, 2019

The following opinion piece is by Ann M. Ravel, Digital Deception Project Director at MapLight.

As the 2020 election nears, the big three social media companies have been scrambling to solidify their approaches to political advertising.

Last month, Google, which owns YouTube, announced that it would limit certain types of political ad targeting, but made no change in allowing politicians to run false ads. Twitter made waves by announcing that it would ban certain types of political advertising on its platform. And Facebook announced that it would not fact-check ads by politicians — but would still fact-check other advertisements.
While Twitter's gamble seems to be paying off in the court of public opinion, Google's and Facebook's decisions are drawing criticism. In the long run, however, all of the platforms' decisions are bad for democracy and show why social media companies shouldn't be self-policing when it comes to political ads.
On the surface, Twitter's announcement may seem like a proactive way to deal with a broken digital political advertising system. However, there are many problems that indicate Twitter should reverse its ban on advertising by political candidates, PACs and super PACs. It is likely, for example, that the ban will favor incumbents and establishment candidates with deeper pockets (who can afford television advertising) over smaller challengers. Since digital ads are cheaper than radio or television spots, non-incumbents often rely on them to reach voters, mobilize volunteers and raise money.
Moreover, Twitter still plans to allow certain groups to advertise on issues it deems political — but only so long as they do not advocate for or against political or legislative outcomes. This is likely to create enforcement challenges and cause confusion for groups seeking to educate the public about issues.
Facebook's decision not to fact-check content posted by politicians, whether paid or organic, is no better. The policy has rightly provoked widespread outrage from lawmakers, candidates, civil rights organizations and even the company's own employees. Free speech isn't the same as paid speech, content posted by politicians can seem trustworthy even when it's not, and the policy could undermine trust in the platform — and our democracy. Amid the outrage, Facebook is considering changing how political ads are targeted and labeled. But as long as it stands by its no-fact-checking policy — which it so far has done — Facebook has given tacit approval to those running for office to say anything they like, while not granting the same privilege to others.
Google's recent decision to limit microtargeting — a technique allowing advertisers to target messages to very specific and narrow groups, or even individuals — has also been met with controversy on both sides of the aisle. In its current state, microtargeting is hugely problematic because it allows political actors to run advertising with little transparency about who is seeing what, which could help attempts to run voter suppression campaigns. However, ad targeting also allows campaigns to reach groups of voters or potential donors that they might not otherwise engage. Twitter has also limited microtargeting, while Facebook is considering changes to its own microtargeting policy.
Instead of companies self-policing, when it comes to digital advertisements the focus should be on ensuring that political communications are transparent via regulation. If every political communication contained a clear disclaimer stating who paid for it, and if the identity of these individuals or organizations was rigorously verified, social media companies would be under far less pressure to set their own policies.
When it comes to microtargeting, the criteria for how ads are targeted need to be made clear to users. Right now, only the political advertisers and platforms know who is receiving what message; this allows manipulative messaging to go unchecked. Additionally, copies of these communications need to be openly visible to the public through a comprehensive public archive. Facebook, Twitter and Google all maintain their own public ad archives, but these are inconsistent and contain inadequate information about how ads are targeted; regulation would ensure that the information provided to the public is consistent and adequate.
Enforcing these policies will require a functioning Federal Election Commission. Currently, the agency is hugely debilitated, with only three sitting members (a minimum of four is required to make decisions) and a partisan split that has resulted in deadlock. If the agency were restructured so that independent commissioners — dedicated to upholding electoral law rather than partisan viewpoints — were appointed, transparency and electoral security would be much stronger.
Such proposals should be far from controversial. For decades, radio and television stations have been subject to transparency requirements for political advertisements under the Communications Act of 1934. It is astounding that a similar system has not already been put in place for digital media. Political discourse is the bedrock of a representative democracy. We want truthful political discourse and debate to inform voters during elections and when discussing important issues that will affect our daily lives. This is not possible without true transparency. Ultimately, social media companies like Facebook, Twitter and Google shouldn't be making sweeping decisions about political speech. If Congress and the FEC did their jobs, they wouldn't have to.

Ann M. Ravel is the Digital Deception Project Director at MapLight and previously served as chair of the Federal Election Commission.