Produced by MapLight, the Decoder is a newsletter to help you track the most important news, research, and analysis of deceptive digital politics. Each week, we'll send you coverage of the webs of entities that seek to manipulate public opinion, their political spending ties, and the actors working to safeguard our democracy. Know someone who might be interested? Ask them to sign up!
- Upcoming election: Between falsely slamming a New York Times report about his taxes and using disinformation about electoral processes to justify his refusal to commit to a peaceful transfer of power, President Trump has had an eventful week. His serial lying and clear willingness to question the results of the election have been a growing concern, raising the question of whether social media companies will be able to check a president who already speaks of needing to stack the electoral deck in his favor.
- Social media response: After queries from Fast Company, Facebook announced that it would reject political ads falsely claiming victory, while organic posts making such claims would be labeled. Without providing examples, Nick Clegg, Facebook’s head of global affairs, told the Financial Times the company was considering some “break glass” options in the event of political instability. The company also announced that its long-awaited Oversight Board will launch in October (it’s worth reviewing Columbia Journalism Review’s January look at reactions and critiques when the board’s rules were announced). Google also informed advertisers that it will block election ads after Election Day. A recent Pew study, which found that 54 percent of Americans think social media companies shouldn’t allow any political ads, suggests that a majority of the country favors making these temporary measures permanent.
- Is it enough? Facebook’s past enforcement record offers plenty of reasons for wariness. Simulated attacks have shown that state and local officials struggle to counter digital disinformation, while a recent Campaign for Accountability report found that Facebook gave states insufficient tools while still expecting them to monitor for election disinformation. Given this lackluster response, a group of industry experts announced Friday that they will set up an independent oversight board to analyze and critique Facebook’s content moderation as the election approaches. The New York Times editorial board also weighed in, arguing that social media companies need to present a unified, coherent strategy for dealing with election-related disinformation.
- Mail-in voting disinformation: A new MapLight review has found that, despite Facebook’s policies against voting disinformation, false ads and posts about mail-in voting continue to run on the platform. In a review of Facebook’s ad library, MapLight found ads from both grassroots organizations and super PACs that have run in the past two months and contain false or misleading information. This month, MapLight also launched an Election Deception Tracker that allows users to easily report deceptive advertisements -- including ads that contain voting misinformation.
- Facebook Q&A: The Verge obtained 16 recordings of Facebook’s “FYI Live” sessions held between May and August of this year, along with screenshots from internal meetings and groups. The picture they paint, writes Casey Newton, is of a company whose workers are upset about inaction -- on hate speech, disinformation, and calls to violence -- while much of its more conservative user base feels censored. Ultimately, the recordings offer little evidence that recent events have motivated Facebook to change its status quo. (Side note: Casey Newton is leaving The Verge to start a newsletter called Platformer, if you’d like to read more of his work.)
- Not a flattering comparison: In testimony before the House Committee on Energy and Commerce, Tim Kendall, former director of monetization at Facebook, said the company “took a page from Big Tobacco’s playbook” in its efforts to make the platform as addictive as possible. Marc Ginsberg, president of the Coalition for a Safer Web, then called for the creation of a “Social Media Standards Board” -- modeled on the accounting industry’s Financial Accounting Standards Board (FASB), established in 1973 -- that could suspend the liability protections provided by Section 230 of the Communications Decency Act if companies fail to comply with a code of conduct. Rep. Jan Schakowsky (D-Ill.), chair of the Consumer Protection and Commerce Subcommittee hosting the session, announced new legislation that aims to fundamentally alter social media companies’ business models and give consumers and regulators recourse when companies fail in their basic commitments.
- Section 230 proposals: Last Monday, Sen. Lindsey Graham (R-S.C.), chairman of the Senate Judiciary Committee, introduced new legislation that would narrow companies’ legal protections if they remove user-posted material from their services. Like the Online Freedom and Viewpoint Diversity Act (introduced by GOP Sens. Graham, Wicker, and Blackburn earlier this month), the proposed legislation seems designed to steer the platforms’ content moderation practices. Then last Wednesday, the Department of Justice sent Congress proposed legislation that would limit platforms’ protections under Section 230 and create a carve-out for so-called “bad Samaritans” who purposely promote, solicit, or facilitate criminal activity. While the proposal would make keeping a rein on disinformation more difficult for companies, Congress seems unlikely to act on it, making it, according to Protocol, more of a blueprint for what the DOJ wants from Section 230 reform.
- QAnon: The past few days have seen a range of deep dives into the QAnon conspiracy theory. On the Reply All podcast, PJ Vogt explores how and why it took off so dramatically. Lili Loofbourow examined the theory’s growing popularity with women in Slate, and Kiera Butler looked at how it has infiltrated moms’ groups in Mother Jones. In Popular Info, Judd Legum delves into Trump’s courting of QAnon, and The Guardian looks at QAnon orphans -- people who have cut contact with loved ones because of the conspiracy theory. In the New York Times, Shira Ovide explored how the online crackdown on terrorist recruitment could provide a model for slowing the spread of QAnon, while Kaitlyn Tiffany looks back on Reddit’s successful stamping out of the theory on its platform. In Rolling Stone, EJ Dickson interviewed former QAnon believers about what drew them to the con, and what subsequently drove them away.
- Misinformation misdemeanor: A new California state law makes intentionally spreading misinformation about voting by mail a misdemeanor. While it was already a misdemeanor to knowingly mislead people about the location of a polling place or the qualifications needed to register to vote, the new law also prohibits intentionally misleading someone about their right to apply for, receive, and return a vote-by-mail ballot.
- Subculture platforms & disinformation: A new study in the Harvard Kennedy School Misinformation Review looks at how misinformation spreads on political boards on 4chan and Reddit. On a positive note, the study found that “pink slime” -- algorithmically generated news -- does not spread easily on these platforms. On a less positive note, YouTube videos that often contain conspiracy theories and misinformation are frequently shared and discussed on these boards.