Cures for the coronavirus, conspiracy theories about recent protests, false allegations surrounding voting by mail: If you’ve noticed more suspicious information than usual flooding your social media feed, you’re not alone. In the midst of protests and a pandemic, and with a presidential election approaching, misinformation on social media has exploded over the past few months.
This ‘infodemic’ didn’t emerge in a vacuum, though. For years, digital disinformation has been a growing problem, one with the potential to seriously harm public health and our democracy. But can we do anything about it? While the struggle against false information online can seem futile (especially when you’re trying to convince Uncle Bob to take down those posts about the Illuminati), there are a few things you can do to help stop the spread of false and manipulative information.
Recognize
The first step to stopping misinformation is recognizing it, especially when a story or post appears to be going viral. Social media is optimized for engagement, not accuracy, and false news stories can be very hard to spot. If you don’t recognize the news outlet or the sources cited, do an online search to see whether they are reputable. If an individual is cited, research them: do they have expertise in the subject they are speaking about? If no source is cited, check whether any well-known outlets are reporting the same information. Lastly, check the date. In fast-changing situations (like a pandemic or an election), where new information emerges constantly, the age of a story matters a great deal.
If you come across a piece of news and aren’t sure whether it is true, there are resources available to help you. For the coronavirus crisis, the International Fact-Checking Network has created a WhatsApp chatbot that gives users access to more than 4,800 fact-checks in 43 languages, and authorities like the WHO and CDC are trustworthy resources. Fact-checkers like Snopes are a good port of call for general news stories. Reputable sources such as local news outlets and election officials (for voting matters) are also worth checking, since they may cover smaller stories that fact-checkers have missed.
Never share a story unless you are sure it is genuine and comes from a reliable source. Also consider reporting false information you see shared to the platform where it is posted. Many platforms prohibit spreading disinformation about the coronavirus, voting processes, or elections, but their moderation systems are imperfect, and they need help catching problematic content.
Refute
It is simple enough to stop and think about false information in your own feed, but what happens when you see something you know isn’t true tweeted by a family member or shared on a friend’s Facebook wall? Should you say something? It’s a complex and often frustrating question, and research into the subject is relatively new. Studies do suggest, however, that calling out disinformation you see online can help: commenting on a false news story with a link to a reliable source, like the CDC, has been found to reduce misconceptions.
One-on-one conversations also work well if you are close to the person posting, since we are more likely to listen to people we trust. Take an empathetic approach rather than directly contradicting someone, and use open-ended questions to share and discuss information from impartial, authoritative sources.
Reform
While being able to recognize and refute disinformation is a valuable skill, disinformation isn’t going to disappear (or even significantly decline) without systemic reform. If we want change, we need to pressure both companies and legislators to make stopping disinformation a priority. Fortunately, campaigns to address the issue have already achieved a great deal.
Facebook, Twitter, and Google, for example, all changed their political advertising policies in response to public outcry over foreign interference in the 2016 election, including creating political ad databases and banning voter-suppression ads. Public concern over the spread of coronavirus misinformation likewise pushed social media companies to take a comparatively strong stance on disinformation about the pandemic.
These policies, however, represent the bare minimum of what can be done to combat digital disinformation. They vary greatly from platform to platform and are applied inconsistently. Facebook’s failure to act on President Trump’s posts glorifying violence against protesters and spreading false information about mail-in voting (posts Twitter did act on) is a prime example. Its decision to let advertisers microtarget people ‘susceptible to pseudoscience’ during the pandemic is another.
Legislative reforms could also help address these issues, for instance by ensuring users know who is paying for a communication they are seeing and why they are being targeted. Two bills recently introduced in the House of Representatives, the “Protecting Democracy from Disinformation Act” and the “Banning Microtargeted Political Ads Act,” would both be steps in the right direction on this front. Alongside legislation, companies should be pushed to apply their policies evenly across all accounts and to point users toward reliable sources of information (such as the CDC, local news outlets, and election officials).
The ‘infodemic’ that has arisen alongside the coronavirus pandemic shows how dangerous disinformation can be. Luckily, we have a wide array of tools to combat misinformation, at both the personal and the systemic level. We just need to put them into action.