In the desperate fight against the novel coronavirus, social media platforms have achieved an important victory: they have helped limit the dissemination of life-threatening misinformation that could worsen the pandemic. But this success should not cause us to adopt a similar approach to political speech, where greater caution is required.
Facebook, Twitter, and YouTube have each moved quickly to remove coronavirus misinformation that encourages people to take actions that could put them at risk. Google is privileging information from official health agencies, such as the World Health Organization, and has established a 24-hour incident-response team that removes misinformation from search results and YouTube. Facebook’s WhatsApp has teamed up with the WHO to provide a messaging service that offers real-time updates.
Misinformation, rumors, myths, and conspiracy theories still slip through the net, of course, and new threats may yet emerge—for instance, Russia is taking a page from its 2016 playbook and trying to use disinformation about the coronavirus to foment political unrest in Europe and the United States. But social media and Internet companies have earned praise for making a concerted, and thus far successful, effort to limit misinformation. To date, no specific false claim or conspiracy theory has become widespread in the manner often observed during disasters and tragedies.
The digital war against misinformation in the United States dates back to the controversy over the role of “fake news” websites during the 2016 presidential campaign. Since then, tech companies and policymakers have struggled to determine how best to respond to political misinformation—a problem that raises difficult questions about the appropriate role of private companies in policing speech.
The current success of social media platforms in limiting harmful content about COVID-19, the disease caused by the new coronavirus, may inspire a false hope that the same standards can or should be applied to political news. For instance, Ben Smith, a media columnist for The New York Times, asked: “Will the flow of responsible information last beyond this crisis? Could it extend into our upcoming presidential campaign?” Danny Rogers, the co-founder of the Global Disinformation Index, similarly lauded the platforms to The Washington Post: “This is what it looks like when they really decide to take a stand and do something,” he said. “They haven’t had the policy will to act [on political misinformation]. Once they act, they can clearly be a force for good.”
The platforms’ approach to pandemic information has been aggressive, effective, and necessary, but it cannot and should not be extended to politics. Tactics that work against dangerous health misinformation are likely to be less effective and more harmful when applied to political speech within the United States.
False stories about the novel coronavirus are relatively easy to detect compared with political fake news. The platforms can focus their search for false content on a well-defined topic, rather than needing to identify and remove misinformation about any topic whatsoever. Such boundaries enable more effective moderation by artificial intelligence: Facebook sent thousands of content moderators home to avoid the threat of infection in mid-March, and the company was still able to rely on its machine-learning systems to identify and remove false claims about the pandemic. Those systems had some short-term glitches (for example, blocking legitimate sites), but overall, they proved effective.
Evidentiary standards are also far easier to establish and enforce in health and medicine. For instance, it is widely accepted that drinking bleach is dangerous and does not cure coronavirus. False claims like this one can be quickly identified and removed, as Facebook now does in partnership with national and global health organizations. Standards of truth and accuracy in politics are more subjective and likely to provoke controversy.
Under ordinary circumstances, moderating content on social media requires striking a difficult balance between protecting free speech and preventing public harm. People often disagree over what content should be prohibited and who should make such decisions. The pandemic, by contrast, has generated a strong consensus in favor of limiting harmful content. False information about COVID-19 can be a matter of life or death. As a result, social media platforms are treating the issue differently. As the Facebook CEO Mark Zuckerberg told Smith, with a pandemic, “it’s easier to set policies that are a little more black and white and take a much harder line.”
None of these conditions apply to domestic political misinformation, where the need to protect free expression is more acute. False speech about politics is a necessary byproduct of living in a free society (unless it runs afoul of carefully circumscribed laws against libel and slander). Identifying false claims about politics is a laborious affair that requires difficult judgments about the nature of truth. As a result, the social consensus in favor of reducing political misinformation on social media is more limited. Facebook accordingly does not remove false information from its platform but instead reduces the reach of articles that third-party fact-checking partners identify as false or misleading. Similarly, following the standard practice in broadcast television, Facebook does not remove false ads sponsored by candidates (although it does remove false information about the census and how to vote).
When the coronavirus crisis subsides, the approach that social media platforms have taken will not and should not become the new normal. The domain of medical information differs enormously from that of politics, where free speech must be protected and where exposure to false information does not threaten people’s health. The best a liberal democracy can do is limit the influence of misinformation, not try to eradicate it like a virus.
This article was originally published on ForeignAffairs.com.