On the morning of August 3, Patrick Crusius uploaded a 2,300-word manifesto to 8chan, an online forum popular with white nationalists. Within seconds, byte-sized packets bearing the anti-immigrant screed would cross borders the world over, wending their way from El Paso, Texas, to the Philippines, where 8chan is based. News of what Crusius did next would travel even wider. Armed with an AK-47-style rifle, the recent college dropout stormed a Walmart not far from the Mexican border, killing 22 and wounding as many more. His self-professed goal: to kill as many Mexicans as possible.
The carnage in El Paso, coupled with a separate mass shooting in Dayton, Ohio, that night, prompted renewed debate over how to respond to online hate. As political pressure mounted, U.S. President Donald Trump borrowed a page from the Obama playbook and convened a “tech summit” on extremism. On August 9, White House officials hosted representatives from major technology companies, including Amazon, Google, Facebook, Twitter, and Microsoft, and discussed potential ways of disrupting extremist recruitment and coordination online.
The summit will likely be remembered as yet another missed opportunity. Trump’s administration may be more willing than Obama’s to challenge the technology sector, but it has opted to fight the wrong battle, and its efforts risk making the problem worse. Consider the administration’s reported plan to grant the Federal Communications Commission (FCC) broad new authority to regulate social media companies. That authority would not help the FCC push technology companies to police their platforms more aggressively for extremist accounts and content. Rather, under a draft executive order called Protecting Americans from Online Censorship, the FCC would seek to ensure that social media companies aren’t “biased” against conservatives. With Trump’s rhetoric often indistinguishable from that of avowed white nationalists, such a measure would make it more difficult for technology companies to counter extremism online.
As the Trump administration dithers, the threat only continues to grow. The horrific attack in El Paso was the latest performance of a troubling new script, one in which white nationalists, radicalized online, proclaim their fealty to the movement before carrying out a kind of gamified violence designed to be quickly celebrated and shared on the Internet and then repeated. The script was written in Norway, where in 2011 Anders Breivik posted a massive manifesto before killing 77 people, most of them youths at a summer camp for a left-leaning political party. Earlier this year, Brenton Tarrant published his manifesto on 8chan before gunning down 51 worshippers at two mosques in Christchurch, New Zealand. In a gruesome twist, Tarrant broadcast video of the attack in real time on Facebook Live. Despite Facebook’s efforts to block the video, it spread like wildfire: it was uploaded, often in modified form, more than a million times to Facebook and countless more times to sites such as 8chan and Reddit.
In terms of its global reach and lethality, white nationalist terrorism has grown increasingly reminiscent of the jihadi movement. It has also, like the jihadi movement, used the Internet and social media to recruit and radicalize members, disseminate propaganda, and broadcast images and video of its violence. What can tech companies—and governments—do to stop it?
TREAT WHITE NATIONALISM LIKE JIHADISM
Balancing the right to free expression online with the need to monitor and disrupt extremist use of the Internet is by no means a uniquely American problem. In the aftermath of the Christchurch attack, for example, New Zealand Prime Minister Jacinda Ardern worked with France and with major technology companies to launch the Christchurch Call in May. Building on the digital counterextremism efforts of the EU and other bodies (including the Aqaba Process, launched by Jordan in 2015), the Christchurch Call brought tech companies and governments together to commit to “eliminate terrorist and violent extremist content online.”
The Christchurch Call was an admirable start, but, like the Aqaba Process, it is long on good intentions and short on specifics. Thankfully, there are concrete steps that technology companies can take to curb the spread of online hate.
The first step should be to treat all hateful ideologies the same. Until very recently, Facebook and other social media companies focused far more on jihadi content than on white nationalist and other forms of far-right content. This is largely a legacy of the struggle against ISIS. When the group emerged in 2014, Facebook and other companies tried to preemptively block ISIS content and remove users associated with it and other international terrorist groups. At the same time, they wanted to reassure constituencies dedicated to free speech. They were able to strike this balance because the U.S. government had officially designated ISIS as a terrorist group, giving the companies a legal rationale for restricting ISIS-related content. The U.S. government does not similarly designate domestic terrorist groups.
From a technical and moral point of view, however, white supremacist content is no different from jihadi content—if social media companies can block one, they can block the other. So far, they have avoided doing so because they fear blowback from conservatives. As extremism expert J. M. Berger notes, “Cracking down on white nationalists will … involve removing a lot of people who identify to a greater or lesser extent as Trump supporters, and some people in Trump circles and pro-Trump media will certainly seize on this to complain they are being persecuted.”
Yet such crackdowns work. By taking down individual jihadi accounts, identifying and blocking common types of jihadi propaganda, and cooperating with law enforcement, companies such as Facebook, Twitter, and YouTube have reduced jihadi groups’ online presence. They can do the same to white nationalists. These efforts are not required by law, and especially in the United States, civil libertarians might join racists in opposing them. But the tech firms are private companies and can legally remove hateful content from their platforms without running afoul of the First Amendment.
Second, tech companies should begin hiring more, and more highly trained, content moderators. Although companies such as Facebook and Google increasingly rely on artificial intelligence to flag problematic content, they also employ thousands of contractors to review those decisions. Facebook alone employs more than 200 terrorism analysts in-house and contracts with over 15,000 content moderators worldwide. This number, however, is not commensurate with the scale of the problem: there are well over two billion Facebook users. In Myanmar, for example, Facebook has struggled to moderate violent content in part because it has too few moderators who speak Burmese.
The quality of content moderators matters even more than their quantity. Effective moderation requires the ability to distinguish terrorist content from legitimate forms of political speech. That, in turn, requires a large team of analysts who combine in-depth knowledge of terrorism with local and regional expertise and who can set globally consistent policies. These moderators must also have the language, cultural, and analytical skills necessary to apply those policies quickly and accurately around the world. Attracting moderators with the right qualifications means that companies will have to not only pay them better but also accord them more prestige and respect than they currently receive.
Tech companies must also improve information sharing across platforms. Most small and early-stage companies lack the resources to invest in counterterrorism expertise and therefore struggle to identify extremist groups and content on their platforms. In 2017, Google, Facebook, and other major companies created the Global Internet Forum to Counter Terrorism. In part, the rationale behind the forum was to share information on suspected terrorist activity and prevent dangerous content from migrating across platforms. After the New Zealand attacks, for example, Facebook “hashed” the original video, essentially giving it a digital fingerprint that allowed Facebook and other companies to more easily identify it. Earlier this month, the company also open-sourced an algorithm that smaller companies and organizations can use to identify terrorist imagery.
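The basic mechanics of hash sharing are straightforward to illustrate. The Python sketch below shows the core idea under simplified assumptions: compute a fingerprint of an uploaded file and check it against a set of fingerprints shared across platforms. The function and variable names here are hypothetical, and the sketch uses an exact cryptographic hash for clarity, whereas the algorithms Facebook open-sourced (such as PDQ) are perceptual hashes designed to keep matching content even after it has been cropped, re-encoded, or otherwise modified.

```python
import hashlib
from pathlib import Path

# Hypothetical shared block list: hex digests contributed by participating
# platforms (in practice, distributed through a shared industry database).
SHARED_HASHES = {
    # "9f86d081884c7d65...",  # entries added as member companies flag content
}

def fingerprint(path: Path) -> str:
    """Return a SHA-256 digest of the file's bytes (an exact-match fingerprint)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(path: Path) -> bool:
    """Flag an upload for blocking if its fingerprint appears in the shared set."""
    return fingerprint(path) in SHARED_HASHES
```

Because an exact hash changes completely when even a single frame is altered, catching the modified re-uploads that flooded Facebook after Christchurch is the harder problem that perceptual hashing is meant to solve.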
These steps are welcome, but more could be done. Most obviously, Facebook and others could publish the full list of extremist organizations and individuals that are banned from their platforms, along with brief explanations for each decision. Such a move would make it far easier for smaller companies without in-house terrorism expertise to assess whether to block specific groups and individuals, too.
DON’T LET THE PROBLEM FESTER
We can’t know whether technology companies are getting the balance right if we don’t know what they’re doing, so improving transparency and publishing metrics for content regulation is another important step. Facebook claims that it is doing its best to counter extremist content, but that claim is difficult to evaluate without comprehensive data, and Facebook is one of the better companies when it comes to reporting. In addition to reporting the number of accounts taken down and the amount of content blocked, major companies should disclose how much content users flag and how long it takes them to respond. They should also report their false-positive rate: if Facebook takes down millions of videos, what percentage of them were actually legitimate content (for instance, a news report that quotes the El Paso manifesto or includes a clip of the Christchurch video)? Such reporting would help ensure that companies do not overreact to hateful content, allowing better judgments about when legitimate speech is being suppressed, which methods of flagging content work best, and how well AI-based detection actually performs.
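To make one of these metrics concrete, the short sketch below computes a false-positive rate from takedown and review counts. The figures and names are hypothetical, used only to illustrate the calculation, and do not reflect any company’s actual reporting.

```python
def false_positive_rate(removals: int, restored_on_review: int) -> float:
    """Share of removed items later judged to have been legitimate content."""
    if removals == 0:
        return 0.0
    return restored_on_review / removals

# Hypothetical illustration: if 1,000,000 videos were removed and 25,000 were
# later restored after review, the false-positive rate is 2.5 percent.
print(f"{false_positive_rate(1_000_000, 25_000):.1%}")
```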
Finally, although calls to ban live-streaming and other media formats are unrealistic, Facebook and other companies can put limits on them in ways that do not impede their core function. In the aftermath of the Christchurch shooting, for instance, Facebook instituted a “one-strike” policy that prevents users who engage with terrorist content in specific ways—for example, by sharing a statement from a terrorist group without adding any context—from using Facebook Live. Likewise, in a bid to cut down on disinformation and extremist propaganda in India, Facebook has placed limits on the number of people users can forward messages to on WhatsApp. Restrictions like these do not meaningfully compromise the free expression of most Facebook users, yet they also make it significantly harder to broadcast and share terrorist attacks in real time. Other social networks and file-sharing services should follow suit.
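A limit of this kind is simple to express. The sketch below shows the sort of check a forwarding cap implies; the names and the cap value are hypothetical and are not drawn from WhatsApp’s actual implementation.

```python
MAX_FORWARD_CHATS = 5  # hypothetical cap in the spirit of WhatsApp-style limits

def can_forward(already_forwarded_to: int, new_recipients: int) -> bool:
    """Allow a forward only if the total number of destination chats stays within the cap."""
    return already_forwarded_to + new_recipients <= MAX_FORWARD_CHATS
```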
Reforms such as these will not be a panacea. Yet without them the problem will only grow. At the time of the White House summit last Friday, Crusius was the last known white nationalist to follow Tarrant’s model. By the next evening he no longer was. On Saturday, a young Norwegian man uploaded his own anti-immigrant message to 8chan, praising Tarrant and Crusius by name. Later that night, he entered a mosque outside Oslo armed with a handgun and two “shotgun-like weapons” and opened fire.
This article was originally published on ForeignAffairs.com