Tech That Counters Online Islamic Extremism Must Also Focus On Right-Wing Extremism
As threats of right-wing extremism rise, tech companies and government need to take active measures to address it.
Technology companies, government, and civil society are working together to take action against extremist exploitation of the Internet. This agenda emerged in response to concerns that Islamic State propaganda on YouTube, Twitter, and Facebook might encourage viewers across the world to travel to Syria to join the Islamic State or to coordinate attacks in the countries in which they reside. Very quickly, Twitter (for example) shut down thousands of accounts belonging to pro-IS supporters and recruiters.
More recently, the GIFCT (Global Internet Forum to Counter Terrorism) has developed a ‘hash database,’ which stores hashes, or digital fingerprints, of content and shares them across technology companies to help them detect and take down IS content before it is viewed. In another instance, the Home Office funded a company called ASI Data Science with £600,000 to develop a tool that uses machine learning to detect and counter Islamic State content.
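The mechanics of such a hash database can be sketched in a few lines. This is a hypothetical illustration, not GIFCT's implementation: real systems rely on perceptual hashes (such as PhotoDNA) that tolerate small edits to an image or video, whereas the cryptographic SHA-256 hash used here for simplicity only matches byte-identical files.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest acting as the content's 'digital fingerprint'."""
    return hashlib.sha256(content).hexdigest()

# Shared database of fingerprints of known extremist content
# (placeholder bytes stand in for a real video file).
known_hashes = {fingerprint(b"known propaganda video bytes")}

def should_block(upload: bytes) -> bool:
    """Flag an upload if its fingerprint is already in the shared database."""
    return fingerprint(upload) in known_hashes
```

Note the limitation this sketch exposes: re-uploading the exact file is caught, but any trivial modification produces a new fingerprint, which is why production systems use perceptual rather than cryptographic hashing.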
These are important developments, and they are having an impact in reducing the Islamic State’s capacity to use digital channels. However, these projects focus only on IS content. As research into racial profiling has shown, focusing on a specific group can allow other groups to operate with less scrutiny. In fact, researchers at VOX-Pol found that Twitter’s focus on taking down IS accounts did not effectively disrupt other Islamic extremist groups.
Right-wing extremism is different from IS; it operates in a much more complicated area that technology companies and government have been slow to define and act upon, often relying on ad-hoc actions rather than a sustained effort to counter it. Arguably, right-wing extremism should occupy more of the focus of efforts to counter online extremism: recent research demonstrates that British extreme-right terrorist offenders are 3.39 times more likely to have learned online than those inspired by jihadism (p. 110). Importantly, the existing techniques used to counter Islamic extremism cannot simply be refocused to counter right-wing extremism. Technology companies and governments must re-evaluate how this agenda addresses right-wing extremists. This starts with two difficult but important discussions.
Unlike Islamic extremism, the fluidity between the radical right and right-wing extremism makes it incredibly difficult to determine whether a given piece of content is extreme, falls foul of the community standards of a given platform, could lead to violence, and should be subject to some kind of punitive action (such as removal of content or suspension of a user’s account). Increasingly, companies like Facebook are developing automated tools to detect hate speech, but the dog-whistles and language used by the radical and extreme right change rapidly, and such tools require a significant amount of specialist knowledge and frequent retraining to remain effective.
Part of this problem is symptomatic of a failure to understand hate speech and extremism on a continuum; we tend instead to treat them separately. Hate speech is an expression of extremist beliefs and opinions—particularly in its many permutations on social media—and is often tied to explicit radical right political movements. In a recent meeting of the UN Counter-Terrorism Executive Directorate, I stressed the importance of thinking about hate speech and extremism in an integrated manner. However, while policy teams at social media companies are undoubtedly well aware of this complexity, there is little pressure from governments to follow this approach because it may look like ‘censorship.’ Technology companies have been careful to protect free speech for many right-wing extremists, such as Jayda Fransen and Paul Golding of Britain First, who were able to spread their vitriol online until their arrest.
Right-wing extremists also benefit from the relatively lax attitude to speech regulation espoused by major technology companies who tend to rely on the protections of the First Amendment to the US Constitution. Where hashtags such as “#Khilafa” are quickly banned, it takes more work to build consensus against the use of terms such as “#whitegenocide” because they do not call for support for a specific terrorist organization. Further, with Donald Trump echoing these talking points on a regular basis—most recently in his incendiary interview with The Sun—it becomes increasingly difficult to differentiate extremism from a broader continuum of radical, hateful right-wing views.
The definitions of hate speech and extremism produce a gray zone that right-wing extremists inhabit. This space allowed white supremacists, white nationalists, neo-Nazis, neo-confederates, and the alt-right to network prior to the 2017 “Unite the Right” rally in Charlottesville, VA. My ongoing research into the alt-right shows that they are careful not to advocate violence, relying instead on language about the “great replacement,” “Islamization,” and the cultural “death” of the West. Using these ideas, and often backing them up with coordinated disinformation campaigns, extreme right-wing voices have benefitted from growing audiences on social media platforms. By focusing on hateful rhetoric, often in coded forms and dog-whistles, these voices normalize hate, which has a direct effect on hate crime and far-right terrorism. These accounts, alongside Donald Trump’s statements, create an environment that serves to radicalize potential extremists.
This hateful rhetoric produces a culture that legitimizes extreme right-wing worldviews even when they do not actively advocate violence. In doing so, right-wing extremists produce an environment that legitimizes the dehumanization of minorities and can lead to terrorist attacks (as we have seen in the cases of Anders Breivik, Darren Osborne, and Dylann Roof). As a recent report from the FBI and DHS found, white supremacist violence was responsible for more “homicides than any other domestic extremist movement,” and the victims are predominantly minorities and Jews. Even if extreme right-wing voices on social media do not actively advocate violence, they signpost users to extreme ideologies and subcultures. This strategy allows them to claim that when they are shut down by social media platforms for hate, disinformation, and abuse, their rights to free speech are being curtailed.
This dilemma around free speech becomes particularly clear when far-right voices such as Tommy Robinson use free speech as a defense and claim that social media companies are unfairly attacking them. As is the case with IS supporters online, shutting down these accounts can often lead to users becoming more empowered and increasing their visibility online as they use being suspended by a social media platform as a badge of pride and evidence that their voices are ‘speaking truth to power.’
I respectfully ask Congress to allow me to face my accusers.
There have been multiple hearings where the banning of Infowars has been discussed & lobbied for by Democrats. Now it’s happened.
I want to attend an open session where I am allowed to defend my right to free speech.
— Alex Jones (@RealAlexJones) August 10, 2018
The hesitance that social media platforms have to counter these voices or suspend their accounts is understandable, but a reticence to act ensures that their platforms remain useful resources for right-wing extremists.
Social media platforms developing technologies to counter extremism online are in a difficult position. Tools are designed with IS in mind, yet they will not be adequate in preventing future extreme right-wing violence, the justification for which often travels through dog-whistles and coded hate speech that flies under the radar of automated content monitors. Further, detecting this kind of language requires frequent human intervention and review as current data science techniques are not designed with the tactics that right-wing extremists use in mind.
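The weakness of static, keyword-style detection against coded language can be illustrated with a minimal sketch. The terms and posts below are hypothetical placeholders, not a real moderation list or real examples of coded speech:

```python
# A hypothetical static list of banned terms, as a naive filter might use.
BANNED_TERMS = {"explicit_extremist_slogan"}

def naive_filter(post: str) -> bool:
    """Flag a post only if it contains a term from the static list."""
    words = post.lower().split()
    return any(term in words for term in BANNED_TERMS)

# An explicitly worded post is caught by the list...
explicit = "join us explicit_extremist_slogan today"

# ...but the same message recast in newly coined code words passes
# untouched, which is why such lists need frequent human review,
# specialist knowledge, and retraining to keep pace.
coded = "join us for the daily gardening chat today"
```

Here `naive_filter(explicit)` returns `True` while `naive_filter(coded)` returns `False`: once extremists swap a flagged term for a fresh euphemism, the filter is blind until a human analyst spots the new code and updates the list or retrains the model.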
To better counter right-wing extremists, technology companies and governments need to consider more creative approaches. One option is the use of counter-narratives: stories and content that challenge the views of extremists by fact-checking, debunking, and criticizing their narratives. However, there is limited evidence that this works even for IS supporters—many of the impact studies are based on individual cases—and research has shown that such views are unlikely to be changed by corrections or countervailing facts. Another option is censorship, but that leaves these actors in a situation that might energize, rather than counter, the extreme right.
A better approach to countering online extremism must bring the extreme right into focus. This requires building on lessons learned from countering online Islamic extremism and investing in more human intervention and review, in order to create dynamic tools that keep up with changes in right-wing extremist content. More importantly, it requires a hard discussion about freedom of speech, the ways in which contemporary social media platforms support and give oxygen to hate and right-wing extremism, and the propriety of censorship in such cases. What cannot be avoided is a significant investment in countering right-wing extremists online, in the same way that actors across the world have begun to work together to counter IS. Without it, the tools we build will likely protect us from only one form of extremism and fail to stifle the violence that right-wing extremists have used the Internet to facilitate for many years.
This article is brought to you by the Centre for Analysis of the Radical Right (CARR). Through its research, CARR intends to lead discussions on the development of radical right extremism around the world.