From Radicalization To Insurrection: A Reckoning For Big Tech

While Trump, Republicans, and right-wing media should carry much of the blame, tech companies must atone for their role in far-right radicalization and make bigger changes.
(Graphic Source: Southern Poverty Law Center)


Dr. Julia R. DeCook is an Assistant Professor of Advocacy and Social Change in the School of Communication at Loyola University Chicago.

The attempted coup of January 6 and the subsequent fallout are a stark reminder of the miasma of far-right populism and extremism that has taken hold in the United States under the cult of Trumpism. The far-right mob that descended upon the Capitol, leaving five people dead, offers only a small taste of the kinds of violence that will continue in the coming months and years.

The election of Joe Biden and Kamala Harris does not erase the fact that more than 70 million voters cast ballots to reelect Donald Trump, and that 77 percent of those who voted for him believe the election was rigged. Millions of new users joined alternative social media platforms like MeWe and Parler following the election, where speech that had been banned on Twitter and Facebook flourishes.

But in the midst of all of this uncertainty, we are seeing an inkling of a crisis of legitimacy among even the most fervent of Trump’s supporters, particularly following Trump’s recent speech in which he effectively conceded. In December, Milo Yiannopoulos, whose career was ruined after he was de-platformed from major social media sites, had a meltdown on Parler after the Supreme Court dismissed the Texas lawsuit that aimed to overturn the results of the 2020 election.

Vowing to “have vengeance” and to dedicate the rest of his life to destroying the Republican Party, Yiannopoulos lamented that he and others had defended a “selfish clown for nothing.” Calling for secession, Yiannopoulos and other Parler users advocated for a civil war or a “Texit” in response to the election.

On Saturday, December 12, thousands of Trump supporters, including the Proud Boys, descended on Washington, D.C.; four Black churches were vandalized, four people were stabbed, and 23 people were arrested. Notably, the December rally drew a larger number of Proud Boys than previous rallies, perhaps due to the rise in the group’s profile after Trump’s infamous “stand back and stand by” comment during the first presidential debate. The January 6 rally that turned into an attempted coup was bigger still, encouraged further by the deluge of conspiracy theories swirling on these platforms.

All of this has made many researchers, journalists, and concerned citizens wonder: where do we go from here? In the aftermath of the election, the far right may well latch on to this event in the hope that it will help bring about a new era. More importantly, the insurrection that Donald Trump incited on January 6 is an example of what QAnon, the “stop the steal” conspiracy, and the doubt, suspicion, and skepticism that have become the hallmark affect of the polity can lead to, aided and abetted by a flourishing social media news ecosystem that pushes conspiracy theories and dangerous disinformation.


In the midst of the COVID-19 pandemic and the 2020 U.S. presidential election, social media platforms finally started enacting real measures to prevent the spread of dangerous disinformation, “synthetic” and manipulated content, and “fake news.” Facebook and Instagram recently removed Donald Trump from their platforms until the end of his term, while Twitter went back and forth during the 48 hours following January 6 before permanently banning Trump on January 8.

Although commendable, these policy changes have often lacked an acknowledgment of the platforms’ own historical and continuing role in radicalization and in spreading disinformation. These ideologies are not relegated to the “seedy underbelly” of the Internet, but are spreading and evolving on mainstream platforms like YouTube, Facebook, and now TikTok. A report released on December 8 noted that YouTube was a significant source of information for the Christchurch shooter, who admitted that he had been a more active user and consumer of YouTube than of 4chan and 8chan.

Despite massive changes and initiatives at these platforms to mitigate the spread of misinformation and false content, they seem unable to keep pace with how narratives evolve, how users navigate platform constraints, and how they overcome measures like bans and censorship.

A professor of mine, Dr. Mary Emery, told me early in graduate school that “information does not change behavior.” Despite the best intentions of informing people that they have engaged with misinformation or spread fake content, these alerts do nothing to change the infrastructure by which this information spreads. Making things more difficult, research has shown that attempting to correct facts can produce a backfire effect, in which corrections actually strengthen people’s false beliefs. The attempts by Facebook and other major platforms to “inform people” when they have engaged with or shared misinformation may follow this same pattern.

What these platforms really need to do is break the infrastructure that allows for the creation and spread of this kind of content in the first place. Although we will never eliminate hate speech and misinformation (they exist in our “offline” worlds, of which our online lives are a mirror), what platforms can do is change the way that content is pushed into people’s feeds.

This may require a complete redesign of their product and business structure. Focusing only on content and content moderation misses the larger picture: platforms need to contend with the cultures of hate that they incubate and raise barriers that make their platforms inhospitable to hate speech and disinformation.

Banning content alone is insufficient to stop the spread of conspiracy theories, “fake news,” hate speech, and dangerous content. What we saw on January 6, when a far-right mob attempted to incite a civil war and overthrow the American government – complete with two pipe bombs – should be a stark example that bans are not enough.

Dismantling the infrastructure that they rely on – things like domain registrars, user accounts, existing networks, algorithms, and Cloudflare services that prevent DDoS attacks – is the best way to mitigate the power of online hate and disinformation. Amazon Web Services took a good step in this direction by announcing it would take Parler offline.

For platforms and legislative bodies, working with civil society groups – particularly groups led by the people most often targeted by online hatred – and treating online hate as a human and civil rights issue are steps toward mitigating digital hate culture and disinformation.

We cannot wait for laws to catch up with our technology; we must demand more of these tech companies – namely, as Dr. Joan Donovan writes, that they become more “transparent, accountable, and socially beneficial.” We must recognize that technology is a process, not a product, and strategies for its moderation must reflect this distinction.

This article is brought to you by the Centre for Analysis of the Radical Right (CARR). Through its research, CARR intends to lead discussions on the development of radical right extremism around the world.
