Social Media Needs To Do More To Tackle Disinformation
Twitter’s decision to fact-check Donald Trump’s recent tweets about mail-in ballots, which pushed a baseless conspiracy theory about voter fraud, has sparked a fresh debate about how social media companies should deal with fake news and misinformation on their sites. Twitter acted only after it felt backed into a corner, forced to alter its approach to content on its own platform, but in doing so the company has raised expectations among people who now want to see other sites follow suit or even take stricter action.
Social media has become an integral part of our lives: people rely on it to communicate and coordinate, to launch protest movements, and to learn about what is happening in the world. Where once these sites were used simply to keep in touch with friends, they have evolved into sources of information and news for many users. During the global pandemic in particular, social media has proved a key means of keeping people engaged and informed, as campaigns move online and we prepare for a digital American presidential election like no other.
Four years ago, we all saw how social media could be corrupted and used by individuals and state actors to sow distrust and spread misinformation. Despite the Trump administration’s repeated attempts to downplay the severity of the situation, Russia’s operation to interfere in the 2016 US election was unlike anything seen before, reaching over 125 million people on Facebook and 1.4 million people on Twitter. It wasn’t even especially sophisticated: Russia simply purchased ordinary advertisements and created its own posts while posing as American citizens. In fact, Facebook claimed that Russia spent less than $200,000 on advertising on its site. Yet that relatively small sum had a vast impact, with Russian operatives managing to organize rallies and other events.
Since that 2016 influence operation, the use and importance of social media in the US political sphere have only increased, driven by the President’s reliance on Twitter to advance his agenda and further accelerated by the coronavirus outbreak, which has forced campaigns to move online to reach voters. More people are online than ever before: nearly three in every four US adults use at least one social media site, and that figure is likely to grow. At the same time, the internet is being flooded with misinformation in unprecedented quantities.
However, social media companies have failed to take the steps needed to prevent a repeat of 2016. One recent study from Carnegie Mellon University found that nearly half of the Twitter accounts spreading messages about the coronavirus are likely automated bots, while another, by the Center for Countering Digital Hate, claims that social media companies are failing to act on 90% of coronavirus misinformation.
After the last presidential race, social media companies were hauled in front of Congress to explain why they hadn’t done enough to tackle interference and influence operations in 2016. Speaking to politicians, they described the steps they were taking and the policies they would roll out to address the concerns raised. Facebook CEO Mark Zuckerberg even admitted: “It’s clear now that we didn’t do enough to prevent these tools from being used for harm.” Two years on from those hearings, however, it’s clear that those remarks were just empty words. Zuckerberg has publicly stated his opposition to fact-checking measures on his site because, apparently, he doesn’t want the company to become the “arbiter of truth”, and because he doesn’t want to restrict free expression, which he considers a core feature of the democratic process. But if you allow politicians to continue to lie and spread conspiracy theories, you aren’t protecting democracy, you are corroding it.
For too long, these sites have been willing to let individuals spread misinformation and conspiracy theories online as long as they have a large enough platform and draw enough engagement. The platforms allow false information to go viral because their algorithms are built to promote sensational content regardless of factual accuracy. Instead of viewing increasing political polarization as a reason for concern, social media sites have treated it as a money-making exercise: they encourage people to remain in their epistemic universes because doing so is financially lucrative.
In their minds, if people come to their site for content they agree with, why would the companies change that by fact-checking it, placing warnings on it, or removing it from their platforms? Look no further than Alex Jones and InfoWars for an example of this attitude. Even though it was clear to everyone, including those running the sites, what he was doing, social media companies allowed Jones and his company to openly spread conspiracy theories and hoaxes with no restrictions, and it took far too long for Twitter and Facebook to ban him. The same can be said for Donald Trump, who has used his Twitter account to spread lies and repeat misleading statements in a way that would have gotten any other account suspended.
That’s why simply labeling posts as misinformation, while a step in the right direction, isn’t enough. Companies need to be aggressive in dealing with such content, putting the public interest above what boosts their profits. They need to remove false information and the accounts of those who post it, permanently preventing them from returning to their sites. These sites were set up as forces for good: to give users a platform to make their voices heard, to connect, and to inform. Unfortunately, those who use misinformation to sow distrust and create conflict within the populace, tearing nations apart in the process, are being aided by these platforms and their lack of interest in dealing with such content. Allowing misinformation to remain goes against their founding principle, sets a dangerous precedent, and seriously undermines the democratic process.
This isn’t about being the “arbiter of truth”, or about companies regulating free speech. It is about social media companies protecting their users from deliberately false information designed to confuse, create discontent, and manipulate the electorate in the service of bad actors’ agendas. Yet the companies are shirking their responsibility to deal with fake news on their sites, relying on reporting systems that are not fit for purpose.
It’s not just that they aren’t catching this misinformation; they are deliberately refusing to deal with it even when users report it and hand it to them directly. Social media sites have a moral and ethical duty to take serious and meaningful action. With the 2020 election drawing closer, there has never been a more important time for these companies to step up, once and for all, to address misinformation on their sites and help provide users with balanced, accurate, well-informed content.