Why Social Media Is So Bad At Stopping The Monsters Among Us
The road to Hell is paved with good intentions, St. Bernard of Clairvaux once observed, and in few places is this more true than in tech. You see, we in the tech world tend to believe that the tools we create will ultimately be used for good. Maybe it’s the utopian sci-fi we grew up reading, or that so many of us spend our time thinking about how to sell the benefits of our designs, but our ideas about the powers of the web and social apps tend to veer towards the idyllic. For a while there, we really thought that social media was empowering democracy and could tear down racism by bringing different people across the world onto the same platforms and making our differences trivial.
But in doing so, we forgot how humans actually work. We made it trivial to huddle in cozy echo chambers of like-minded people, to self-segregate and filter out anything we don’t want to hear, and for those with shady intentions to exploit fears and insecurities with surgical precision. In other words, we gave people new ways to connect, educate, and help each other, but also new ways to be far more effective at harassing, trolling, and abusing their targets. We were so worried about how to enable good people to reach out that we didn’t put a whole lot of thought into how to keep out those animated by hate.
Social media absolutely amplified the voices we needed to hear, the voices of people who want to inspire us to learn more, know more, explore further, to imagine the future, and humanize the two-dimensional caricatures of those portrayed in otherwise broad, careless strokes. But it gave the same reach to those who declare that having the right skin color is accomplishment enough, that those unlike you deserve hate, that we need to close minds and borders and reject science and modernity, retreating into the past. Why? Because to code, there’s no obvious difference between the two.
And this was rarely an issue for social networks until Facebook opened to everyone. When it was limited to college students, its users were young people far more interested in talking about school and making friends, and its competitor MySpace was mainly populated by teenagers who cared little for politics and went about making gaudy custom pages, hooking up, and trying their hands at blogging and music. Inviting these users’ parents to join slowly but surely transformed the very nature of social networking platforms. They went from being the place to get away from the real world and meet people to being the place where your embarrassing racist relatives post their manifestos.
Obviously, Facebook didn’t do this on purpose, and neither did Twitter. More than likely, they just didn’t expect what would happen and were still stuck in the utopian mindset of the early 2000s regarding the web’s potential. Virtually every web evangelist back then was talking about creating social links across the world and letting people meet strangers who would show them how, at a fundamental level, we were all, like, just part of the human family, all in it together on Spaceship Earth, man. Surely they expected there would be the occasional troll or spammer, but that’s why the block button is there.
What they didn’t expect was that people would take the very tools designed to understand them better and serve them more targeted ads, and use those tools to self-segregate into cozy echo chambers and connect with strangers just to threaten them with death or deportation, or to call them a bunch of globalist cucks. Just like America in 2016, they were in denial about how many of those aforementioned racist relatives there were, and as they found out their true numbers, they resorted to the tried and true playbook of the techie: if code creates a problem, more code will solve it. After all, code changed the world, so more code can change it more, this time the right way.
But that’s not how it works. One of the biggest realizations I made when my work became wider in scope and more abstract is that code is very much like a surgeon’s toolkit. We can deftly use it to solve mechanical problems. Need to crunch through a ton of data? Is there a repetitive process likely to be plagued by human error you’d rather avoid? Do you want to find something in a flood of bits and bytes? Code can do that for you and we have a million ways for it to solve your problems, much like surgeons can use their tools to remove foreign objects, stitch up and replace organs, or implant medical devices.
But if moderation on social networks was like removing an occasional bullet from a patient when they began, today, it’s like treating metastatic cancer. In the end, no matter how much you cut, another tumor will form somewhere else. The best we can do is use AI to flag the problem, but dealing with it isn’t going to be a matter of writing another chunk of code with the trendiest new language or open source libraries. The rampant abuse and misuse of social networks is a human problem, not a technological one. It requires looking at psychology, not APIs, and Silicon Valley is uniquely bad at doing that because its culture is that more code is the answer to everything.
Right now, a lot of people in high places are very busy blaming Facebook and Twitter for fake news and political meddling, as if they really could’ve waved a magic wand and stopped it. But the far more uncomfortable truth is that if you look at how Russian propaganda on those social networks worked, Americans could’ve easily rejected it as nonsense. All it did was play on racial and political tensions, and a lot of Americans fell for it not because they were duped or brainwashed, but because they wanted to believe it. They liked the conspiracy theories. They wanted the racist memes and propaganda. They were primed to believe the West is under siege and at war.
Far from being mind control, Russian fare was very much Fox News and Infowars without the subtle dog whistles. And not all of it was Russian either. It’s not exactly a secret that a number of American media companies run hyper-partisan sites and Facebook pages in both far-right and far-left versions, making millions by advertising to outrage clicks. In other words, companies are cashing in on telling people what they want to hear, finding their audiences with Facebook’s own self-promotional tools. How do you code around people in blissful denial that they’re being lied to, welcoming the disinformation with open arms while using the tools you provide to ensconce themselves in cozy, soothing echo chambers?
So if you want to know how social media became so angry and toxic, and why no one seems to be doing much to stop it, it’s because it was built on utopian ideals, assumed that trolls and bigots would be few and far between, and now finds itself stuck in a weird space in which private companies with investors and customers function as de facto public squares, getting drawn into debates about freedom of speech every time they start waving ban hammers around after shareholders ask them to clean up people’s timelines. It’s a weird place to be, and there are no easy solutions for fixing it.
Maybe there’s no way to actually fix it in a conventional sense. We have to be willing to stop vomiting hateful bile at each other on one extreme, and resorting to absurd both-sidesism on the other. Code and technology didn’t do this to us; we did it to each other with code and technology. And we really need to stop before it’s too late, because far from moving forward, we’re using the tools we thought would make the modern world easier, more fun, and more useful to drag ourselves backward, mostly because a lot of us felt bad that the world was moving on regardless of how we felt about it, and wanted to take it out on, oh, pretty much everybody else.