How Social Media Can Fight Deepfakes

Researchers keep finding ways to flag fake videos and manipulated photos. Social media platforms need to start integrating those tools into their software.
Screenshot of a deepfake video featuring Mark Zuckerberg

Social media today is a hive of hoaxes, propaganda, outrage, and disinformation, with a few rare bright spots. Younger generations are fleeing in droves for social apps that offer up far less data to their operators and give them a reprieve from the angry screeds and racist memes posted by relatives and family friends who, after 2016, no longer felt the need to pretend they weren't bigots. In this context, a number of tech pundits have been rightfully worried about the emergence of deepfakes: doctored images and videos made by artificial neural networks originally intended for special effects.

These creations can make you say whatever a hoaxer wants you to say, insert you into a place you've never been, even strip you naked and add you to a porn video. With the most active denizens of social media seemingly unable or unwilling to tell truth from fiction, deepfakes seem poised to destroy the very fabric of our social interaction online. But there's hope. By virtue of how artificial neural networks work, the images they generate contain telltale artifacts, and while those artifacts may be invisible to the human eye, other computers can detect them.
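
One published line of research finds those fingerprints in the frequency domain: the upsampling layers of generative networks leave periodic patterns that stand out in an image's power spectrum. The Python sketch below illustrates that general idea only; the feature, the 0.5 ratio threshold, and the looks_synthetic name are assumptions for this sketch, and a real detector would train a classifier on top of features like this rather than hard-code a cutoff.

```python
# Illustrative sketch: radially averaged power spectrum as a crude
# synthetic-image feature. Not a production detector.
import numpy as np
from PIL import Image

def azimuthal_power_spectrum(path: str, bins: int = 64) -> np.ndarray:
    """Radially averaged log power spectrum of a grayscale image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    r_max = r.max()
    profile = np.zeros(bins)
    for i in range(bins):
        mask = (r >= i * r_max / bins) & (r < (i + 1) * r_max / bins)
        profile[i] = spectrum[mask].mean() if mask.any() else 0.0
    return np.log1p(profile)

def looks_synthetic(path: str, threshold: float = 0.5) -> bool:
    """Flag images whose high-frequency energy is abnormally high --
    a crude stand-in for a trained classifier over this feature."""
    profile = azimuthal_power_spectrum(path)
    # GAN upsampling often inflates the tail of the spectrum relative
    # to natural photographs; 0.5 is an assumed, illustrative cutoff.
    tail = profile[-16:].mean()
    head = profile[:16].mean()
    return (tail / head) > threshold
```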

This is the approach behind technology currently powering a tool to identify images that have been photoshopped, and a system to detect whether videos of famous politicians have been artificially created. Unfortunately, the rest of us schlubs are going to have a hard time finding this sort of digital protection because there are simply too few images of us for computers to learn what the genuine article looks like. The silver lining is that the same scarcity gives deepfake networks less to work with when generating passable frauds. So, in other words, while deepfakes may be coming for us like the Night King with his army of wights marching on Westeros, we do have the tools to stop them.
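
The politician-focused system mentioned above reportedly works by modeling how a specific person's face moves when they speak, then flagging footage that deviates from that signature. Here is a minimal sketch of that general idea, with a heavy caveat: extract_action_units() is a placeholder stub (a real pipeline would use a face-tracking toolkit such as OpenFace), and the file names and model parameters are assumptions for illustration.

```python
# Sketch of a per-person "mannerism" detector: learn one person's normal
# facial dynamics from authentic footage, flag clips that deviate.
import numpy as np
from sklearn.svm import OneClassSVM

def extract_action_units(video_path: str) -> np.ndarray:
    """Placeholder: per-frame facial action-unit intensities.
    A real pipeline would use a face tracker such as OpenFace here."""
    rng = np.random.default_rng(hash(video_path) % (2 ** 32))
    return rng.random((300, 16))  # 300 frames x 16 action units

def mannerism_features(video_path: str) -> np.ndarray:
    """Pairwise correlations between action units over a clip -- a compact
    signature of how this person's face moves when they speak."""
    aus = extract_action_units(video_path)
    corr = np.corrcoef(aus.T)
    return corr[np.triu_indices_from(corr, k=1)]

# Train on authentic footage of one person, then score a suspect clip.
# File names here are hypothetical.
authentic = np.stack(
    [mannerism_features(f"real_clip_{i}.mp4") for i in range(20)])
model = OneClassSVM(nu=0.1, gamma="scale").fit(authentic)
is_fake = model.predict([mannerism_features("suspect_clip.mp4")])[0] == -1
```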


To create a truly convincing fake image or video in the first place, deepfake neural networks have to study thousands, if not tens of thousands, of images of their target. They then stitch together something altogether new from that library. But in the process, they can't account for the finer details of facial movements and speech patterns, and they have trouble resizing and rotating the pixels used to render the victim's face because all of their work is done in two dimensions. Of course, there is an arms race between those who detect deepfakes and those who produce them, and future generations of deepfakes will eventually start using 3D renderings, which will confuse existing detection tools.
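
That 2D warping weakness is concrete enough to check for: pasting a resized, rotated face back into a frame tends to soften that region relative to its surroundings. The sketch below compares local sharpness as a crude stand-in for the published warping-artifact detectors. The Haar-cascade face finder is standard OpenCV, but the 0.5 ratio threshold and the single-frame test are simplifying assumptions.

```python
# Illustrative check: is the detected face region noticeably blurrier
# than the frame as a whole? Warped-in faces often are.
import cv2
import numpy as np

def sharpness(region: np.ndarray) -> float:
    """Variance of the Laplacian -- a standard blur measure."""
    return cv2.Laplacian(region, cv2.CV_64F).var()

def face_looks_resampled(frame: np.ndarray,
                         ratio_threshold: float = 0.5) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return False  # nothing to judge
    x, y, w, h = faces[0]
    face_sharp = sharpness(gray[y:y + h, x:x + w])
    frame_sharp = sharpness(gray)
    # A face much softer than the rest of the frame suggests it was
    # resized and rotated in from another source.
    return face_sharp < ratio_threshold * frame_sharp
```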

Even then, however, there will often be something just out of place enough for computers to raise a red flag, because a perfect deepfake render would require the victim's face and facial expressions to be scanned in excruciating detail with extremely high-resolution cameras and lenses. Fighting these fake videos will not be a cakewalk, and it will require researchers to push the envelope of what we can do with image analysis. That said, it's far from a hopeless task, and there is plenty of room for improvement, because those trying to debunk deepfakes have access to the same exact technology as those who make them, and can examine it for both obvious and seemingly invisible flaws.

The code behind detecting those telltale artifacts could easily be packaged into a middleware service and integrated with social media platforms. Every time you uploaded a video or image, it could be scanned for evidence of manipulation in the background, and if it failed to pass, it would be tagged with a warning. Of course, devoted conspiracy theorists and political tribalists won't be swayed by something as trivial as math itself telling them that the images they're sharing are fraudulent. They'll be far too invested in the narratives in their heads to care. But those on the fence or undecided, prime targets for conversion into the world of fake news and propaganda, will at least have some pause.
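
As a sketch of what that middleware could look like, here is a toy upload endpoint in Flask. The detect_manipulation() scorer, its 0-to-1 score, and the 0.8 threshold are all hypothetical; the point is the shape of the integration, where media gets scored inline with the upload and tagged rather than blocked.

```python
# Toy upload pipeline: score each uploaded file, attach a visible
# warning to suspect media instead of rejecting it outright.
from flask import Flask, request, jsonify

app = Flask(__name__)

def detect_manipulation(data: bytes) -> float:
    """Hypothetical detector returning a 0-1 manipulation score.
    In practice this would call models like the ones described above."""
    return 0.0  # stub for illustration

@app.route("/upload", methods=["POST"])
def upload():
    media = request.files["media"].read()
    score = detect_manipulation(media)
    record = {"status": "stored", "manipulation_score": score}
    if score > 0.8:  # assumed threshold
        # The post still goes up, but it carries a warning label.
        record["warning"] = "This media shows signs of manipulation."
    return jsonify(record)
```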

Explaining why social media networks were marking images shared by particular groups as fake would require another layer of conspiracy: the powers that be must be trying to silence the group in question, which is why all of its videos get flagged. This would not be a novel argument for social media's top trolls who, coincidentally, assembled this week to be patted on the back by President Trump at the White House for spreading bigoted hoaxes and conspiracies across Twitter, Facebook, and YouTube, while demanding favorable treatment from the same social media companies trying to distance themselves from their content. And while many people believe them, far more seem alarmed by what social media platforms tolerate in the first place.

They don’t have time to listen to elaborate conspiracies and participate in pity parties for right-wing trolls, and they have even less desire to dive into the cesspools these trolls are desperate to create. If much of the content being shared is marked as suspect or outright fake, they’re not going to spend the next three hours wading through a tsunami of screenshots from 8chan trying to cast math and science as “the real fakes”; they’re just going to move on. On top of that, retroactive and per-upload scans for deepfakes and hoaxes would be especially useful for combating the worst suggestions of recommendation algorithms, which we empirically know can fuel hate, bigotry, and racial resentment by sending social media users down a rabbit hole of far-right propaganda.

And this brings us to the question we should be asking every social media platform: will they be working with researchers to add this technology to their media upload process? Based on Facebook’s abhorrent reaction to a doctored video of Nancy Pelosi, and its decision to essentially ignore a deepfake of CEO Mark Zuckerberg himself, the answer is probably going to be no. After all, their incentive is to keep people on the sites and apps even if they’re being exposed to lies, and because the people and a record of their habits are the product being sold, it doesn’t matter whether those people are being informed or manipulated, or what real-world impact their inability to tell truth from fiction will have. The only thing that’s important to these companies is that people keep scrolling down their timelines.

But this narrow focus on short-term profit and callous disregard for the societies in which they operate will only cement the growing distaste and distrust for social media platforms among the generations of future users they need to stay in business. Instead of being seen as useful tools for reaching out to friends, family, and neighbors, social networks will keep their reputations as hives of racism and villainy: places for angry old people to write screeds about everyone they hate and share lies and hoaxes that could have been detected and tagged before they were allowed to race across the web like a plague of locusts.

It would be one thing if we were unable to stop the tide of fake news and could do nothing but hope that enough people would see through the ruse, as has happened in Finland. But if we have the technology and let it go to waste for the sake of meeting some quarterly target that will be forgotten the next day, that would truly be a disgrace of monstrous proportions. Eventually, the older users of social media binging on their favorite propaganda will pass on. Will younger generations really want to use the same platforms that could have stopped their parents’ and grandparents’ descent into madness with a few hundred lines of code and said “nah”?

Given that we know Russian trolls used these platforms to launder propaganda and conspiracy theories, and catfished political activists to interfere in the 2016 American presidential election as well as electoral contests across the West, and will doubtlessly try again in 2020, doing nothing about deepfakes, or hiding behind transparently nonsensical excuses to do nothing, seems not just ill-advised but criminally negligent. And considering that President Trump greenlit and welcomed such interference by foreign powers, any social media executive who says it won’t happen again is either delusional or complicit.

