How Mark Zuckerberg Helped Break American Democracy

Facebook has failed to protect its users from disinformation and extremist communities. Its attempts to clean up the platform are too little, too late.
Facebook CEO Mark Zuckerberg arrives before a joint hearing of the Commerce and Judiciary Committees on Capitol Hill in Washington, Tuesday, April 10, 2018. (AP Photo/Andrew Harnik)

Russian Interference, Fake News, And The 2016 Election

As the 2016 election campaigns were kicking off, a massive Russian troll farm was expanding its reach on the platform. The Internet Research Agency (IRA), a Russian disinformation agency based in St. Petersburg, was a central part of Russia’s infamous interference in American democracy. The IRA created thousands of fake accounts, groups, and even advertisements, all with the sole intention of sowing division and mistrust among Americans. Ultimately, its content would reach 126 million users throughout 2016 and 2017.

These trolls added fuel to the fire of perhaps the greatest problem facing the company: extreme online communities that congregated as echo chambers, rapidly spreading fake news to rile up support for their various causes. In October 2016, BuzzFeed News published an investigation of a number of well-known hyperpartisan pages. Researchers found that, across every page, the most popular posts were almost always the least accurate. Put simply: the more misleading the content, the greater the interactions. In contrast, mainstream news pages, which carried less partisan content, saw significantly lower traffic.

In May 2016, just months before the election, several former Facebook employees admitted to suppressing conservative news in the site’s “Trending” section. They said that when a right-wing story started trending, Facebook’s news “curators” would often manually replace it with a different topic, or with the same story from a less conservative-leaning source. Notably, the employees were also instructed to keep news about Facebook itself out of the Trending section, even when numerous posts about it were appearing across the site.

After this broken system was publicly revealed, Facebook began changing how it handled circulating news. At the end of August, the entire editorial team was let go and replaced by engineers, who were instructed to stay out of the process and let stories trend via algorithm alone. This worked for a total of two days. Then a fake story about Megyn Kelly being fired from Fox News hit Facebook’s top trending spot and remained there for several hours before being taken down. The company apologized, explaining that the story had met the algorithm’s standard of “a sufficient number of relevant articles and posts about that topic.” On September 11, less than two weeks later, Facebook trended an article claiming the World Trade Center attacks were “controlled explosions.”

Some of the most widely shared fake news articles of 2016 were:

  • “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement” (961,000 shares, comments, and reactions)
  • “Trump Offering Free One-Way Tickets to Africa & Mexico for Those Who Wanna Leave America” (802,000)
  • “FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide” (567,000)

The top fake news story that year was “Obama Signs Executive Order Banning The Pledge of Allegiance In Schools Nationwide”, with 2,177,000 interactions.

Just days after the election, Mark Zuckerberg defended Facebook’s minimal response and dismissed the impact of fake news, saying, “Personally, I think the idea that fake news on Facebook…influenced the election in any way is a pretty crazy idea.” Two days later, Zuckerberg published a post laying out his thinking on Facebook’s role in combating fake news. In it he wrote:

“I am confident we can find ways for our community to tell us what content is most meaningful, but we must be extremely cautious about becoming arbiters of truth ourselves.”

Becoming an “arbiter of truth” is a role Zuckerberg has always been averse to. Freedom of expression is Facebook’s crown jewel. But when posts parade as news rather than opinion, truth must be arbitrated. Advertisements–especially political advertisements–can be considered news, and Zuckerberg refused to fact-check any political ads. (This is about par with Facebook’s reckless approach to advertising, given that Cambridge Analytica exploited the personal data of millions of Facebook users and funneled it to the Trump campaign.)

He pushed back by saying that facilitating news was not Facebook’s responsibility:

“News and media are not the primary things people do on Facebook, so I find it odd when people insist we call ourselves a news or media company in order to acknowledge its importance. Facebook is mostly about helping people stay connected with friends and family.”

According to a Pew study published in 2016, this was entirely false. Results showed that two-thirds of American adults used Facebook and that two-thirds of Facebook users got news on the platform. Two-thirds of two-thirds works out to 44% of American adults getting news from Facebook–and most of them relied on Facebook alone among social media sites. Whatever Zuckerberg may have believed, it was impossible to deny that a great many voters that year had been regularly getting news from his platform.


Failures To Combat Misinformation

In response to the public fallout, Facebook began implementing new measures to battle the spread of misinformation. In December 2016, the company unveiled a new feature allowing users to report posts as “fake news”. A post that drew a large number of reports would be reviewed by prominent third-party fact-checking websites before being officially flagged as “Disputed by [websites]”. Over the years, though, this strategy has proven largely ineffective, for several reasons:

  • Reported posts often stay online for days or weeks before they are finally reviewed and labeled. By then, they have already skyrocketed in popularity and been seen by thousands of users.
  • Many of the people seeing these posts already distrust the media and simply ignore “disputed” warnings. Because Facebook doesn’t actually take the posts down, users can still read and share fake news with friends who care as little about the labels as they do.
  • Some fake news stories are never flagged at all. They slip through the cracks of an imperfect system and still make it into users’ news feeds.

One of Facebook’s newest means of handling misinformation is its appointment of an Oversight Board, an organization independent of the company and made up of members with expertise in human rights, law, technology, media, and politics. Originally announced in September 2019, the Oversight Board will finally launch in mid-October. According to its website, the board’s purpose is to “promote free expression by making principled, independent decisions regarding content…and by issuing recommendations on the relevant Facebook company content policy.”

In other words, if Facebook initially dismisses a case, the reporter can appeal to the board. An assigned panel then evaluates the reported post, provided the case is “difficult, significant and globally…can inform future policy.” Unfortunately, the site mentions no specific criteria for a post’s content. Even worse, the board is unlikely to resolve any cases before the election, given that a decision can take up to ninety days to be issued and implemented by Facebook. As a result, a group of critics of the strategy has recently banded together to form the “Real Facebook Oversight Board”. This alternative organization has written up a list of demands insisting Facebook take further action to prevent the spread of fake news and dangerous misinformation:

  • Remove all posts inciting violence–even those from the President himself. The fact that this hasn’t already been done proves that Facebook is willing to ignore its own policies.
  • Ban all paid advertising that mentions election results before the victor is officially declared and the other candidate concedes.
  • Label posts claiming election results before a victor is declared as “untrue and premature”.

Neither Zuckerberg nor his company has publicly addressed the organization’s demands. The Oversight Board website’s most recent update, a promise to “share further updates as we make progress”, came in June, and the board hasn’t posted on social media since July.

According to its “Policy Rationale”, Facebook allows many questionable posts because “there is…a fine line between false news and satire or opinion.” It also strives to avoid “stifling productive public discourse.” This is the philosophy so many people take issue with: the company’s attempt at transparency does nothing to define what kind of news should and shouldn’t be allowed on the site. Before leaving the company in July, software engineer Max Wang shared a video of himself on an internal message board, describing his frustrations with Facebook’s decisions over his seven years there and blaming leadership for hurting users rather than protecting them. “We are failing,” he said. “And what’s worse, we have enshrined that failure in our policies.”

Conservative Influence

As mentioned above, a significant problem facing the company is misinformation stemming from heavily partisan news pages. Despite promises to keep fake news at bay, these pages have been allowed to grow. Recent testimony from employees reveals a distinct lack of initiative on Facebook’s part to minimize polarization. On the contrary, the company is actively worsening the problem.

One way Facebook has attempted to silence fake news sources is through “strikes”, issued to pages as warnings for posting misinformation. Should a page receive two strikes within 90 days, its advertising is removed and its content distribution is limited. However, leaks reported by NBC News in August revealed that an anti-misinformation team “with direct oversight from company leadership” had been deleting strikes for the previous six months. The majority of the deleted strikes had been issued to right-wing pages such as Gateway Pundit, Breitbart, Eric Trump, and Donald Trump Jr.

Given the suppression scandal of 2016, it’s easy to see why Facebook would be comfortable with such a corrupt system. There is a clear connection between the strike removals and conservatives’ mass complaints of bias and censorship. These accusations are not unique to Facebook–Google, YouTube, and Twitter have also come under fire, despite minimal evidence of censorship on any of the platforms. Media Matters, a nonprofit that identifies misinformation in the media, has published multiple studies disputing these complaints. On an internal message board, one Facebook employee asked, “Research has shown no bias against conservatives…so why are we trying to appease them?”

In reality, conservatives aren’t being turned away at all–quite the opposite. Data shows that the top-performing Facebook pages are consistently conservative. “Facebook’s Top 10”, a Twitter account tracking the Facebook pages that receive the most interactions, lists the ten highest for each day. Regularly included are Ben Shapiro, Fox News, Dan Bongino, Breitbart, Franklin Graham, and various conservative community pages such as USA Patriots For Donald Trump, Blue Lives Matter, and Donald Trump 2020 Voters. Every so often, all ten pages on the list are conservative. Suffice it to say, conservative pages on Facebook aren’t the martyrs they have been portrayed as. Instead, the company is letting them flourish despite their repeated disregard of its policies.


Failures To Combat Extremism

Unfortunately, Facebook has also permitted a darker corner of the platform to rise in popularity: extremist groups. Currently, the main threats are white supremacists and militant groups, and the two often go hand-in-hand, as evidenced by the Proud Boys and Boogaloo movements. Both have vocally denied connections to racism, yet their explicit ties to white supremacy cannot be ignored. And although these groups regularly post content inciting and encouraging violence, Facebook has been slow to take down their pages.

Back in June, Facebook announced that it would make a concerted effort to ban more Boogaloo pages. That same day, eighteen new pages were created, and around the same time, private groups began circulating bomb-making instructions and a kidnapping tutorial.

More recently, Facebook has been embroiled in scandal for allowing the militia group “Kenosha Guard” to remain on the platform. The group grabbed the spotlight after it hosted an event called “Armed Citizens to Protect our Lives and Property”, a call to arms for militants to show up in Kenosha, Wisconsin during the evening protest on August 25th that culminated in the murder of two protesters by Kyle Rittenhouse.

The larger problem for Facebook was not simply the militia’s ability to thrive, but the fact that the event had received 455 complaints of organized violence–a violation of a specific Community Standards policy–without being taken down. Ultimately, the event was removed, but not by Facebook: an administrator of the Kenosha Guard took it down the next day. Hours later, Facebook banned the page altogether.

Mark Zuckerberg later published a video in which he claimed credit for removing both the event and the page; Facebook subsequently apologized for that false claim. Zuckerberg called the failure to remove the page sooner an “operational mistake”, blaming the contractor team formed to identify and ban “dangerous organizations”. The entire episode has cast light on a disturbing truth: it is easy to create and grow an organization that regularly praises violence on a platform designed to “help people stay connected with friends and family.” Facebook did recently take a major step forward by choosing to ban all QAnon accounts and pages. Time will tell whether it can make good on that promise.

Efforts To Encourage Voting

At the beginning of September, Zuckerberg announced new measures Facebook would take to encourage users to vote. “With Covid-19 affecting communities across the country,” he wrote in a post, “I’m concerned about the challenges people could face when voting.” He went on to describe ways Facebook would help people overcome those challenges:

  • Keeping basic voting information and tutorials at the top of users’ News Feeds.
  • Blocking political ads during the last week before Election Day.
  • Removing pandemic-related posts that discourage users from voting.
  • Limiting the number of posts users can forward at once via Messenger.
  • Posting up-to-date election results and flagging posts with incorrect data.

Zuckerberg reiterated many of the steps the company had already taken, such as removing violent posts and disbanding conspiracy, militia, and extremist groups. He also mentioned Facebook’s continued battle against fake news pages created by foreign influence campaigns. “It’s going to take a concerted effort by all of us to live up to our responsibilities,” he concluded. “We all have a part to play in making sure that the democratic process works.” To his credit, approximately 2.5 million people have registered to vote through Facebook and Instagram.

However, the company has recently come under fire for allowing the President himself to spread misleading voting information. Hours after Zuckerberg’s announcement, President Trump posted a suggestion that voters attempt to vote twice–a felony. The post was almost immediately flagged and soon thereafter presented to one of Facebook’s policy teams. Ultimately, a simple label explaining the security of mail-in ballots was added to the post. Despite the post clearly breaking the platform’s Community Standards, nothing further was done.

Employees and users alike were dumbfounded by the failure to remove false and potentially criminal content–all the more so because Zuckerberg had announced Facebook’s plans to circulate crucial voting information that very same day. Given the post’s timing, many within the company believed Trump had learned of the plans and decided to test Zuckerberg’s will. Ultimately, the President came out victorious.

Facebook Employees’ Experiences

To understand the chaos that has ensued within Facebook itself, one must examine employees’ internal experiences. As police brutality protests sprang up nationwide in late May, President Trump tweeted his infamous “when the looting starts, the shooting starts”, which was published on Facebook as well. The post was not taken down, despite vocal outrage from employees themselves. Zuckerberg explained that the phrase could be interpreted–despite its historical meaning–not as a threat but as an opening for discussion about the use of force.

Unconvinced and angry, hundreds of employees staged a virtual walkout, changing their Workplace avatars to the Black Lives Matter logo and “calling in sick”. That week, morale crumbled. According to a regular satisfaction survey, the share of employees who believed Facebook was making the world a better place dropped by 25 percentage points, and confidence that leaders were taking the company in the right direction dropped by 30 points. Since then, the work environment has grown increasingly divided, with some employees supporting Zuckerberg and others openly disputing him.

On June 11, Zuckerberg bluntly addressed the tensions in an employee Q&A:

“I’ve been very worried…about the level of disrespect and, in some cases, vitriol that a lot of people…are directing towards each other… If you’re bullying your fellow colleagues into taking a position, then we will fire you.”

This came soon after Facebook engineer Brandon Dail was fired for publicly criticizing another employee who had declined to publish a statement of support for Black Lives Matter. Dail had also tweeted–during the meeting in which Zuckerberg defended his decision–that it was “crystal clear today that leadership refuses to stand with us.” Facebook has denied that Dail was dismissed for speaking out, saying instead that he was fired for harassment.

In response to a meeting in which Zuckerberg discussed labeling violent posts, an exasperated employee wrote on Workplace:

“What material effect does any of this have? Commitments to review offer nothing material. Has nothing changed for you in any meaningful way? Are you at all willing to be wrong here?”

This appears to be the sentiment of many employees fed up with their CEO’s decisions: disbelief and frustration at a leader who insists he knows best despite massive outcry for him to change course.

The Rantt Rundown

Through the years, Mark Zuckerberg has consistently defended the importance of “free expression”, but he has done so at the cost of permitting countless pages and posts that are misleading, violent, and hateful. White supremacy, hyperpartisanship, fake news, and militia activity have been allowed to spread across Facebook, met only with “one step forward, two steps back” solutions. Zuckerberg’s attempt at staying politically neutral by encouraging what he considers to be free speech has failed, and until he reckons with that failure and makes significant policy changes, it’s unlikely the platform will ever improve.
