What should the social media companies do with false information?

Two years ago, YouTube saw a surge of videos encouraging teenagers to eat Tide Pods, as if the detergent packets were secretly candy.

Black salve, a caustic black paste that eats through flesh, was enthusiastically recommended on Facebook last year as a cure for skin and breast cancer.

QAnon is an increasingly active topic on Twitter and Facebook, with too many posts promoting its insane brew of conspiracy theories. (New to QAnon? QAnon believers allege “that the world is run by a cabal of Satan-worshiping pedophiles who are plotting against President Trump while operating a global child sex-trafficking ring.”)

In May, an anti-vaccine propaganda video was viewed millions of times on Facebook and YouTube, with false assertions that vaccines “weaken” our immune systems and that wearing a mask will “activate” the coronavirus.

There are no laws governing the content that can be posted on social platforms run by private companies like Facebook. The Internet was built on the promise of free speech. But as I said last year:

“The negative aspects of free speech are now causing a backlash. Violent threats, fake news, bots, trolling, propaganda, election interference, harassment, gender bigotry, hate speech, racism, white nationalism, religious extremism: free speech has been weaponized, opening a debate about who can and should be heard online. The promise of the open Internet is at risk.”

There is increasing public pressure on the giant tech companies – Facebook, Twitter, and YouTube – to remove offensive content. The companies are aware that they can no longer defend themselves solely by praising the virtues of free speech. If they don’t act voluntarily, there will likely be moves toward regulation around the world, well-intentioned but all too likely to be ineffective or even counterproductive. In addition to demanding that content be removed, lawmakers will also consider fines and possibly company breakups, and perhaps even threaten to send company executives to jail.

In response, Facebook, Twitter, and YouTube have launched a massive experiment in private censorship. There are no laws to guide them, and very little consensus on what they should do. Almost everyone agrees that they should do something to moderate the comments and videos that people post online, with a lot of hand-waving about the details.

Let’s look at Covid misinformation from a few different angles to see why this is such a difficult problem.

Why is it difficult to prevent misinformation from spreading?

Social networks are working at a scale that is difficult for us to imagine. Facebook has two billion distinct active users every month. An army of human moderators is already at work, but a hundred armies could not keep up with the volume of posts around the globe. For better or worse, the world is operating at a scale that requires the tech companies to develop algorithms to help us communicate fairly and deal with each other honestly.

There are intense efforts to bypass moderation by everyone from true believers to trolls to foreign governments. Anything less than a complete ban on a topic will let objectionable content slip through the filters; to remove everything about a topic, moderation must be overbroad, sweeping up content that is not objectionable.

Facebook has been using human moderators to do initial reviews of the most severe content violations reported by users, with AI algorithms responsible for taking down additional flagged content, removing ads seeking to capitalize on the pandemic, and adding comments or links to authoritative sources of information. In the last few months, more of that moderation has been handled by AI: many human moderators have been sent home because of Covid, and they cannot review content from home for security and privacy reasons.
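
To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of how a hybrid human-plus-AI triage pipeline might be structured. The thresholds, labels, and the classify() scoring function are invented for illustration; this is not Facebook’s actual system.

```python
from dataclasses import dataclass

# Invented thresholds, for illustration only.
AUTO_REMOVE_THRESHOLD = 0.95   # confident enough to remove without human review
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain or severe: involve a human moderator

@dataclass
class Post:
    post_id: str
    text: str
    user_reported: bool = False

def classify(post: Post) -> float:
    """Stand-in for a trained model scoring how likely a post is harmful misinformation (0.0 to 1.0)."""
    # A real system would run a machine-learning classifier here.
    text = post.text.lower()
    return 0.97 if "pure alcohol" in text and "kill" in text else 0.1

def triage(post: Post) -> str:
    score = classify(post)
    if post.user_reported and score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"    # severe, user-reported violations go to human moderators first
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"     # AI takes down clear-cut violations on its own
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "label_and_link"  # borderline posts get a warning label plus a link to health authorities
    return "allow"

print(triage(Post("p1", "Pure alcohol kills the virus!", user_reported=True)))  # human_review
print(triage(Post("p2", "Wash your hands and wear a mask.")))                   # allow
```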

Facebook claims to have been making extraordinary efforts to prevent the spread of false Covid information. A Facebook spokesperson said: “Thanks to our global network of fact-checkers, from April to June, we applied warning labels to 98 million pieces of Covid-19 misinformation and removed 7 million pieces of content that could lead to imminent harm. We’ve directed over 2 billion people to resources from health authorities and when someone tries to share a link about Covid-19, we show them a pop-up to connect them with credible health information.”

Propaganda still slips through. A report last week by the nonprofit Avaaz found that pages from the top ten sites peddling pandemic conspiracy theories received almost four times as many views on Facebook as the top ten reputable sites for medical information.

Here’s a recent example. In May, the hoax video “Plandemic” was posted to Facebook and YouTube. Before it was removed a week later, “it had been viewed more than eight million times on YouTube, Facebook, Twitter, and Instagram and had generated countless other posts.” The New York Times wrote a thorough analysis of how it spread.

Facebook and the others were not ignoring the video or the millions of other pieces of false Covid information, but even a short delay is enough for a video to get all around the world. To this day, true believers and trolls are still making minor alterations and re-uploading Plandemic, trying to get past the moderators.
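
One reason lightly edited re-uploads are hard to catch is that an exact file hash changes completely after even a tiny alteration, so platforms rely on fuzzier fingerprints. Below is a toy illustration of the general idea, a simple “difference hash” of a single video frame using the Pillow imaging library; the file names are hypothetical and this is not any platform’s actual detection system.

```python
from PIL import Image  # pip install Pillow

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a difference hash: a 64-bit fingerprint that changes little
    when the image is re-encoded, resized, or slightly edited."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare a frame from a new upload against a frame from a known hoax video.
# original = dhash("known_hoax_frame.png")
# candidate = dhash("new_upload_frame.png")
# if hamming_distance(original, candidate) <= 10:
#     print("Likely a re-upload of known misinformation; route to review.")
```

Because small edits change only a few bits of the fingerprint, a copy that has been re-encoded, cropped slightly, or watermarked still lands within a small Hamming distance of the original, which is what makes this family of techniques useful against re-uploads.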

Why is it important to remove inaccurate Covid information?

False information about Covid can literally be a life-or-death issue.

Medical professionals are now used to dealing with patients misled by online health information. The New York Times reports that Covid has made the problem worse.

“In interviews, more than a dozen doctors and misinformation researchers in the United States and Europe said the volume [of misinformation] related to the virus was like nothing they had seen before. “This is no longer just an anecdotal observation that some individual doctors have made,” said Daniel Allington, a senior lecturer at King’s College London and co-author of a recent study that found people who obtained their news online, instead of from radio or television, were more likely to believe in conspiracy theories and not follow public health guidelines. “This is a statistically significant pattern that we can observe in a large survey.””

Last week, researchers released a paper studying one piece of misinformation that had circulated on social networks: the claim that drinking pure alcohol could kill the virus. According to the paper, at least 800 people worldwide died, and thousands more were hospitalized, as a direct result of the online rumors.

What can the social networks do?

Facebook and the other networks are trying to find the right balance between aggressive content moderation and respect for free speech and diverse public opinion. It’s not easy, although they can respond more aggressively to Covid misinformation because health claims raise fewer hot-button issues than political speech.

Facebook is removing false information and putting warning labels on misleading posts. Lawfare has this summary of some of its actions:

“Facebook has a page with running updates [on the actions] the company is taking, including a new Information Center at the top of people’s News Feeds providing real-time updates from national health authorities and global organizations such as the WHO; banning ads that seek to capitalize on the crisis and exploit panic; and removing false content or conspiracy theories about the pandemic “as an extension of [the platform’s] existing policies to remove content that could cause physical harm.””

There’s more it could do, of course. Here are two suggestions in a Guardian article:

“Two simple steps could hugely reduce the reach of misinformation. The first would be proactively correcting misinformation that was seen before it was labelled as false, by putting prominent corrections in users’ feeds.

“Recent research has found corrections like these can halve belief in incorrect reporting, Avaaz said. The other step would be to improve the detection and monitoring of translated and cloned material, so that Zuckerberg’s promise to starve the sites of their audiences is actually made good.”

There are constant cries for Facebook to be more transparent about its algorithms, although that seems more likely to help the bad guys, who will scrutinize the code looking for holes. Facebook may be forced to be more aggressive about blocking content, and to deal with the howls of people who complain they are being silenced.

There are no easy answers. Well, there are a few easy answers: wear a mask, wash your hands, don’t touch your face, and don’t believe everything you read on Facebook.
