Freedom of speech and content moderation

We believe in freedom of speech in the US, except when people actually exercise it. Then we discover there are lots of people who should be seen but not heard – and a fair number that we wish we couldn’t see, either.

Governments, regulators and large tech companies are launching into a debate about content moderation that will be long, messy, and unsatisfying. Along the way we might make progress toward more thoughtful and considerate online reporting and conversations, but not smoothly.

I have a few examples that have been in the news recently – no answers but lots of issues that might make you say, Hmmmmm . . .

Free speech: utopian principles and backlash

The Internet was founded on utopian principles of free speech. Free speech ideals were instilled in the DNA of the big tech companies: each person would have an equal voice online and free access to information; the natural result would be truth and beauty, exposing falsehoods and shining a light on the corrupt.

Keep the big picture in mind. The utopian vision of the Internet has by and large come true. We – the global community, all of us – experience the benefits of free and open communication every day.

Facebook, YouTube, Twitter, Reddit, and other large networks were built to allow almost anyone to post almost anything. As they grew, they monetized their platforms by selling advertisements and driving up engagement to maximize the number of ads shown to us. US law – Section 230 of the Communications Decency Act – gives the tech companies immunity from liability for most content posted by their users; with that protection, they continued to permit content to be posted with as little moderation as possible, both to fulfill the vision of open communication and because moderation is extraordinarily difficult as a practical matter.

The big tech companies have resisted calls to moderate content for as long as they could. Making choices about what is or is not allowable requires huge investments in human reviewers and machine learning/AI, and inevitably involves difficult and controversial decisions. Almost by definition, limiting content reduces engagement, which interferes with maximizing profits, the single-minded goal of public companies.

But the negative aspects of free speech are now causing a backlash. Violent threats, fake news, bots, trolling, propaganda, election interference, harassment, gender bigotry, hate speech, racism, white nationalism, religious extremism: free speech has been weaponized, opening a debate about who can and should be heard online. The promise of the open Internet is at risk.

Content moderation by tech companies

Mark Zuckerberg published an editorial in the Washington Post over the weekend acknowledging that Facebook is under intense pressure to begin moderating content. “One idea is for third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards,” he writes. “Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum.”

Our instinctive reaction tends to be based on the edge cases that seem “easy.” For example, Facebook and YouTube are big companies with the resources to identify and take down videos and posts that advocate violence, right?

White nationalism  White supremacy – well, that gets trickier, because those communities thrive on code words and gestures, and cloak their worst tendencies in bland bromides. The uproar over video of the Christchurch shootings proved to be the final straw. Last week Facebook said it would block “praise, support and representation of white nationalism and separatism” on Facebook and Instagram, and pledged to improve its ability to identify and block material from terrorist groups. Even that involves difficult choices: Facebook said that after three months of consultation with “members of civil society and academics”, it found that white nationalism could not be “meaningfully separated” from white supremacy and organized hate groups. Predictably, there are groups of people complaining that they are being unfairly silenced.

Pedophilia  How about child pornography and pedophilia? That should be easy, right? Nothing in the modern world is easy. YouTube recently had to act after attention was drawn to thoroughly disgusting behavior going on in the YouTube comments – but not in the videos themselves. Pedophiles were searching for completely innocent videos of children playing. If a young girl on a swing momentarily exposed her underwear, the comment section would fill up with sexualized remarks and time stamps so others could jump directly to that moment in the video, and links would be posted in online forums. The videos would then get enough views from like-minded sick people that they would rise in YouTube’s recommendation engine, and all of a sudden they would be only a click or two away from, say, women’s shopping videos. Although Google has been trying for years to stop pedophiles from using YouTube this way, they still found ways to connect with each other, egg each other on, and engage in predatory behavior. Google is now updating its algorithms to disable comments on any video that features anyone under 13, and is standing by to extend the ban to videos featuring anyone 13 to 18 if necessary.

Christchurch shooting  Critics say Google and Facebook should do more. The Christchurch shooting provides a good example of how difficult that will be.

The mosque shooting was streamed on Facebook Live but viewed live by only a couple of hundred people. The video was then uploaded to Facebook, YouTube, Twitter and Instagram, along with links to an online manifesto, all designed to maximize attention. The tech companies attempted to block the videos at upload, but there was an extended period when copies were still visible on each of the services. Outrage ensued, because outrage is our default reaction to everything. Why can’t these companies block obviously offensive content?

Facebook announced that it had removed 1.5 million copies of the shooter’s video in the first 24 hours, most of them blocked at upload. That can only be done by automated filters analyzing videos on the fly, looking for identifying characteristics. Some copies still made it online because people are scum. They’re editing the video to make it subtly different from the original. Or they’re uploading a mirror-reversed copy. Or they’re filming the video while it plays on another device, a screen on another screen. They’re trading tips about how to game the system and defeat the AI filters. And they’re doing this at a scale that is effectively impossible to monitor in real time.
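To make the cat-and-mouse game concrete, here is a minimal sketch of one simple fingerprinting technique, perceptual “difference hashing” – not Facebook’s or YouTube’s actual system, which is far more sophisticated. It assumes only the Pillow imaging library. The point is that a mirror-reversed or lightly edited frame produces a very different fingerprint even though a human sees essentially the same image, which is exactly how re-uploaders slip past naive filters.

```python
# Minimal illustration (not any platform's real system): fingerprint a video
# frame with "difference hashing" and compare fingerprints by Hamming distance.
from PIL import Image  # assumes the Pillow library is installed


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Shrink the frame to a tiny grayscale grid and record, bit by bit,
    whether each pixel is brighter than its left-hand neighbor."""
    small = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if right > left else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance means 'probably the same frame.'"""
    return bin(a ^ b).count("1")


# Hypothetical usage: a mirror-reversed copy flips every left/right comparison,
# so its fingerprint lands far from the original and a simple match fails.
# original = Image.open("frame.png")
# mirrored = original.transpose(Image.FLIP_LEFT_RIGHT)
# print(hamming_distance(dhash(original), dhash(mirrored)))
```

Real filters hash many frames, use more robust fingerprints, and tolerate some distance between hashes – but every added tolerance also increases false matches, which is why this remains an arms race rather than a solved problem.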

Content moderation by ISPs  There’s another place where barriers can be set up to prevent us from seeing harmful content, and this one is much more troubling. In New Zealand and Australia, the ISPs – the local equivalents of Comcast and Verizon – blocked all access to 4chan and 8chan, the message boards that most actively spread the Christchurch video. They didn’t do that in collaboration with law enforcement or politicians; they just blocked them. Decent people can’t feel too bad – those message boards are terrible places full of the dregs of humanity. But those sites also host thriving boards for anime lovers and gamers and other perfectly legitimate topics of conversation. ISPs are not supposed to erect barriers between the users they serve and the websites those users want to visit. New Zealand and Australia do not have net neutrality laws to prevent ISPs from blocking websites; at the moment, neither does the US. Do you want Comcast deciding what websites you can visit?

Content moderation by legislation

Content moderation is hard. Only the largest tech companies have the resources to do this kind of algorithmic moderation successfully. One of the consequences of calling for content moderation is that it favors the large companies and makes it more difficult or impossible for smaller companies to compete. The big will grow bigger.

Send executives to jail?  Legislation is now being proposed to deal with the abuse of free speech. Australia has proposed a bill that could send executives of social networks to jail for up to three years if they do not “expeditiously” remove “abhorrent” violent content. Australia’s prime minister hopes the law will serve as a “model approach” for G20 countries to follow. A draconian remedy like this would have an obvious chilling effect on companies big and small.

EU Copyright Directive  Meanwhile the EU just passed Article 13 (now Article 17) of the Copyright Directive, which has the potential to sharply reduce our access to online material in the name of “copyright” – another type of content moderation. Powerful entertainment industry lobbyists have pushed the EU into a mandate that all online platforms must block uploads of anything claimed to be copyrighted. The only way to accomplish that is to implement filters that check uploads for copyrighted material on the fly. The EFF explains: “Filters would be incredibly expensive to create, would erroneously block whole libraries’ worth of legitimate materials, allow libraries’ worth more of infringing materials to slip through, and would not be capable of sorting out ‘fair dealing’ uses of copyrighted works from infringing ones.” Only the largest tech companies can even begin to create adequate filters; smaller companies may be forced out of business, or at least may shut down virtually all user uploads. Individual countries now have to implement the directive; France’s Minister for Culture gave a speech last week admitting that compliance is impossible without filters, and France may be the first country to turn the directive into French law. The Internet transcends national borders, and the effect of the EU directive may be to restrict your ability in the US to see anything online without the approval of the entertainment industry.
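To see why the EFF expects both kinds of errors, here is a minimal sketch of an upload filter matching uploads against a database of claimed works; the fingerprint function, the claimed_works data, and the matching rule are placeholders, not any real system. An exact hash misses every re-encoded or trimmed copy (false negatives), while a looser perceptual match inevitably flags quotation, parody, and other fair-dealing uses (false positives), because the filter has no way to know why a clip was used.

```python
# Minimal illustration (placeholder names and data, not a real matching system):
# check each chunk of an upload against fingerprints claimed by rights holders.
import hashlib


def fingerprint(chunk: bytes) -> str:
    # Placeholder: an exact hash, which any re-encoding or trim will evade.
    # Real filters use robust perceptual fingerprints instead, which then
    # also start matching transformative, fair-dealing uses.
    return hashlib.sha256(chunk).hexdigest()


# Hypothetical database supplied by rights holders.
claimed_works = {
    fingerprint(b"three-minute clip from a blockbuster"): "Example Studio",
}


def check_upload(chunks: list[bytes]) -> list[str]:
    """Return the rights holders whose claimed works appear in the upload."""
    return [claimed_works[fp] for fp in map(fingerprint, chunks) if fp in claimed_works]


# The filter flags the first chunk and waves the second through; it cannot
# tell whether a match is piracy, criticism, or a meme.
print(check_upload([b"three-minute clip from a blockbuster", b"original home video"]))
```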

When content moderation turns to censorship

China censorship

There are only blurry, indistinct lines between copyright enforcement, elimination of violent content, and outright censorship. Enthusiasm is growing in the US for government regulation of content, because surely our government would not overreach, right?

China gives us an example of the consequences of unfettered “content moderation” under government control. From the New York Times a few days ago: “In recent years, China has shut tens of thousands of websites and social media accounts that contained what it said was illegal content as well as ‘vulgar’ and pornographic material.” Under government guidance, Shanghai-listed People.cn has become the go-to censor for Chinese companies that need to scrub images, texts, music, video, apps, games, advertisements and animations that do not meet government guidelines. According to another New York Times article last week, China is winnowing out online content that extols individualism: censors blur out men’s earrings and tattoos on sports players, and plop a ladybug cap on the pink hair of a Chinese pop star. Chinese censors also scrubbed at least ten scenes with gay references from Bohemian Rhapsody, according to another report last week.

It’s easy to shrug and say, oh, that’s China, we’re not like that. Initial efforts to censor the Internet in the US will be directed at terrorism and white nationalism, which will seem uncontroversial to most of us – but slippery slopes actually exist. As the ACLU says, “We should be very careful before we accept a world in which a few big companies can drive speakers off the internet at their discretion.” It’s hard to believe that the government will make smart choices about what to censor at a time when government is dysfunctional and run on behalf of large corporations.

Content moderation – censorship – is inevitable in the face of the online breakdown of free speech, but don’t make the mistake of believing that there are easy answers.
