Is Censorship Ever a Good Thing?
In late 2018, Pinterest temporarily blocked search results for vaccination-related content on both pins and boards. The move was a stopgap to prevent the spread of misinformation from the anti-vaccination movement on the social media platform. It wasn’t the first time Pinterest had acted against misinformation campaigns. In a 2016 study published in Vaccine, researchers from Virginia Commonwealth University found that 75 percent of pins related to vaccination were negative toward vaccination protocols. In response, Pinterest updated its community guidelines in 2017 to allow banning or blocking both users and pins that spread false health information.
The move caused controversy, particularly after accounts from popular alternative “health news sites” such as Mercola.com and Health Nut News—both of which publish anti-vax and other health conspiracy theories—were removed from Pinterest. In 2018, the social media site sparked controversy again when the wider ban on vaccination searches highlighted the issue of censorship in social media.
Pinterest isn’t alone in taking steps to curb the growth of this epidemic of fake information. Amazon, for example, has started to remove books that promote fake autism cures from its online marketplace. Facebook is also changing the way it populates news feeds, actively working to de-prioritize vaccine hoaxes. And YouTube has removed ads from a selection of anti-vax videos, depriving hoaxers of a revenue stream. The streaming platform has also stopped recommending conspiracy or misinformation-spreading videos in its suggested-videos sidebar. But these moves have raised the question of social media’s responsibility in suppressing fake news—and where the rules on censorship lie.
The internet is awash in fake news
Free speech proponents have long held that any form of censorship is bad. But recent history, such as the wave of censorship attempts in the late ’90s and early 2000s, shows that censoring for the public good is nothing new. One such controversy involved the World Wrestling Federation’s (now WWE) show “Raw,” which was frequently the target of censorship attempts on so-called moral grounds. The TV show reached its highest audience numbers to date in 1999, with 6.2 million homes tuning in. For comparison, in 2019 there will be an estimated 2.77 billion social media users.
News, particularly fake news, also travels fast along social media channels. According to a study published in Science, fake news and false rumors reach more people, become embedded more deeply into social network sites (in particular, Twitter), and spread faster than accurate stories.
Trouble is, while the ’90s-era censors thought TV shows, video games, and explicit music could hurt children, much of today’s misinformation actually does. After nearly 40 years of vaccinations, the CDC declared measles eliminated in the United States in 2000, meaning the disease was no longer continuously transmitted here. This was an exciting milestone: before vaccinations started in 1963, about 3 to 4 million people in the nation contracted measles every year, and as many as 500 of those infected died annually. Although anti-vaccination debates have been around almost as long as vaccination itself, the proliferation of information on the internet, as well as the ease of accessing it, has spread misinformation about the supposed perils of vaccination and helped this once-fringe movement gain strength. The result: more than 700 cases of measles have been reported in 22 states in 2019, threatening to once again take the lives of children, the elderly, and the infirm.
The anti-vax contingent isn’t the only group that has gained traction with its ideas thanks to the Wild West playground of social media. For example, the almost-deadly Pizzagate conspiracy falsely connected a Washington, D.C., pizza parlor to a human trafficking operation, leading a civilian to fire a gun inside the restaurant in a misguided attempt to “save” the alleged victims. And if you heard the “OK” sign was a white supremacist symbol, you can thank 4chan, which spread the falsehood that the once-friendly hand gesture was a trolling tool of the far right.
Science (sort of) endorses censorship
To date, no sites have embraced outright censorship—but in a sense, that’s exactly what science suggests we should do to slow the growth of misinformation. “The data says that mere exposure to conspiracy thinking leads you to potentially believe the conspiracy,” says Dr. Steven Smallpage, assistant professor of political science at Stetson University in DeLand, Florida. Smallpage researches what motivates people to believe conspiracies, and he observes that, “Believers always say that they were just on YouTube and accidentally stumbled across a flat Earth video. They were bored, watched more videos, and soon enough, they’re converts. There’s a certain power there in just being exposed to these ideas.”
In other words, malicious and misleading information can be infectious. Once someone has entered the fake-news rabbit hole, it’s hard to leave. Believers often double down on their conspiracy belief system in the face of contradictory evidence, partly because conspiracies are not falsifiable: it’s impossible to prove (to a believer’s satisfaction) that these ideas are false. For evidence, talk to any Moon hoaxer about how NASA supposedly persuaded 400,000 aerospace industry employees, along with the Soviet Union, to go along with faking the American Moon landing.
The bottom line: experts like Smallpage believe the only reliable way to stop conspiracy theories is to prevent exposure to them in the first place.
Censorship and the First Amendment
But just because some believe censorship has a demonstrable public benefit doesn’t mean we should necessarily impose it. And then there’s the other issue: Can we legally censor others?
That depends upon who the “we” is. The First Amendment asserts, “Congress shall make no law…abridging the freedom of speech, or of the press.” But it places no such restrictions on private businesses such as social media sites. While these sites are bound by their terms of service, the reality is that online social media platforms practice a form of censorship every day. Facebook, for example, decides what you see in your feed, prioritizing posts based on criteria such as how often you interact with a friend and how many likes or comments a post has gotten. So, if Facebook is already opting to prioritize certain kinds of information, then it’s not unreasonable to envision an algorithm that never displays, or outright deletes, certain kinds of content.
Then there’s a murkier issue: social media sites are privately owned but publicly used. The law on social media content is currently fairly undefined; it’s clear that social media sites provide a public platform, but because they’re run by private businesses, how censorship rules apply to them remains a gray area. A lawsuit against Twitter for alleged censorship, brought by alt-right activist Chuck Johnson, was dismissed by a state court, which ruled that Twitter has the “right to exercise independent editorial control over the content on its platform.” But the issue’s far from decided: A bill making its way through the Texas legislature would, if signed into law, allow social media users to sue platforms for restricting speech based on personal opinions.
The right to exercise editorial control can also be a slippery slope, especially when social media sites rely on algorithms without human intervention to decide what does and doesn’t appear in your news feed. For example, the Philadelphia Museum of Art’s Facebook account was automatically flagged back in 2016 because it posted Belgian artist Evelyne Axell’s 1964 painting “Ice Cream,” which depicts a woman licking an ice cream cone. The algorithm considered it sexually suggestive.
That incident was unintentional, but what happens when unscrupulous people game the engines of censorship? And if we get used to censorship in some areas, how will that chip away at our resistance to censorship supported by the government? “In Latin America, campaigns against misinformation have been used as an excuse to silence critics,” said Corynne McSherry, legal director at the Electronic Frontier Foundation, in a recent Intelligence Squared debate on free speech and social media. Moreover, she warns, “the government of France is embedding someone at Facebook to help them make decisions about what content should stay up and take down.”
It’s a real concern: With social media as one of the leading providers of news for Americans, decisions made by platforms on what posts get promoted or seen have the power to influence or shape beliefs that in turn lead to actions, including the decision not to vaccinate children or whom to vote for in the next election. But do we want to live in a country where online services have broad, sweeping authority to restrict speech? Stopping the spread of misinformation is a worthwhile goal, but when does that encroach upon free speech and the expression of viewpoints? And where does it stop? What topics should be open to censors’ review, and who decides that? For the marketplace of ideas to remain truly open, a balance must be reached between personal expression and ensuring that users have access to accurate information.