Turning Tech Companies Into Spies Won't Work

Lee Rowland,
Policy Director,
NYCLU
December 11, 2015

This was originally posted at .

Imagine that before engaging in an attack, a terrorist sent an anonymous handwritten note to a local newspaper. Would politicians scramble to demand that we burn all paper, ban all anonymous mail or install government cameras in every newsroom? Of course not.

Yet following the revelation that one of the San Bernardino murderers pledged support for ISIS on social media, we are seeing a renewed flurry of politicians, both established and aspiring, calling for increased censorship of the Internet. Proposals include demands that companies scrub their platforms of terrorism-related content and a federal bill that would turn social media companies into state-mandated reporters of "terrorist activity."

These responses are deeply misguided. A good two-step framework for evaluating any policy proposal is to ask: 1) is it consistent with our laws and values, and 2) is it effective in achieving its goals? Applied to censorship of online speech, the answer to both questions is an emphatic no.

First, pure censorship of speech is just a very bad idea. Whether it's labeled "hate speech" or "terrorist speech," silencing speech that is not itself illegal cuts directly against our free speech values (only a very narrow and carefully defined band of speech is itself illegal, like threats, incitement to violence or child pornography). There's a reason that our First Amendment protects even the vilest speech. It's not just lawyerly paranoia about slippery slopes; it's because transparency itself has immense value. Censorship makes censored speech all the more dangerous because we lose our most powerful tool in combating evil ideas: the ability to identify them and respond with better ideas.

Mandating that tech companies report on "terrorist activities," which Sen. Dianne Feinstein (D-Calif.) proposed in a bill she introduced Tuesday, is a flawed idea as well. Social media companies should and do notify the government if they learn that a user is threatening immediate violence. But we should be wary of proposals that go beyond that. Perhaps the most obvious reason is the simplest: online service providers are not experts on terrorism. They're businesses, not intelligence agencies. And there's no magical bright line that separates "good" from "bad" speech, no mystical algorithm possessed by Facebook to figure out exactly what speech (speech that is not already illegal) should be reported to the state.

Let's say you work on the front lines of Facebook's censorship team. Which of these do you report to the feds: someone who retweets ISIS, someone who writes that they sympathize with ISIS's foreign policy goals, someone who "likes" an ISIS Facebook page? How about a page dedicated to a mass murderer? Does it matter whether that murderer's name is James Holmes or Abu Bakr al-Baghdadi? These are not simple decisions. Imagine the pressure of a government mandate on Facebook's censors. There's only one option, really: report it all. Asking non-experts to help build a massive and meaningless haystack of offensive speech isn't a great counter-terrorism strategy if we ever need to find a needle in a hurry.

More important, it would be terrible for political speech. Speech supporting terrorism lives across a razor-thin margin from speech about terrorism, foreign policy, drones, the Middle East and Islam. The idea that speaking to such controversial and important policy issues might get you swept into a government dragnet would be enough to chill many from engaging in the sort of speech that's at the heart of the First Amendment.

These concerns aren't merely theoretical. Private companies have a history of censoring speech for reasons that turn out to be misguided. For example, Apple voluntarily blocked applications that permitted users to identify sites of U.S. drone strikes for including "objectionable material." Drone strikes are certainly objectionable. But providing information about our own government's actions is emphatically not; our democracy functions best with public oversight and accountability. Apple's decision wrongly cut off access to information critical to a foreign policy debate of immense public concern.

Finally, censoring and monitoring social media speech simply isn't an effective remedy for radicalization. Social media companies are not omniscient; they do not and cannot monitor every bit of speech that is posted on their networks. Even when social media companies are determined to shut down particular speakers or accounts, those speakers can often circumvent the rules and open new accounts faster than companies can keep up. Shutting down the accounts of terrorist organizations would actually deprive the government of an important source of intelligence. And it would certainly deprive Americans of the ability to see and challenge the views of terrorist organizations, without any demonstrable upshot.

It is a grave mistake to expect or require private social media companies to act as arms of the national security state, just as it is a grave mistake to scapegoat the Internet for laying bare the darkest thoughts of the soul. The Internet is made up of ones and zeros. It does not organically create hate. The hate is, sadly, in our human minds and human hearts. Denying that won't get us anywhere in the battle to win those hearts and minds, but sunlight will. For hundreds of years, we've been a nation determined to stay safe and free, to hew closely to our values even in times of war, fear, and terror.

So, politicians: don't blame the medium for the message. Especially when the medium, the Internet, may be the greatest asset we have in the fight against terror.
