Buried on page A25 of Thursday's New York Times is a tiny story on what's likely to become a big problem after the recent horrific mass shooting. According to the report, top intelligence officials in the New York City Police Department met on Thursday to explore ways to identify "deranged" shooters before any attack. One of these tactics would involve "creating an algorithm" to identify keywords in online public sources indicative of an impending incident. In other words, they seek to build an algorithm that constantly monitors Facebook and Twitter for terms like "shoot" or "kill."
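At its simplest, the system being described is a keyword filter run over a stream of public posts. Here is a minimal sketch of that kind of dragnet; the watchlist and sample posts are invented for illustration, and nothing here reflects the NYPD's actual (unpublished) design:

```python
# Hypothetical sketch of a keyword dragnet like the one described above.
# The watchlist and the sample posts are made up for illustration.

WATCHLIST = {"shoot", "kill"}

def flag_post(post: str) -> bool:
    """Return True if any watchlist term appears anywhere in the post."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not WATCHLIST.isdisjoint(words)

posts = [
    "Gonna shoot some hoops after work",   # basketball, not violence
    "This traffic is going to kill me",    # figure of speech
    "Loved the photo shoot today!",        # photography
]

flagged = [p for p in posts if flag_post(p)]
print(len(flagged), "of", len(posts), "posts flagged")  # all three
```

Note that every one of these innocuous posts trips the filter, which is exactly the false-positive problem discussed below.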
This is not a new idea. It's part of what the defense and intelligence communities call "open source intelligence," or OSINT. And it can raise serious First Amendment concerns, especially when it's used domestically and when it involves automated data mining by law enforcement agencies like the NYPD.
At the outset, it's important to understand exactly what we're talking about here. This is not a case of police receiving a tip that someone is posting public threats. Nor is it even a police officer taking it upon herself to scour social media for leads. Here, we're talking about a computer at the NYPD automatically reading every post on a social networking site and flagging entries with certain words for police scrutiny. This raises numerous constitutional concerns, many obvious and some less so.
First, even when you're talking about relatively sophisticated algorithms that are able, for instance, to distinguish between different uses of a word (like "shoot" with a basketball versus a gun), you're going to get a vast universe of false positives. Additionally, you're also going to get "true" positives that are really false alarms: people making dumb threats on their Facebook pages as, for instance, a joke. To the extent these are "true threats" directed at an individual, they receive lesser First Amendment protection, but true threats are going to be a small subset of the vast amount of idiotic trolling that happens on social media on a daily basis. Sorting through this flood presents an insurmountable administrative burden, not to mention that the digital dragnet will ensnare numerous innocent people.
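The scale of the problem can be made concrete with back-of-the-envelope base-rate math. Every number below is hypothetical, but the ratio is the point: even a remarkably accurate classifier, applied to hundreds of millions of daily posts containing essentially zero genuine attack announcements, buries any true signal under false alarms.

```python
# Base-rate illustration. All figures are assumed for the sake of argument.

daily_posts = 500_000_000        # public posts scanned per day (assumed)
true_threats = 1                 # genuine announcements per day (generous)
false_positive_rate = 0.001      # classifier flags 0.1% of innocent posts
true_positive_rate = 0.99        # classifier catches 99% of real threats

false_alarms = (daily_posts - true_threats) * false_positive_rate
real_hits = true_threats * true_positive_rate

print(f"False alarms per day: {false_alarms:,.0f}")
print(f"Odds a flagged post is a real threat: 1 in {false_alarms / real_hits:,.0f}")
```

Under these assumptions the system generates roughly half a million false alarms a day for, at best, a single real hit, and no police department can investigate those odds.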
Second, and aside from these practical concerns, we have a First Amendment right to be free from government monitoring, even when engaged in public activity. Just because an anti-war group meets in a church that is open to the public doesn't mean the FBI should be able to spy on them. The same principle applies in the digital ether. The government should need a good reason, specific to a person, before it can go and monitor that person's activity. Why? Because if we fear that one peaceful protest is being monitored, we fear they all will be. And people who would otherwise engage in lawful protest won't. It puts a big wet blanket on political discourse.
Third, and my colleague Mike German gets credit for this insight: when somebody gets snagged by these dragnets, it's very difficult to clear the "cloud of suspicion." Consider Richard Jewell, the late security guard who was initially praised as a hero after the 1996 Atlanta Olympics bombing and then became the prime suspect based, in part, on statements he made to the press. FBI agents, working under the profile of a "lone bomber" who planted the device only in order to heroically find it, reviewed Jewell's television appearances and believed they matched. Although the investigation smacks of confirmation bias (agents seeing what they wanted to see), Jewell had great trouble escaping the cloud of suspicion. With an algorithm tracking everyone's public statements on social media, take that problem and multiply it many-fold.
Finally, there is the very obvious problem that authorities are unlikely to uncover legitimately probative evidence of an impending shooting through automated OSINT. Put another way, it's exceedingly rare (I'm not aware of a single case) for a mass murderer to clearly announce his or her intention beforehand on YouTube, Facebook or Twitter. Rather, automated OSINT will likely start zeroing in, as indicative of dangerous intent, on indications of mental instability, extreme political views or just weird thoughts. These all qualify as constitutionally protected speech, and, indeed, political speech is often said to receive the highest level of First Amendment protection. (I would say that to the extent an individual does take to a Facebook wall to issue a credible threat, that should of course be reported.)
All of this is to say that automated OSINT, in addition to being constitutionally problematic, just won't work. It will divert limited law enforcement resources; focus investigative activity on quacks more than dangerous individuals; and increase the risk that police will miss the true threat, made in private to a trusted confidante, which does deserve swift action to protect the public. It's a bad idea.