Predictive Policing Software Is More Accurate at Predicting Policing Than Predicting Crime
"Predictive policing" has an enticing ring to it. The idea is that you feed a bunch of data into a mysterious algorithm, and poof, out comes intelligence about the future that tells police where the next crime is going to occur, or even who is going to commit it. What's not to get excited about?
Unfortunately, many predictions made by policing software don't come true. This is because predictive tools are only as good as the data they are fed. Put another way: garbage in, garbage out.
Data collected by police is glaringly incomplete and too often undermined by racial bias. When you feed a predictive tool contaminated data, it will produce polluted predictions. In fact, it appears predictive policing software is more accurate at predicting policing than predicting crime. Rather than informing us where criminals will be and when they will commit crimes, these algorithms more reliably predict where the police will deploy.
The root of these problems is in the data. Predictive policing depends on historical crime data, and crime data is both incomplete (a large percentage of crime goes unreported or otherwise undetected) and racially skewed (take drug offenses, for example). It seems inescapable that the resulting predictions will be inaccurate and arbitrary.
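To see how skewed enforcement corrupts the data itself, consider a back-of-the-envelope sketch. All numbers here are invented purely for illustration, not real statistics: two groups offend at identical rates, but one is policed far more heavily, so the arrest records a predictive tool would train on show a disparity that does not exist in the underlying behavior.

```python
# Hypothetical numbers, chosen only to illustrate the mechanism:
# identical offense rates, unequal enforcement intensity.
population = {"group_1": 100_000, "group_2": 100_000}
offense_rate = 0.10                                 # same in both groups
enforcement = {"group_1": 0.02, "group_2": 0.08}    # chance an offense leads to arrest

for group, size in population.items():
    offenders = size * offense_rate
    arrests = offenders * enforcement[group]
    print(f"{group}: {offenders:.0f} offenders, {arrests:.0f} arrests")

# group_1: 10000 offenders, 200 arrests
# group_2: 10000 offenders, 800 arrests
```

A model trained on those arrest counts would "learn" a fourfold difference between the groups even though the offense rates are equal. The data records enforcement, not crime.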
Data woes aside, predictive policing can only be as helpful or harmful as the practices of the police department using it. If a police department prioritizes over-enforcement of low-level offenses over reducing communities' entanglement in the criminal justice system, if its mindset is characterized by militarized aggression, or if it dispenses with constitutional protections against unreasonable searches and seizures and racial profiling when inconvenient, then predictive tools will only compound the harm to communities and should not be used.
But police departments nationwide are interested in adopting these tools despite scant evidence of their reliability, with little public debate or transparency, and amid serious concerns about racial inequities. Those that do implement them are shelling out cash for predictive policing technology marketed to them by private companies.
The harmful consequences of relying on bad (and secret) predictions in the context of policing are significant, including increased profiling of individuals and communities, inefficient deployment of police resources, and deeper fissures between police and the communities they are entrusted to protect. Flagging individuals through predictive analytics also risks subjecting those individuals, and entire neighborhoods, to policing based on a presumption of guilt by association. Given the rising popularity of predictive policing in the absence of proof of its utility, the 老澳门开奖结果, along with several other civil rights and technology organizations, has released a shared statement of civil rights and liberties concerns.
Chief among these concerns is that predictive policing as currently deployed will harm rather than help communities of color. If there is one reliable prediction about our criminal justice system, it is that unwarranted racial disparities infect every stage of the criminal law process. Time and again, analyses of stops, frisks, searches, arrests, pretrial detentions, convictions, and sentences reveal differential treatment of people of color. From racial bias in stops and frisks in Boston and other cities, to unwarranted nationwide disparities in arrests of Black and white people for marijuana possession (despite comparable usage rates), to disparities in the enforcement of minor offenses in Minneapolis, Florida, and elsewhere, police will, as surely as the sun rises, continue to enforce laws selectively against communities of color.
The effect these disparities will have on predictive policing is, in fact, the most predictable part of predictive policing. Racially biased discretionary decisions will produce data points that the police feed into predictive tools, which will in turn produce predictions with those original racial disparities nested within them. As such, they will likely compound the crisis of unequal and unfair treatment of communities of color under the inveigling imprimatur of empiricism.
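The loop is easy to demonstrate. Here is a minimal sketch (a toy Polya-urn-style simulation with invented numbers, not any vendor's actual algorithm) in which two neighborhoods have identical true offense rates, but patrols are allocated in proportion to past recorded incidents, so an initial skew in the records perpetuates itself:

```python
import random

random.seed(1)

TRUE_RATE = 0.5                 # identical underlying offense rate in both places
recorded = {"A": 60, "B": 40}   # hypothetical historical records, already skewed

for day in range(1000):
    total = sum(recorded.values())
    # "Prediction": allocate today's ten patrols in proportion to past records
    patrols = {hood: round(10 * count / total) for hood, count in recorded.items()}
    for hood, n in patrols.items():
        # Officers can only record offenses where they are deployed, so new
        # records track deployment, not the (equal) true offense rates
        recorded[hood] += sum(random.random() < TRUE_RATE for _ in range(n))

print(recorded)  # the initial skew toward "A" persists and compounds
```

Each day's records justify the next day's deployment, so the tool ends up predicting its own output: where the police will be, not where crime will be.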
To paraphrase one of my 老澳门开奖结果 colleagues in Massachusetts, predictive policing may simply lend a technological gloss to discriminatory practices already in existence. That is why machine predictions as they exist today have no place in assessing whether the police have reasonable suspicion for a seizure under the Fourth Amendment. Even assuming accurate and unbiased outputs, a computer's judgment should never by itself establish reasonable suspicion for a stop.
We, including police departments, need to ask whether predictive policing will help lead to safer communities, fairer treatment, increased trust, greater transparency, and more public health interventions. We also need to ask whether computer-generated crime forecasts will lead to increased racial profiling, privacy infringements, greater surveillance of communities and individuals, and more violations of rights. Unfortunately, as our statement notes, "vital goals of policing, such as building community trust, eliminating the use of excessive force, and reducing other coercive tactics, are currently not measured and not accounted for by these systems." These omissions are too consequential to ignore.
It is essential that, before law enforcement deploys any new technology, there be a thorough public debate and a rigorous, independent assessment of the technology's statistical validity and community impact. Yet automated predictive software has not been subject to such in-depth scrutiny, and both the vendors of these products and the police departments using them have eschewed transparency. There is a dearth of independent studies unconnected to the companies that market the technology to police departments, and the few that have been done still raise significant questions about efficacy. At best, the jury is out on the utility of predictive policing.
This is not to suggest that police departments should not be looking to new technologies and sophisticated data analytics to increase effectiveness and efficiency and to improve community health and safety. Police should always be looking for ways to better serve local communities. But in this case, until police departments address the concerns voiced by the civil rights and technology communities, they should hit pause on predictive policing.
If they don't, and instead rush to use an unproven and potentially harmful tool, we can pretty much predict how things will turn out.