
What if Algorithms Worked For Accused People, Instead of Against Them?

A prisoner holding his extended and clasped hands between the bars of his cell.
We created an algorithmic tool to find out what risks the criminal legal system poses to the people entering it, rather than their risk to "public safety."
Aaron Horowitz,
Former Head of Analytics,
ACLU
Kristian Lum,
Assistant Research Professor, University of Pennsylvania
Erica Marshall,
Executive Director, Idaho Justice Project
Mikaela Meyer,
Ph.D. Candidate, Carnegie Mellon University
June 2, 2022

Throughout the U.S., judges, prosecutors, and parole boards are given algorithms to guide life-altering decisions about the liberty of the people before them, based mainly on perceived risks to "public safety." At the same time, people accused and convicted of crimes are given little support. With underfunded public defense in most of these contexts, and no right to counsel in others (e.g., in parole decisions), the system is stacked against them. We wanted to find out what would happen if we flipped the script and used algorithms to benefit people entangled in the legal system, rather than those who wield power against them.

In a recently published paper, the ACLU and collaborators at Carnegie Mellon University and the University of Pennsylvania asked a simple question: Can one predict the risk the criminal justice system poses to the people accused by it, instead of the risks posed by the people themselves?

The answer seems to be yes, and the process of creating a tool like this helps lay bare broader issues in the logic of existing risk assessment tools. While traditional risk assessment tools consider risks to the public such as the likelihood of reoffending, the criminal legal system itself poses a host of risks to the people ensnared in it, many of which extend to their families and communities and have long-term repercussions. These include being denied pretrial release, receiving a sentence disproportionately lengthy for the given conviction, being wrongfully convicted, being saddled with a record that makes it impossible to obtain housing or employment, and more.

The prototype risk assessment instrument we created predicts the risk that a person accused of a federal crime will receive an especially lengthy sentence, based on factors that should be legally irrelevant to sentence length, such as the accused person's race or the political party of the president who appointed the judge. The instrument performs comparably to risk assessment instruments used in criminal justice settings, with predictive accuracy that matches or exceeds that of many tools deployed across the country. Still, that doesn't mean this tool, or any of the existing tools in use, is necessarily good or makes people's lives better; it means only that the tool meets existing validation standards.
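To make the setup concrete, here is a minimal sketch of the kind of model described above: a classifier trained on federal sentencing records to estimate the probability of an especially lengthy sentence. The file name, column names, and choice of logistic regression are illustrative assumptions, not the paper's actual pipeline.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data file; the paper's actual data source differs.
cases = pd.read_csv("federal_sentencing_cases.csv")

# Legally irrelevant factors, one-hot encoded (column names are illustrative).
features = pd.get_dummies(
    cases[["defendant_race", "judge_appointing_party", "district"]]
)

# Label: 1 if the sentence exceeds a chosen "excessively long" cutoff.
# That cutoff is itself a policy choice, a point this piece returns to below.
labels = (cases["sentence_months"] > cases["guideline_upper_months"]).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC is the validation metric most deployed risk tools are judged by.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```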

We chose to model the risk of lengthy sentences among people prosecuted federally for several reasons. The most practical is simply that the data existed. In many criminal justice settings, the information advocates and researchers need most is not collected, or is collected poorly, such as the details of the plea bargains that make up roughly 95 percent of convictions. Lengthy sentences are also a particularly pernicious problem in the U.S., which imposes far longer sentences than most other countries: Norway caps sentences for most crimes at 21 years, Portugal at 25. Excessively long sentences are widely cited as a driver of mass incarceration, and substantial evidence suggests longer sentences can have a negligible or even negative effect on rehabilitation, the ostensible goal of sending people to prison. Evidence also suggests that lengthy sentences do not deter crime. Finally, the influence of legally irrelevant factors on sentencing decisions is well documented: the U.S. Sentencing Commission concluded that similarly situated Black men received, on average, sentences 19.1 percent longer than their white counterparts.

The process of creating this tool illuminated the choices embedded in the creation of other tools frequently used in parole and pretrial settings. For instance, we set multiple thresholds, such as the definition of an excessively long sentence and the probability boundary above which we considered someone especially likely to receive one. These are policy choices, much like the Bureau of Prisons' decision to shift the thresholds on its own tool to reduce the number of medically vulnerable incarcerated people eligible for release during the pandemic, or ICE's adjustment of its risk assessment tool so that it recommended detention in nearly every case. In short, any time a tool is made, it is likely to center the viewpoint of its creators and create a new policy, which should make us wary of these tools and their applications.
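To illustrate how much a threshold matters, the short continuation below (reusing `model` and `X_test` from the earlier sketch) shows how shifting the probability cutoff changes how many cases get flagged. The cutoffs are arbitrary examples, not values from the paper.

```python
# Continuing the sketch above: the cutoff that turns a probability into a
# "high risk" flag is a policy choice. Moving it changes who gets flagged.
scores = model.predict_proba(X_test)[:, 1]

for cutoff in (0.3, 0.5, 0.7):
    share_flagged = (scores >= cutoff).mean()
    print(f"cutoff={cutoff}: {share_flagged:.1%} of cases flagged as high risk")
```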

The models we built, unlike existing risk assessment instruments, are designed to aid public defenders and accused people rather than prosecutors and judges. Providing public defenders with a defendant's risk of receiving a severe sentence, or with a measure of how far their case sits from other similar cases, could help them make informed decisions when navigating sentencing proceedings and plea bargaining.
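One hypothetical way to surface "similar cases" for a defender, continuing the same sketch, is a nearest-neighbors lookup over case features. The paper does not prescribe this interface; it is only a sketch of the idea.

```python
# Continuing the sketch above: find the five past cases most similar to a
# new case and summarize their sentences, for context in plea negotiations.
from sklearn.neighbors import NearestNeighbors

nn = NearestNeighbors(n_neighbors=5).fit(X_train)
_, neighbor_idx = nn.kneighbors(X_test.iloc[[0]])

similar = cases.loc[X_train.index[neighbor_idx[0]], "sentence_months"]
print(similar.describe())
```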

There are other possible applications as well. The recently enacted First Step Act enabled incarcerated people, for the first time in history, to directly file motions with the court seeking a sentence reduction where "extraordinary and compelling" circumstances warrant one. Since then, federal district courts across the country have granted thousands of such motions where the personal history of the defendant, the underlying offense, the original sentence, the disparity created by any changes in the law, and other factors warrant relief. With our models, defendants could point to how far their sentence deviates from what the model would predict based on the characteristics of their case, including how their case might be resolved today versus when they were originally sentenced. Petitioners could also point to the legally irrelevant factors that may have influenced their sentencing.
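As a sketch of how a petitioner might quantify that deviation, the continuation below fits an illustrative regression for sentence length and reports the gap between an actual sentence and the model's prediction. The model choice and the 240-month figure are hypothetical assumptions, not the paper's method.

```python
# Continuing the sketch above: predict sentence length from case features,
# then measure how far an actual sentence deviates from that prediction.
from sklearn.ensemble import GradientBoostingRegressor

sentence_model = GradientBoostingRegressor(random_state=0).fit(
    X_train, cases.loc[X_train.index, "sentence_months"]
)

def deviation_in_months(case_features, actual_months):
    """Months by which the actual sentence exceeds the model's prediction."""
    return actual_months - sentence_model.predict(case_features)[0]

# Example: a hypothetical petitioner sentenced to 240 months could cite
# the gap in a First Step Act sentence-reduction motion.
gap = deviation_in_months(X_test.iloc[[0]], actual_months=240)
print(f"Sentence exceeds the model's prediction by {gap:.0f} months")
```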

The ACLU and our coalition partners have been pushing the Biden administration to use the presidential power of clemency. Though President Biden recently commuted the sentences of 75 people, he has not used this power systematically. Clemency is mostly reserved for specific high-profile cases, and tends to provide relief to people sentenced under now-defunct criminal laws or for charges now deemed overly punitive (e.g., nonviolent drug offenses). These categories exclude many federally incarcerated people from even a slim possibility of mercy. We built the model so that it can flag unreasonably long sentences even for people left out of many criminal justice reforms and clemency actions, such as those sentenced for violent crimes.

We expect objections to the use of this model on the basis of technical limitations, the acceptability of using an algorithm in such high-stakes decisions, or the subjectivity of the choices we made. But those who raise such concerns would do well to apply them equally to the tools already used throughout the system.
