
August 5, 2021

Prof Slobogin discussing Just Algorithms in guest posts

In this recent post, I flagged a notable new forthcoming book authored by Christopher Slobogin.  I asked Prof Slobogin if he might be interested in sharing some key ideas from this important book in a set of guest posts. He kindly agreed, and here is the first of the set.


Doug has graciously invited me to blog about my new book, Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk (Cambridge University Press). What follows is the first of three excerpts from the Preface (the next two coming soon):

Virtually every state has authorized the use of algorithms that purport to determine the recidivism risk posed by people who have been charged with or convicted of crime. Commonly called risk assessment instruments, or RAIs, these algorithms help judges figure out whether individuals who have been arrested should be released pending trial and whether a convicted offender should receive prison time or an enhanced sentence; they assist parole boards in determining whether to release a prisoner; and they aid correctional officials in deciding how offenders should be handled in prison.  Most of these algorithms consist of five to 15 risk factors associated with criminal history, age, and diagnosis, although an increasing number incorporate other demographic traits and psychological factors as well. Each of these risk factors correlates with a certain number of points that are usually added to compute a person’s risk score; the higher the score, the higher the risk.  Some tools may also aim at identifying needs, such as substance abuse treatment and vocational training, thought to be relevant to rehabilitative interventions that might reduce recidivism. This book will provide examples of a number of these instruments so that the reader can get a sense of their diversity and nuances.
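To make the point-based scoring described above concrete, here is a minimal sketch in Python. Every factor name, point value, and risk-band cutoff below is invented purely for illustration; it does not reproduce any actual instrument discussed in the book.

```python
# Toy illustration of a point-based risk assessment instrument (RAI).
# All factors, point values, and cutoffs are hypothetical, chosen only
# to show the additive structure: points for each factor present are
# summed, and a higher total means a higher assessed risk.

HYPOTHETICAL_POINTS = {
    "prior_convictions_2_plus": 3,
    "age_under_25": 2,
    "prior_failure_to_appear": 2,
    "substance_abuse_diagnosis": 1,
}

def risk_score(factors):
    """Sum the points for each risk factor present."""
    return sum(HYPOTHETICAL_POINTS[f] for f in factors)

def risk_category(score):
    """Map a raw score to a coarse risk band (cutoffs are invented)."""
    if score <= 2:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

score = risk_score(["prior_convictions_2_plus", "age_under_25"])
print(score, risk_category(score))  # 5 moderate
```

The additive form is what makes such tools relatively transparent: each factor's contribution to the total is visible, which is one reason the book argues they can be scrutinized in ways that unstructured human judgment cannot.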

One purpose of this book is to explain how risk algorithms might improve the criminal justice system.  If developed and used properly, RAIs can become a major tool of reform. They can help reduce the use of pretrial detention and prison and the length of prison sentences, without appreciably increasing, and perhaps even decreasing, the peril to the public (goals that are particularly pressing as COVID-19 ravages our penal facilities).  They can mitigate the excessively punitive bail and sentencing regimes that currently exist in most states. They can allocate correctional resources more efficiently and consistently.  And they can provide the springboard for evidence-based rehabilitative programs aimed at reducing recidivism. More broadly, by making criminal justice decision-making more transparent, these tools could force long overdue reexamination of the purposes of the criminal justice system and of the outcomes it should be trying to achieve.

Despite their potential advantages, the risk algorithms used in the criminal justice system today are highly controversial.  A common claim is that they are not good at what they purport to do, which is to identify who will offend and who will not, who will be responsive to rehabilitative efforts and who will not be.  But the tools are also maligned as racially biased, dehumanizing, and, for good measure, antithetical to the foundational principles of criminal justice.  A sampling of recent article and book titles makes the point: “Impoverished Algorithms: Misguided Governments, Flawed Technologies, and Social Control,” “Risk as a Proxy for Race: The Dangers of Risk Assessment,” “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor.” In 2019, over 110 civil rights groups signed a statement calling for an end to pretrial risk assessment instruments.  That same year 27 Ivy League and MIT academics stated that “technical problems” with risk assessment instruments “cannot be resolved.”  And in 2020 another group of 2323 scholars from a wide range of disciplines “demanded” that Springer publishing company, one of the largest purveyors of healthcare and behavioral science books and journals, “issue a statement condemning the use of criminal justice statistics to predict criminality” because of their unscientific nature.

A second purpose of this book is to explore these claims.  All of them have some basis in fact.  But they can easily be overblown.  And if the impact of these criticisms is to prevent the criminal justice system from using algorithms, a potentially valuable means of reform will be lost.  A key argument in favor of algorithms is comparative in nature.  While algorithms can be associated with a number of problems, alternative predictive techniques may well be much worse in each of these respects.  Unstructured decision-making by judges, parole officers, and mental health professionals is notoriously bad, biased, and reflexive, and often relies on stereotypes and generalizations that ignore the goals of the system. Algorithms can do better, at least if subject to certain constraints.

In posts to come, I will describe these constraints and how RAIs can be integrated into the criminal justice system.

August 5, 2021 at 11:55 PM | Permalink

