
August 8, 2021

Part 2 of Prof Slobogin discussing Just Algorithms

In this recent post, I explained that I asked Prof Christopher Slobogin to share in a set of guest posts some key ideas from his new book, Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk.  Here is the second of the set.

--------

In a previous blog post on my new book Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk (Cambridge University Press), I made the case for using risk assessment instruments (RAIs) in the pretrial and post-conviction process as a means of reducing incarceration by providing more accurate and cost-effective assessments of the risk of reoffending.  But not all risk assessment instruments are created equal.  Although algorithms, on average, are superior to unstructured judgment when it comes to prediction, many are seriously defective in a number of respects.  A major goal of my book is to provide a set of principles meant to govern the development of these instruments and to guide judges and other criminal justice actors in determining which measures to use and for what purposes.  Influenced by insights gleaned from the algorithms themselves, it advances, in short, a much-needed “jurisprudence of risk” analogous to the jurisprudence of criminal liability that has long governed the definition of crimes and the scope of punishment.

The first part of the book’s title points to one important aspect of this jurisprudence.  If the recommendations in this book are followed, the usual approach taken by legal decision-makers — which is to treat the algorithmic forecast as simply one factor relevant to risk assessment — would generally be impermissible.  “Adjusting” the results of a well-validated RAI, based on instincts and experience, defeats the purpose of using an RAI, especially when the decision-makers’ intuition about risk is based on factors that have already been considered in the tool.  Incorporating human judgment into the risk assessment will usually make matters worse when the RAI meets the basic requirements outlined in this book.  This notion is one meaning — the literal one — of “Just Algorithms.”

The second meaning of that title is even more important.  Properly cabined, predictive algorithms can be just.  Numerous writers have argued to the contrary, pointing in particular to racial disparities among those who are identified as high risk.  But even if such disparities do exist, they do not necessarily make risk algorithms unjust.  This book takes the position that the fairest approach to evaluating risk is to treat people who are of equal risk equally.  The primary goal of an RAI should be to identify accurately those who are high risk and those who are low risk, regardless of color, even if that means that a greater percentage of people of color are identified as high risk.  At the same time, it must be recognized that assessments of risk may be inaccurate if the influence of racialized policing and prosecutorial practices on the validity of assessment instruments is not taken into account.

Another, related complaint about predictive algorithms — one that has special salience at sentencing and in other post-conviction settings — is that punishment should never be based on conduct that has not yet occurred, both because of the uncertainty of prediction and because of its insult to human dignity and autonomy.  The point this book makes on this score is, again, a comparative one.  The primary competitor to sentencing that considers risk is a purely retributive system — one that relies solely on backward-looking assessments of criminal conduct and the mental states that accompany it.  But such a system is rife with speculative claims about just desert, and can be remarkably inattentive to the impact of mitigating human foibles.  Properly regulated algorithmic risk assessments, in contrast, can differentiate high- and low-risk offenders at least as reliably as judges and juries can calibrate culpability, and can do so without abandoning condemnation based on blame, especially if sentence ranges are still based on retributive principles.  The needs part of the assessment can also facilitate the identification of autonomy-affirming and dignity-enhancing treatment programs that help offenders help themselves.

To realize the full potential of RAIs in the sentencing setting, however, the current fixation on determinate sentencing needs to be rethought.  That is the subject of my third and final blog, soon to come.

Prior related post:

