« "The Case for Pattern-and-Practice Investigations Against District Attorney’s Offices" | Main | Highlighting the persistent problems from the US's high recidivism rate »
August 11, 2021
Part 3 of Prof Slobogin discussing Just Algorithms
In a recent post, I explained that I had asked Prof Christopher Slobogin to share, in a set of guest posts, some key ideas from his new book, Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk. Here is the third and final post in this set (following the first and second).
--------
In two previous posts about my new book, Just Algorithms: Using Science to Reduce Incarceration and Inform a Jurisprudence of Risk (Cambridge University Press), I described its thesis that risk assessment instruments (RAIs) can reduce incarceration in a cost-effective manner, and the “jurisprudence of risk” it advances to ensure accurate and fair instruments that, among other things, avoid racially disparate outcomes. To take full advantage of risk assessment’s potential for curbing incarceration and rationalizing sentencing, however, we must also rethink our current punishment regime, which is another goal of the book.
Over the past 50 years, a large number of states have moved away from indeterminate sentences controlled by parole boards toward determinate sentencing, which shifts power to prosecutors, who can now essentially dictate the sentence received after trial through the charging decision. Most of the states that have not adopted determinate sentencing have effectively moved in the same direction by significantly circumscribing the authority of parole boards to make release decisions. These changes were understandable, given the dispositional disparities that occurred under indeterminate sentencing, the checkered history of parole boards, and the difficulty of assessing risk and rehabilitative potential.
With the advent of more accurate and objective predictive algorithms, however, indeterminate sentencing should be given a second chance. More specifically, while judges should still impose a sentence range determined by desert, risk-needs algorithms should be instrumental in determining whether offenders who are imprisoned stay there beyond the minimum term of that sentence. Sentencing would no longer be based on convoluted front-end calculations that attempt to divine the precise culpability of the offender, tempered or enhanced by the prosecutor’s or the judge’s speculative intuitions about deterrence, risk or rehabilitative goals. Rather, after the judge imposes the retributively defined sentence range based on the charge of conviction, offenders would serve the minimum sentence (which for misdemeanors and lower-level felonies may not involve prison), and would be subject to prolonged restraint only if they are determined to be high risk via a validated RAI. In this form of limiting retributivism, desert would set the range of the sentence, and risk its nature and duration.
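To make that decision rule concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from the book; the data structure, function names, and numbers are my own assumptions, intended only to show how a desert-based range could cap a risk-driven release decision.

from dataclasses import dataclass
from typing import Optional


@dataclass
class SentenceRange:
    """Desert-based range imposed by the judge at sentencing (illustrative only)."""
    minimum_months: int   # retributive floor; every offender serves at least this
    maximum_months: int   # retributive ceiling; confinement can never exceed this


def months_to_serve(desert: SentenceRange,
                    rai_high_risk: bool,
                    board_decision_months: Optional[int] = None) -> int:
    """Hypothetical sketch of the limiting-retributivism rule described above.

    Desert sets the range; a validated RAI presumptively decides whether the
    offender is held past the minimum; any parole-board decision is capped by
    the retributive maximum.
    """
    if not rai_high_risk:
        # Low-risk offenders are released at the desert-based minimum term.
        return desert.minimum_months

    # High-risk offenders may be held longer, but never beyond the desert ceiling.
    proposed = board_decision_months if board_decision_months is not None else desert.maximum_months
    return max(desert.minimum_months, min(proposed, desert.maximum_months))


# Example with made-up numbers: a 12-to-60-month range for a hypothetical offense.
if __name__ == "__main__":
    rng = SentenceRange(minimum_months=12, maximum_months=60)
    print(months_to_serve(rng, rai_high_risk=False))                           # 12
    print(months_to_serve(rng, rai_high_risk=True, board_decision_months=36))  # 36
    print(months_to_serve(rng, rai_high_risk=True, board_decision_months=90))  # 60 (capped by desert)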
With this type of sentencing system, not only would the arbitrariness of the old parole-driven scheme be reduced, but the power structure within the criminal justice system would be profitably re-oriented. Today, the plea bargaining process allows prosecutors to threaten draconian sentences that bludgeon defendants, even innocent ones, into accepting convictions without trial. If, instead, post-trial dispositions within the sentence range depended on a parole board’s determination of risks and needs, the ultimate disposition after a trial would be unknowable, and prosecutorial bargaining power would inevitably be reduced. Defendants could turn down prosecutorial offers with virtual impunity if they are considered low risk. And even high-risk defendants might want to roll the dice with the parole board. Innocent people would be much less likely to plead guilty, and guilty people would be much less likely to acquiesce to harshly punitive bargains. The prosecutor’s main leverage would come from offers of reduced charges or alternatives to prison, because with parole boards controlling release, threats to recommend the maximum sentence to the judge would be meaningless.
These proposals may appear to be radical. But in fact they merely reinstate a version of the sentencing regimes that existed in much of this country before the middle of the twentieth century, when dispositions were more flexible and plea bargaining and guilty pleas were less dominant. At the same time, a key difference in these proposals, and the primary reason rejuvenating indeterminate sentencing is justifiable, is the reliance on risk assessment algorithms. Without them, judges and parole boards are simply guessing about dangerousness, and their default judgment — absent heroic efforts to resist public pressure and normal human risk-averseness — will be to find that offenders pose a high risk of reoffending. With them — and assuming their results are treated as presumptive — judges who refuse to imprison an offender and parole boards that make a release decision can point to known base rates (which, in the case of violent crime, are very low) and can blame the algorithm if things go awry.
The overarching hypothesis of this book is this: whether implemented prior to trial in lieu of the bail system, or post-conviction in lieu of unstructured predictive decision-making, just algorithms can be a central component of any effort to reduce the human and financial cost of incarceration without sacrificing public safety. That hypothesis may be wrong, but it is worth a fair test, because when developed and used in a manner consistent with a coherent jurisprudence of risk, algorithms could be the single most potent mechanism we have for bringing about real reform of the American criminal justice system.
I want to thank Doug Berman again for letting me describe my book on his Sentencing Law & Policy Blog.