March 5, 2022

"Algorithm v. Algorithm"

The title of this post is the title of this new Duke Law Journal article authored by Cary Coglianese and Alicia Lai. Though not discussing sentencing at length, regular readers know of the many ways the algorithm debate has purchase for the criminal justice system. (In addition, the title of the article reminded me of a cartoon from my youth noted here.) Here is the abstract:

Critics raise alarm bells about governmental use of digital algorithms, charging that they are too complex, inscrutable, and prone to bias.  A realistic assessment of digital algorithms, though, must acknowledge that government is already driven by algorithms of arguably greater complexity and potential for abuse: the algorithms implicit in human decision-making.  The human brain operates algorithmically through complex neural networks.  And when humans make collective decisions, they operate via algorithms too—those reflected in legislative, judicial, and administrative processes.

Yet these human algorithms undeniably fail and are far from transparent.  On an individual level, human decision-making suffers from memory limitations, fatigue, cognitive biases, and racial prejudices, among other problems. On an organizational level, humans succumb to groupthink and free riding, along with other collective dysfunctionalities. As a result, human decisions will in some cases prove far more problematic than their digital counterparts.  Digital algorithms, such as machine learning, can improve governmental performance by facilitating outcomes that are more accurate, timely, and consistent. 

Still, when deciding whether to deploy digital algorithms to perform tasks currently completed by humans, public officials should proceed with care on a case-by-case basis.  They should consider both whether a particular use would satisfy the basic preconditions for successful machine learning and whether it would in fact lead to demonstrable improvements over the status quo.  The question about the future of public administration is not whether digital algorithms are perfect.  Rather, it is a question about what will work better: human algorithms or digital ones.

March 5, 2022 at 12:36 PM | Permalink

Comments

I am reminded of when I was a high school student doing debate back when dinosaurs roamed the earth. One year, the topic chosen by the powers that be was criminal procedure (the resolution was something like the U.S. should establish uniform procedures). Because the death penalty would be a frequent topic, preparation required reading a lot of articles about the pros and cons of the death penalty. This was my first introduction to econometrics and what I learned then (and would relearn several years later in my college statistics class) is how hard it is to isolate the significance of any one factor in the level of crime that we have.

Ultimately, that is what algorithms try to do in the criminal justice system -- look and see what factors tell us whether a person is likely to reoffend (whether on bond, on parole, or after completion of sentence). And to be worthy of confidence, the formula has to be public -- both so that its accuracy can be measured and so that -- in the individual case -- the parties can litigate which factors apply to this defendant.

Furthermore, in a democratic society based on the concept of equal opportunity, those factors have to be perceived as fair and equitable. Regardless of whether statistics told us that race, income, or gender were associated with the likelihood of committing an offense, such factors would be rejected by the public. And it is just as unacceptable to the public if the factors that are used are viewed as a proxy for such categories, or if the factors are either things with no apparent connection to crime (how many books have you read this year?) or tread on some other valued activity (how many guns do you own?).

The advantage of a mathematical algorithm is that it is clearly objective and takes out the personal irrationalities of the decision-maker. But facially neutral rules (both the rich and the poor can't sleep on the banks of the River Seine) are not always actually neutral, and in assessing risk there will always be false negatives and false positives.

Posted by: tmm | Mar 7, 2022 5:25:12 PM
