
April 24, 2019

"How to Argue with an Algorithm: Lessons from the COMPAS ProPublica Debate"

The title of this post is the title of this notable new article authored by Anne Washington and now available via SSRN.  Here is its abstract:

The United States optimizes the efficiency of its growing criminal justice system with algorithms; however, legal scholars have overlooked how to frame courtroom debates about algorithmic predictions.  In State v. Loomis, the defense argued that the court’s consideration of risk assessments during sentencing was a violation of due process because the accuracy of the algorithmic prediction could not be verified.  The Wisconsin Supreme Court upheld the consideration of predictive risk at sentencing because the assessment was disclosed and the defendant could challenge the prediction by verifying the accuracy of the data fed into the algorithm.

Was the court correct about how to argue with an algorithm?

The Loomis court ignored the computational procedures that processed the data within the algorithm.  How algorithms calculate data is as important as the quality of the data calculated.  The arguments in Loomis revealed a need for new forms of reasoning to justify the logic of evidence-based tools.  A “data science reasoning” could provide ways to dispute the integrity of predictive algorithms with arguments grounded in how the technology works.

This article’s contribution is a series of arguments that could support due process claims concerning predictive algorithms, specifically the Correctional Offender Management Profiling for Alternative Sanctions (“COMPAS”) risk assessment.  As a comprehensive treatment, this article outlines the due process arguments in Loomis, analyzes arguments in an ongoing academic debate about COMPAS, and proposes alternative arguments based on the algorithm’s organizational context.

Risk assessment has dominated one of the first wide-ranging academic debates within the emerging field of data science.  ProPublica investigative journalists claimed that the COMPAS algorithm is biased and released their findings as open data sets.  The ProPublica data started a prolific and mathematically specific conversation about risk assessment as well as a broader conversation on the social impact of algorithms.  The ProPublica-COMPAS debate repeatedly considered three main themes: mathematical definitions of fairness, explainable interpretation of models, and the importance of population comparison groups.
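To make the first theme concrete: a minimal, hypothetical Python sketch (toy numbers, not the actual COMPAS model or the ProPublica data sets) of two competing fairness definitions from the debate.  ProPublica focused on error-rate balance (e.g., false positive rates across groups), while COMPAS's vendor emphasized predictive parity (equal positive predictive value).  When base rates of reoffending differ between groups, a classifier can satisfy one definition while violating the other:

```python
# Toy records: (group, labeled_high_risk, reoffended) -- illustrative only.
records = [
    # Group A (higher base rate of reoffending)
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 1),  # high risk, reoffended
    ("A", 1, 0), ("A", 1, 0),                            # high risk, did not (false positives)
    ("A", 0, 1),                                         # low risk, reoffended
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 0),               # low risk, did not
    # Group B (lower base rate of reoffending)
    ("B", 1, 1), ("B", 1, 1),                            # high risk, reoffended
    ("B", 1, 0),                                         # high risk, did not (false positive)
    ("B", 0, 1),                                         # low risk, reoffended
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),               # low risk, did not
]

def rates(group):
    recs = [r for r in records if r[0] == group]
    tp = sum(1 for _, h, y in recs if h and y)        # high risk, reoffended
    fp = sum(1 for _, h, y in recs if h and not y)    # high risk, did not
    negatives = sum(1 for _, _, y in recs if not y)   # all who did not reoffend
    fpr = fp / negatives          # false positive rate (ProPublica's focus)
    ppv = tp / (tp + fp)          # positive predictive value (calibration)
    return fpr, ppv

fpr_a, ppv_a = rates("A")
fpr_b, ppv_b = rates("B")
print(f"Group A: FPR={fpr_a:.3f}, PPV={ppv_a:.3f}")
print(f"Group B: FPR={fpr_b:.3f}, PPV={ppv_b:.3f}")
```

On these toy numbers the tool is equally well calibrated for both groups (PPV about 0.667 each), yet non-reoffenders in Group A are flagged high-risk far more often (FPR 0.400 vs. 0.143) -- the structure of the disagreement between ProPublica and COMPAS's defenders, reduced to arithmetic.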

While the Loomis decision addressed permissible use for a risk assessment at sentencing, a deeper understanding of daily practice within the organization could extend debates about algorithms to questions about procurement, implementation, or training.  The criminal justice organization that purchased the risk assessment is in the best position to justify how one individual’s assessment matches the algorithm designed for its administrative needs.  People subject to a risk assessment cannot conjecture how the algorithm ranked them without knowing why they were classified within a certain group and what criteria control the rankings.  The controversy over risk assessment algorithms hints at whether procedural due process is the cost of automating a criminal justice system that is operating at administrative capacity.


