
May 7, 2019

"Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System"

The title of this post is the title of this notable new report "written by the staff of the Partnership on AI (PAI) and many of [its] Partner organizations."  Here is part of the report's executive summary:

This report documents the serious shortcomings of risk assessment tools in the U.S. criminal justice system, most particularly in the context of pretrial detentions, though many of our observations also apply to their uses for other purposes such as probation and sentencing.  Several jurisdictions have already passed legislation mandating the use of these tools, despite numerous deeply concerning problems and limitations. Gathering the views of the artificial intelligence and machine learning research community, PAI has outlined ten largely unfulfilled requirements that jurisdictions should weigh heavily and address before further use of risk assessment tools in the criminal justice system.

Using risk assessment tools to make fair decisions about human liberty would require solving deep ethical, technical, and statistical challenges, including ensuring that the tools are designed and built to mitigate bias at both the model and data layers, and that proper protocols are in place to promote transparency and accountability.  The tools currently available and under consideration for widespread use suffer from several of these failures, as outlined within this document.

We identified these shortcomings through consultations with our expert members, as well as by reviewing the literature on risk assessment tools and publicly available resources regarding tools currently in use.  Our research was limited in some cases by the fact that most tools do not provide sufficiently detailed information about their current usage to evaluate them on all of the requirements in this report.  Jurisdictions and companies developing these tools should implement Requirement 8, which calls for greater transparency around the data and algorithms used, to address this issue for future research projects.  That said, many of the concerns outlined in this report apply to any attempt to use existing criminal justice data to train statistical models or to create heuristics to make decisions about the liberty of individuals.

Challenges in using these tools effectively fall broadly into three categories, each of which corresponds to a section of our report:

-- Concerns about the validity, accuracy, and bias in the tools themselves;

-- Issues with the interface between the tools and the humans who interact with them; and

-- Questions of governance, transparency, and accountability.

Although the use of these tools is in part motivated by the desire to mitigate existing human fallibility in the criminal justice system, it is a serious misunderstanding to view these tools as objective or neutral simply because they are based on data.  While formulas and statistical models provide some degree of consistency and replicability, they still share or amplify many weaknesses of human decision-making.  Decisions regarding what data to use, how to handle missing data, what objectives to optimize, and what thresholds to set all have significant implications for the accuracy, validity, and bias of these tools, and ultimately for the lives and liberty of the individuals they assess....
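
As a rough illustration of the threshold point, the sketch below (with invented scores and outcomes, not drawn from any real tool) shows how moving a single detention cutoff changes the mix of errors produced by the very same risk scores:

```python
# Hypothetical illustration: identical risk scores evaluated at two different
# detention thresholds yield very different error profiles.
risk_scores = [0.12, 0.35, 0.48, 0.52, 0.61, 0.77, 0.83, 0.91]      # invented model outputs
reoffended  = [False, False, True, False, True, False, True, True]  # invented outcomes

def error_rates(scores, outcomes, threshold):
    """Treat scores at or above the threshold as 'detain' and report error rates."""
    false_pos = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and not y)
    false_neg = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y)
    negatives = sum(1 for y in outcomes if not y)
    positives = sum(1 for y in outcomes if y)
    return false_pos / negatives, false_neg / positives

for cutoff in (0.5, 0.8):
    fpr, fnr = error_rates(risk_scores, reoffended, cutoff)
    print(f"cutoff={cutoff}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

With these invented numbers, a 0.5 cutoff detains half of the people who would not have re-offended, while a 0.8 cutoff releases half of those who would; neither choice is dictated by the data itself.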

In light of these issues, as a general principle, these tools should not be used alone to make decisions to detain or to continue detention.  Given the pressing issue of mass incarceration, it might be reasonable to use these tools to facilitate the automatic pretrial release of more individuals, but they should not be used to detain individuals automatically without additional (and timely) individualized hearings.  Moreover, any use of these tools should address the bias, human-computer interface, transparency, and accountability concerns outlined in this report.

This report highlights some of the key problems encountered using risk assessment tools for criminal justice applications.  Many important questions remain open, however, and unknown issues may yet emerge in this space.  Surfacing and answering those concerns will require ongoing research and collaboration between policymakers, the AI research community, and civil society groups.  It is PAI’s mission to spur and facilitate these conversations and to produce research to bridge these gaps.

May 7, 2019 at 12:56 PM | Permalink

Comments

Three quick notes -- most of which aren't new.

Transparency is crucial. Parties need to know what factors matter so that they can present information to the court/agency performing the assessment to ensure that the assessment is accurate. Additionally, transparency makes it easier for policymakers to correct the assessment.

Correlation vs. Causation -- Certain factors (e.g., prior criminal history) probably do make it more likely that the subject of the assessment will re-offend. Other factors might simply be common in high-crime areas and do not distinguish between the criminals and the law-abiding portion of the community. It is important to limit factors that merely track race, poverty, and where a person lives, and to rely instead on factors that distinguish the high-risk from the low-risk members of those communities.
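
To illustrate (with made-up numbers) how a place-based factor can look predictive in the aggregate without separating individuals within a community:

```python
# Made-up records: (lives_in_high_arrest_area, re_offended)
records = [
    (True, True), (True, False), (True, True), (True, False),
    (False, False), (False, True), (False, False), (False, False),
]

def failure_rate(rows):
    return sum(1 for _, failed in rows if failed) / len(rows)

in_area = failure_rate([r for r in records if r[0]])       # 0.50
out_area = failure_rate([r for r in records if not r[0]])  # 0.25

# The area flag shifts the group averages, so a model will happily use it --
# but within the high-arrest area it raises everyone's score equally and
# still cannot say which residents are actually the higher-risk ones.
print(f"high-arrest area: {in_area:.2f}, elsewhere: {out_area:.2f}")
```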

How much risk do we want -- Any risk assessment is a probability analysis. Accuracy means that if a certain score predicts that X% of people will "fail" (i.e., re-offend or fail to appear), then, over time, approximately X% of the offenders with that score will fail. But if X% of people with score Y will fail to appear or re-offend, then (100-X)% will not. No score can demonstrate that an offender will certainly re-offend or fail to appear, or that an offender will make all court dates and not re-offend. An accurate risk assessment tool can tell you the level of risk, but there still needs to be a policy decision about how much risk is too much -- remembering that too much leniency bears the cost of additional victims or of finding an offender who absconds, while too little leniency means spending money incarcerating somebody who did not need to be incarcerated.
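
A small worked example of that calibration point (band size and failure rate invented purely for illustration):

```python
# Suppose 100 people fall in a score band that is accurately calibrated
# at a 30% failure rate (X = 30 in the terms above).
people_in_band = 100
failure_rate = 0.30

expected_failures = people_in_band * failure_rate         # ~30 would fail
expected_successes = people_in_band * (1 - failure_rate)  # ~70 would not

# Detaining the whole band prevents the ~30 expected failures but also
# incarcerates ~70 people who would have complied; releasing the whole band
# does the reverse. The tool supplies the 30/70 split -- deciding whether
# 30% is too much risk is a policy choice, not a statistical one.
print(f"detain all:  ~{expected_successes:.0f} detained unnecessarily")
print(f"release all: ~{expected_failures:.0f} expected failures")
```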

Posted by: tmm | May 7, 2019 4:31:26 PM
