October 27, 2017

Expressing concerns about how risk assessment algorithms learn

This New York Times op-ed, headlined "When an Algorithm Helps Send You to Prison," is authored by Ellora Thadaney Israni, a law student and former software engineer at Facebook. In the course of covering now-familiar ground in the debate over the use of risk assessment tools at sentencing, the piece adds some points about how these tools may evolve and soundly urges more transparency in their creation and development:

Machine learning algorithms often work on a feedback loop.  If they are not constantly retrained, they “lean in” to the assumed correctness of their initial determinations, drifting away from both reality and fairness.  As a former Silicon Valley software engineer, I saw this time and again: Google’s image classification algorithms mistakenly labeling black people as gorillas, or Microsoft’s Twitter bot immediately becoming a “racist jerk.”...

With transparency and accountability, algorithms in the criminal justice system do have potential for good.  For example, New Jersey used a risk assessment program known as the Public Safety Assessment to reform its bail system this year, leading to a 16 percent decrease in its pre-trial jail population.  The same algorithm helped Lucas County, Ohio double the number of pre-trial releases without bail, and cut pre-trial crime in half.  But that program’s functioning was detailed in a published report, allowing those with subject-matter expertise to confirm that morally troubling (and constitutionally impermissible) variables — such as race, gender and variables that could proxy the two (for example, ZIP code) — were not being considered.

For now, the only people with visibility into COMPAS’s functioning are its programmers, who are in many ways less equipped than judges to deliver justice.  Judges have legal training, are bound by ethical oaths, and must account for not only their decisions but also their reasoning in published opinions.  Programmers lack each of these safeguards. Computers may be intelligent, but they are not wise.  Everything they know, we taught them, and we taught them our biases.  They are not going to un-learn them without transparency and corrective action by humans.
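The op-ed's "feedback loop" point is worth pausing on, because it is easy to illustrate. What follows is a minimal, purely hypothetical sketch; it does not depict COMPAS, the Public Safety Assessment, or any real tool, and the areas, rates, and update rule are invented for illustration. Two areas have the same true incident rate, the model starts out believing one is riskier, observations are allocated according to that belief, and naive retraining on the resulting counts preserves the initial disparity rather than correcting it.

```python
import random

random.seed(1)

TRUE_RATE = 0.2                          # identical underlying incident rate in both areas
belief = {"north": 0.5, "south": 0.2}    # the model's initial, mistaken belief
TOTAL_CHECKS = 1000                      # fixed budget of observations per round

for round_num in range(1, 6):
    total_belief = sum(belief.values())
    incidents = {}
    for area, b in belief.items():
        # More checks are spent where the model already believes risk is high.
        checks = int(TOTAL_CHECKS * b / total_belief)
        incidents[area] = sum(random.random() < TRUE_RATE for _ in range(checks))
    # "Retraining": the new belief is proportional to raw incident counts,
    # with no correction for how unevenly the checks were allocated.
    total_incidents = sum(incidents.values()) or 1
    belief = {area: incidents[area] / total_incidents for area in belief}
    print(f"round {round_num}: " + ", ".join(f"{a}={v:.2f}" for a, v in belief.items()))
```

Run for a few rounds, the printed beliefs stay close to the mistaken starting ratio even though the underlying rates are identical, which is the sense in which such a system "leans in" to its initial determinations unless humans intervene with better ground truth.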

October 27, 2017 at 04:24 PM

Comments

My proposal has the algorithms owned, written, and revised by the legislature, all with tort liability for deviations from professional standards of due care. The revisions should be driven by validated household crime victimization surveys.

Posted by: David Behar | Oct 27, 2017 10:03:11 PM
