April 18, 2017
"Courts Are Using AI to Sentence Criminals. That Must Stop Now."
The title of this post is the headline of this new WIRED commentary authored by Jason Tashea. Here are excerpts:
Currently, courts and corrections departments around the US use algorithms to determine a defendant's "risk," which ranges from the probability that an individual will commit another crime to the likelihood a defendant will appear for his or her court date. These algorithmic outputs inform decisions about bail, sentencing, and parole. Each tool aspires to improve on the accuracy of human decision-making, allowing for a better allocation of finite resources.
Typically, government agencies do not write their own algorithms; they buy them from private businesses. This often means the algorithm is proprietary or “black boxed”, meaning only the owners, and to a limited degree the purchaser, can see how the software makes decisions. Currently, there is no federal law that sets standards or requires the inspection of these tools, the way the FDA does with new drugs.
This lack of transparency has real consequences. In the case of Wisconsin v. Loomis, defendant Eric Loomis was found guilty of his role in a drive-by shooting. During intake, Loomis answered a series of questions that were then entered into Compas, a risk-assessment tool developed by a privately held company and used by the Wisconsin Department of Corrections. The trial judge gave Loomis a long sentence partially because of the "high risk" score the defendant received from this black box risk-assessment tool. Loomis challenged his sentence because he was not allowed to examine the algorithm. Last summer, the state supreme court ruled against Loomis, reasoning that knowledge of the algorithm's output was a sufficient level of transparency.
By keeping the algorithm hidden, Loomis leaves these tools unchecked. This is a worrisome precedent as risk assessments evolve from algorithms that are possible to assess, like Compas, to opaque neural networks. Neural networks, deep-learning models loosely modeled on the human brain, cannot be transparent because of their very nature. Rather than being explicitly programmed, a neural network creates connections on its own. This process is hidden and always changing, which runs the risk of limiting a judge's ability to render a fully informed decision and defense counsel's ability to zealously defend their clients....
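[Editorial aside: the transparency contrast the author draws can be made concrete with a minimal sketch. The feature names and weights below are entirely hypothetical illustrations, not drawn from Compas or any real risk-assessment instrument.]

```python
# A transparent, rule-style risk score: every input's contribution to the
# final number can be read off directly, the kind of accounting a court or
# defense counsel could examine. (All features and weights are hypothetical.)

FEATURES = ["prior_arrests", "age_under_25", "prior_failure_to_appear"]
WEIGHTS = {
    "prior_arrests": 0.4,
    "age_under_25": 0.3,
    "prior_failure_to_appear": 0.3,
}

def linear_risk(inputs):
    """Return (total score, per-feature contributions)."""
    contributions = {f: WEIGHTS[f] * inputs[f] for f in FEATURES}
    return sum(contributions.values()), contributions

score, why = linear_risk(
    {"prior_arrests": 3, "age_under_25": 1, "prior_failure_to_appear": 0}
)
# 'why' itemizes exactly which inputs drove the score. A trained neural
# network offers no comparable per-input breakdown: its "reasoning" is
# distributed across learned weights with no individually meaningful parts,
# which is the opacity problem the excerpt describes.
```

The point of the sketch is the interface, not the arithmetic: with a linear score, the question "did a constitutionally dubious factor determine the result?" has an inspectable answer; with an opaque learned model, it does not.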
[H]ow does a judge weigh the validity of a risk-assessment tool if she cannot understand its decision-making process? How could an appeals court know if the tool decided that socioeconomic factors, a constitutionally dubious input, determined a defendant’s risk to society? Following the reasoning in Loomis, the court would have no choice but to abdicate a part of its responsibility to a hidden decision-making process.
Already, basic machine-learning techniques are being used in the justice system. The not-far-off role of AI in our courts creates two potential paths for the criminal justice and legal communities: Either blindly allow the march of technology to go forward, or create a moratorium on the use of opaque AI in criminal justice risk assessment until there are processes and procedures in place that allow for a meaningful examination of these tools. The legal community has never fully discussed the implications of algorithmic risk assessments. Now, only after these tools have proliferated, attorneys and judges are grappling with their impact and the lack of oversight.
To hit pause and create a preventative moratorium would allow courts time to create rules governing how AI risk assessments should be examined during trial. It would give policy makers the window to create standards and a mechanism for oversight. Finally, it would allow educational and advocacy organizations time to teach attorneys how to handle these novel tools in court. These steps can reinforce the rule of law and protect individual rights.
As noted in this prior post, the Loomis case is now pending before the US Supreme Court, and SCOTUS has requested a brief from the Acting Solicitor General concerning a possible cert grant. And here are some prior related posts on the Loomis case:
- Wisconsin appeals court urges state's top court to review use of risk-assessment software at sentencing
- Looking into the Wisconsin case looking into the use of risk-assessment tools at sentencing
- Wisconsin Supreme Court rejects due process challenge to use of risk-assessment instrument at sentencing
- No grants, but latest SCOTUS order list still has lots of intrigue for criminal justice fans (especially those concerned with risk-assessment sentencing)
April 18, 2017 at 10:28 AM
I don't see it being much different than unaided sentencing. It's not like it is possible to examine the sentencing judge's mind any more than it is possible to examine these algorithms.
Posted by: Soronel Haetir | Apr 18, 2017 12:38:37 PM
Machines are 100 times better than living beings. Compare a car to a horse. I have proposed that legislature-enacted algorithms replace prosecutors, defense lawyers and judges for an upgrade in consistency, compliance with the law, and proper due process.
Chess has roughly 35 possible moves per position, and is similar to legal decision making in complexity. Computers beat the best chess players long ago. Wired ran an article where a computer beat the best Go player. This is a Chinese board game with vastly more possible positions than chess.
Which of you would ride a horse commuting to work in a snow storm? That is what is happening in sentencing law and policy. The atavism of this specialty is ridiculous, and is a strong factor in its utter failure.
Private algorithms should be banned, I agree.
Some of the savings must be invested in follow-up studies to establish the highest validity possible in science, predictive validity. These data may be used by the legislatures to continuously improve these algorithms. Errors in the algorithms should subject the legislature to tort liability, for example, a wrong data entry or a simple math error. All immunities must be repealed, including the Eleventh Amendment and its case law.
Posted by: David Behar | Apr 18, 2017 4:57:22 PM