October 23, 2017
"In Defense of Risk-Assessment Tools"
The title of this post is the headline of this notable new Marshall Project commentary authored by Adam Neufeld. Its subheadline highlights its main theme: "Algorithms can help the criminal justice system, but only alongside thoughtful humans." And here is an excerpt:
It may seem weird to rely on an impersonal algorithm to predict a person’s behavior given the enormous stakes. But the gravity of the outcome — in cost, crime, and wasted human potential — is exactly why we should use an algorithm.
Studies suggest that well-designed algorithms may be far more accurate than a judge alone. For example, a recent study of New York City’s pretrial decisions found that an algorithm’s assessment of risk would far outperform judges’ track record. If the city relied on the algorithm, an estimated 42 percent of detainees could be set free without any increase in people skipping trial or committing crimes pretrial, the study found.
But we are far from where we need to be in the use of these algorithms in the criminal justice system. Most jurisdictions don’t use any algorithms, relying instead on each individual judge or decisionmaker to make critical decisions based on their personal experience, intuition, and whatever they decide is relevant. Jurisdictions that do use algorithms only use them in a few areas, in some instances with algorithms that have not been critically evaluated and implemented.
Used appropriately, algorithms could help in many more areas, from predicting who needs confinement in a maximum security prison to who needs support resources after release from prison.
However, with great (algorithmic) power comes great (human) responsibility. First, before racing to adopt an algorithm, jurisdictions need to have the foundational conversation with relevant stakeholders about what their goals are in adopting an algorithm. Certain goals will be consistent across jurisdictions, such as reducing the number of people who skip trial, but other goals will be specific to a jurisdiction and cannot just be delegated to the algorithm’s creator....
Many criticisms of algorithms to date point out where they fall short. However, an algorithm should be evaluated not just against some perfect ideal, but also against the very imperfect status quo. Preliminary studies suggest these tools improve accuracy, but the research base must be expanded. Only well-designed evaluations will tell us when algorithms will improve fairness and accuracy in the criminal justice system.
Public officials have a social responsibility to pursue the opportunities that algorithms present, but to do so thoughtfully and rigorously. That is a hard balance, but the stakes are too high not to try.
A few (of many) prior related posts:
- Thoughtful account of what to think about risk assessment tools
- "The Use of Risk Assessment at Sentencing: Implications for Research and Policy"
- Wisconsin Supreme Court rejects due process challenge to use of risk-assessment instrument at sentencing
- Parole precogs: computerized risk assessments impacting state parole decision-making
- ProPublica takes deep dive to identify statistical biases in risk assessment software
- Thoughtful look into fairness/bias concerns with risk-assessment instruments like COMPAS
- "Gender, Risk Assessment, and Sanctioning: The Cost of Treating Women Like Men"
October 23, 2017 at 10:53 AM | Permalink
Comments
"For example, a recent study of New York City’s pretrial decisions found that an algorithm’s assessment of risk would far outperform judges’ track record."
That's not a benefit, that's a flaw. My problem with algorithms is that they undermine democratic accountability. They shift the totem of legitimacy away from democratic processes to academic ones. A judge may be imperfect -- no, scratch that -- is imperfect, but the people can hold a judge accountable. There is no way to hold algorithms accountable. This is especially so since most of the metrics that make up these methods are patented and considered trade secrets.
When this is grasped, the so-called argument in favor of algorithms amounts to "this closed process of voodoo produces a result I happen to like, so everyone should support it." That's not a persuasive argument, at least not to me.
Posted by: Daniel | Oct 23, 2017 11:12:45 AM
Algorithms should not be patented.
1) Any worthwhile algorithm would be the product of published statistical analysis. To use a variation on Daubert, to be scientific, any validity studies must be repeatable and verifiable, which is difficult when the component parts of the algorithm are secret. Without knowing the particular factors involved and how they are weighted, it is impossible to determine whether the components are actually statistically significant and whether the weighting corresponds to their statistical significance. Secrecy also makes it impossible for other researchers to run samples of historical cases through the algorithm and determine whether the results for particular scores match the predicted results. (In other words, if a score of 30 corresponds to a 20% risk of jumping bond or committing a new offense while on bond, have 20% of those with a score of 30 historically jumped bond or re-offended while on bond? A minimal sketch of this kind of back-test appears at the end of this comment.)
2) An algorithm can only reach the correct results if the inputs are correct. If the component parts are secret, then neither side knows what information underlies the court's decision and neither side has the ability to challenge the accuracy of that information. While the adversarial system is imperfect, so are potential alternative sources of information.
3) A patented algorithm is contrary to democratic accountability. There are some factors that -- while potentially statistically valid -- would be contrary to values that are important to society -- e.g. race, gender, age, income status, sexual orientation, etc. Even if the algorithm does not contain such factors, a secret algorithm would certainly lead to complaints that the algorithm appears to be biased against particular groups.
In any other government process, there would be a demand that any scoring system used to make decisions be set forth publicly. I know that, when I was counsel to the county zoning board, our criteria for determining which areas were ready for residential rezoning were set forth in the zoning ordinance so that applicants could determine in advance their chances of success and could challenge the zoning administrator's calculations.
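To make the back-testing point in (1) concrete, here is a minimal sketch in Python of the kind of calibration check outside researchers could run if scores and outcomes were public. The case data and the 20%/40% predicted risks below are hypothetical, invented purely for illustration; a real validation would use the jurisdiction's historical case records.

```python
# Minimal calibration check: for each risk score, does the observed rate of
# bond-jumping/re-offense match the rate the tool predicted?
# All data below are hypothetical, invented purely for illustration.
from collections import defaultdict

# Each historical case: (risk_score, predicted_risk, failed),
# where failed=True means the person jumped bond or re-offended while on bond.
cases = [
    (30, 0.20, False), (30, 0.20, True), (30, 0.20, False),
    (30, 0.20, False), (30, 0.20, False),
    (50, 0.40, True), (50, 0.40, False), (50, 0.40, True),
]

by_score = defaultdict(list)
for score, predicted, failed in cases:
    by_score[score].append((predicted, failed))

for score in sorted(by_score):
    rows = by_score[score]
    predicted = rows[0][0]  # the tool's predicted risk for this score
    observed = sum(failed for _, failed in rows) / len(rows)  # empirical rate
    print(f"score {score}: predicted {predicted:.0%}, "
          f"observed {observed:.0%} across {len(rows)} cases")
```

If the observed rates consistently diverge from the predicted rates, the scoring is miscalibrated. But none of this checking is possible when the factors and weights are trade secrets.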
Posted by: tmm | Oct 23, 2017 2:26:04 PM
My Algorithm for Sentencing Algorithms
1) Algorithms should be written and owned by the legislature.
2) They should change yearly, based on continual feedback from real world experience.
3) They should be subject to continual and unlimited public comment.
4) They should carry tort liability for any deviation from professional standards of due care, although they totally qualify for strict liability.
5) They should be farmed out to experienced data mining businesses, such as Amazon.
6) They should be validated by household surveys of crime victimization, the gold standard of crime measurement. The surveys should include crimes other than the 8 common law crimes, such as identity theft.
7) All biases, such as left wing values, should be mercilessly excluded, in compliance with Equal Protection rights. End all false mitigation factors, which are really aggravating factors. Those advocating for groups being privileged by the Democratic Party, the party of lawyer employment, should be forced to accept released felons in the houses surrounding theirs.
8) Machines are 100 times better than living beings. Try commuting to work in a car or on a horse, on a snow day. All judges and prosecutors should be replaced by robots running algorithms. Equal treatment and less extreme stupidity of decisions would be the benefits.
Posted by: David Behar | Oct 23, 2017 3:02:06 PM