October 13, 2013
Parole precogs: computerized risk assessments impacting state parole decision-making

Predicting who is likely to commit a crime in the future is no easy task, as fans of "Minority Report" know well. But states that retain discretionary parole release mechanisms require their officials to do just that, at least to some extent. And, as this lengthy Wall Street Journal article explains, state officials are (in my view, wisely) relying more and more on computerized risk assessment instruments when making parole decisions. The WSJ piece is headlined "State Parole Boards Use Software to Decide Which Inmates to Release: Programs look at prisoners' biographies for patterns that predict future crime," and here are excerpts:
Driven to cut ballooning corrections costs, more states are requiring parole boards to make better decisions about which convicts to keep in prison and which to release. Increasingly, parole officials are adopting data- and evidence-based methods, many involving software programs, to calculate an inmate's odds of recidivism.
The policy changes are leading to a quiet and surprising shift across the U.S. in how parole decisions are made. Officials accustomed to relying heavily on experience and intuition when making parole rulings now find they also must take computerized inmate assessments and personality tests into account.
In the traditional system, factors like the severity of a crime or whether an offender shows remorse weigh heavily in parole rulings, criminologists say. By contrast, automated assessments based on inmate interviews and biographical data such as age at first arrest are designed to recognize patterns that may predict future crime and make release decisions more objective, advocates of the new tools say.
In the past several years, at least 15 states including Louisiana, Kentucky, Hawaii and Ohio have implemented policies requiring corrections systems to adopt modern risk-assessment methods, according to the Pew Charitable Trusts' Public Safety Performance Project. California is using computerized inmate assessments to make decisions about levels of supervision for individual parolees. This year, West Virginia began requiring that all felons receive risk assessments; judges receive the reports before sentencing with the option to incorporate the scores into their decisions.
Such methods can contradict the instincts of corrections officials, by classifying violent offenders as a lower recidivism risk than someone convicted of a nonviolent robbery or drug offense. Criminologists say people convicted of crimes like murder often are older when considered for release, making them less likely to reoffend. Inmates convicted of nonviolent crimes like property theft, meanwhile, tend to be younger, more impulsive and adventurous—all predictors of repeat criminality....
Wider acceptance of computerized risk assessments, along with other measures to reduce state corrections budgets, has coincided with the first declines in the national incarceration rate in more than a decade.
The number of inmates in state and federal facilities fell nearly 1% in 2011 to 1.6 million, after edging down 0.1% in the prior year. The 2011 decline came entirely from state prisons, which shed 21,600 inmates, offsetting an increase of 6,600 federal prisoners. Preliminary 2012 data shows an even larger decline of 29,000 state inmates.
Experts say one reason for the decline is that fewer parolees are returning to prison. About 12% of parolees were re-incarcerated at some point in 2011 compared with 15% in 2006, representing the fifth straight year of decline, according to Justice Department data.
Texas, by reputation a tough-on-crime state, has been consistently using risk assessment longer than many states and is boosting the number of prisoners it paroles each year. With its current system, in use since 2001, it released 37% of parole applicants in 2012 versus 28% in 2005 — some 10,000 more prisoners released in 2012.
Officials in Michigan credit computerized assessments, introduced in 2006 and adopted statewide in 2008, with helping reduce the state's prison population by more than 15% from its peak in 2007 and with lowering the three-year recidivism rate by 10 percentage points since 2005.
Still, experts say it is difficult to measure the direct impact of risk prediction because states have also taken other steps to rein in corrections costs, such as reducing penalties for drug offenses and transferring inmates to local jails.
Michigan's assessments withstood a legal challenge in 2011, when prosecutors sought to reverse the parole of Michelle Elias, who had served 25 years for murdering her lover's husband. A lower court, siding with the prosecutor, ruled the parole board hadn't placed enough weight on the "egregious nature of the crime," court documents say. The Michigan Court of Appeals overturned the decision and upheld Ms. Elias's release.
Yet earlier this month, the same appeals court ruled the Michigan parole board had abused its discretion by releasing a man convicted of molesting his daughter. He hadn't received sex-offender therapy while in prison, but three assessments, including one using [the computer program] Compas, had deemed him a low risk of reoffending. The appeals court, in an unpublished decision that echoed a lower court, said that Compas could be manipulated if presented "with inadequate data or individuals who lie."
The Compas software designer, Northpointe Inc., says the assessments are meant to improve, not replace, human intelligence. Tim Brennan, chief scientist at Northpointe, a unit of Volaris Group, said the Compas system has features that help detect lying, but that data-entry mistakes or inmate deceptiveness can affect accuracy. The company says officials should override the system's decisions at rates of 8% to 15%.
Many assessment systems lean heavily on research by criminologists including Edward Latessa, professor at the Center for Criminal Justice Research at the University of Cincinnati. Parole boards, typically staffed with political appointees, have lacked the information, training and time to make sound decisions about who should be released, Dr. Latessa said. The process, he said, is one factor contributing to the population surge in the nation's prisons, including a fivefold increase in the number of prisoners nationwide from 1978 to 2009, according to the Department of Justice.
"The problem with a judge or a parole board is they can't pull together all the information they need to make good decisions," said Dr. Latessa, who developed an open-source software assessment system called ORAS used in Ohio and other states. Ohio adopted ORAS last year as the result of legislation aimed at addressing overcrowded prisons and containing corrections spending. Dr. Latessa does paid consulting work with state corrections agencies but isn't paid for use of the system. "They look at one or two things," he said. "Good assessment tools look at 50 things."
Some assessments analyze as many as 100 factors, including whether the offender is married, the age of first arrest and whether he believes his conviction is unfair. In Texas, a rudimentary risk assessment measures just 10 factors. Data gathered in interviews with inmates is transmitted to the offices of Texas parole board members, who vote remotely, often by computer.
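To make the factor-counting concrete, here is a minimal sketch of how a simple additive actuarial score works. The factors, weights and cut points below are invented for illustration; real instruments such as ORAS or Compas use items and weights derived from validation studies, not these.

```python
# Toy additive risk score: sum weighted points over a few hypothetical
# factors, then map the total to a risk band. All numbers are invented.

def risk_score(offender):
    """Sum weighted points over a handful of hypothetical factors."""
    points = 0
    points += 3 if offender["age_at_first_arrest"] < 18 else 0
    points += 2 * offender["prior_incarcerations"]
    points += 2 if not offender["married"] else 0
    points += 1 if offender["believes_conviction_unfair"] else 0
    return points

def risk_band(score):
    # Real tools derive cut points from recidivism rates observed in
    # large validation samples; these thresholds are arbitrary.
    if score <= 2:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

inmate = {
    "age_at_first_arrest": 16,
    "prior_incarcerations": 2,
    "married": False,
    "believes_conviction_unfair": False,
}
score = risk_score(inmate)
print(score, risk_band(score))  # 3 + 4 + 2 + 0 = 9 -> "9 high"
```

The point of the additive structure is transparency: each factor's contribution to the total is visible, which is also why a parole official can see exactly which items pushed an inmate into a given band.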
Parole officials say assessment scores are just one factor they consider. Some experts say relying on statistics can result in racial bias, even though questionnaires don't explicitly ask about race. Data such as how many times a person has been incarcerated can act as an unfair proxy for race, said Bernard Harcourt, a University of Chicago professor of law and political science. "There's a real connection between race and prior criminal history, and a real link between prior criminal history and prediction," Mr. Harcourt said. "The two combine in a toxic and combustive way."
Christopher Baird, former head of the National Council on Crime and Delinquency, said statistical tools are best used to help set supervision guidelines for parolees rather than determine prison sentences or decide who should be released. "It's very important to realize what their limitations are," said Mr. Baird, who developed one of the earliest risk-assessment tools, for the state of Wisconsin in the late 1970s. "That's lost when you start introducing the word 'prediction' and start applying that to individual cases."
October 13, 2013 at 06:57 PM | Permalink
I strongly support the use of empirically validated decision-making. The best, fairest and safest is still 123D.
Daubert applies to the criminal trial. Does it apply to sentencing decisions at trial and after trial, such as in revocation of parole? I don't know the case precedents.
Since these arguments support the defense, I want fairness credit.
1) Most risk factors are derived from studies of large groups where parametric statistics are used. Predicting the behavior of a single subject requires binomial statistics. Parametric statistics accurately predict rates in larger populations; they cannot be applied to an individual case. The binomial statistic is the one that describes coin tosses. It is covered on Day 1 of 11th grade high school statistics. It appears these leaders skipped class that day. There are other weaknesses as well, such as exclusion criteria and non-random selection of subjects.
2) Because 95% of charges are pled to, the charges of conviction are fictitious and far less severe than the real crime. So the input is fictitious.
3) Even if the criminal admits to non-adjudicated crimes, criminals forget a lot. So there is a marked underestimate of the number of crimes committed. I would double the number from immunized self-reporting and increase tenfold the number of crimes in the record.
Posted by: Supremacy Claus | Oct 13, 2013 10:55:20 PM
These assessment tests have NO scientific validity. Of course, neither do the gut feelings of parole commissioners. But often parole decisions are made for political reasons and not for what parole was designed to do. Many states are doing away with parole and going to sentences served in the 85-87 percent range. Many states use various sex-offender risk assessments that are similar in nature to ORAS. Such tests remove the human element from the decision-making process. The test answers can be completed by untrained individuals and, through the magic of psychological terms, carry more weight than their predictive value warrants.
Posted by: m | Oct 13, 2013 11:25:16 PM
This article is using the term "robbery" in a sense that I am unfamiliar with if it is counted as a non-violent offense.
Posted by: Soronel Haetir | Oct 14, 2013 1:19:52 AM
...by classifying violent offenders as a lower recidivism risk than someone convicted of a nonviolent robbery or drug offense...
I concur with Soronel Haetir: robbery is, by nature, a violent offense.
Criminologists say people convicted of crimes like murder often are older when considered for release, making them less likely to reoffend.
I don't think mercy killers present the same level of danger to society as serial murderers.
Some experts say relying on statistics can result in racial bias, even though questionnaires don't explicitly ask about race. Data such as how many times a person has been incarcerated can act as an unfair proxy for race...
The number of incarcerations is not an "unfair proxy for race" but a proxy for the lack of regard for the common rules.
When will some people free themselves from such racial delusions?
Posted by: visitor | Oct 14, 2013 8:19:31 AM
It just seems to make dollars and sense to take another look at an offender some time (several years?) after sentencing to see what salient factors may now be available to the decision-maker and/or matters that might be new and significant as regards release. Guidelines of some sort and risk prediction science of some kind can be added to assist the decision-making and/or to reduce politics to some extent. Most seem to agree that age is a very important consideration as to future dangerousness, and clearly deteriorating health can impact what we can predict for a release. But however high or complex the renewed release bar is set, and no matter what crimes/criminals are prohibited from such consideration, moving some out of the system earlier than the sentence would require saves money. And isn't that really the bottom line? (Yes, I know that the left and the right see other more significant purposes for sentencing, but if money wasn't key, why do they take the death penalty off the negotiating table so often?) Corrections systems move inmates down the security levels as a sound fiscal and management tool. Why not just take the next logical step? And the U.S. Parole Commission is still available to handle these matters . . . .
Posted by: alan chaset | Oct 14, 2013 10:01:52 AM
If their recidivism is down, it is probably due to treating them like human beings rather than to accurate tests.
In the news by Karen Franklin PhD
Tuesday, October 8, 2013
Study: Risk tools don't work with psychopaths
If you want to know whether that psychopathic fellow sitting across the table from you will commit a violent crime within the next three years, you might as well flip a coin as use a violence risk assessment tool.
Popular risk assessment instruments such as the HCR-20 and the VRAG perform no better than chance in predicting risk among prisoners high in psychopathy, according to a new study published in the British Journal of Psychiatry. The study followed a large, high-risk sample of released male prisoners in England and Wales.
Risk assessment tools performed fairly well for men with no mental disorder. Utility decreased for men diagnosed with schizophrenia or depression, became worse yet for those with substance abuse, and ranged from poor to no better than chance for individuals with personality disorders. But the instruments bombed completely when it came to men with high scores on the Psychopathy Checklist-Revised (PCL-R) (which, as regular readers of this blog know, has real-world validity problems all its own).
"Our findings have major implications for risk assessment in criminal populations," noted study authors Jeremy Coid, Simone Ullrich and Constantinos Kallis. "Routine use of these risk assessment instruments will have major limitations in settings with high prevalence of severe personality disorder, such as secure psychiatric hospitals and prisons."
The study, "Predicting future violence among individuals with psychopathy," may be requested from the first author, Jeremy Coid.
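The "no better than chance" benchmark in studies like this is usually the area under the ROC curve (AUC), which equals the probability that a randomly chosen recidivist scores higher on the instrument than a randomly chosen non-recidivist: 1.0 is perfect discrimination, 0.5 is a coin flip. A minimal sketch of the rank-based computation, using made-up data rather than anything from the study:

```python
import random

def auc(scores, outcomes):
    """Probability that a randomly chosen recidivist (outcome 1)
    outscores a randomly chosen non-recidivist (outcome 0);
    ties count as half a win."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly informative scores separate the groups completely: AUC = 1.0
print(auc([9, 8, 7, 2, 1, 0], [1, 1, 1, 0, 0, 0]))  # 1.0

# Scores unrelated to outcomes hover near chance (AUC ~ 0.5), which is
# roughly the level the study reports for high-PCL-R prisoners.
random.seed(0)
outcomes = [random.randint(0, 1) for _ in range(2000)]
scores = [random.random() for _ in outcomes]
print(round(auc(scores, outcomes), 2))
```

An AUC near 0.5 means the tool's rankings carry essentially no information about who will reoffend, which is the sense in which the instruments "bombed" for this subgroup.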
Posted by: George | Oct 14, 2013 1:19:30 PM
In the news by Karen Franklin PhD
Wednesday, September 4, 2013
'Authorship bias' plays role in research on risk assessment tools, study finds
Reported predictive validity higher in studies by an instrument's designers than by independent researchers
The use of actuarial risk assessment instruments to predict violence is becoming more and more central to forensic psychology practice. And clinicians and courts rely on published data to establish that the tools live up to their claims of accurately separating high-risk from low-risk offenders.
But as it turns out, the predictive validity of risk assessment instruments such as the Static-99 and the VRAG depends in part on the researcher's connection to the instrument in question.
Publication bias in pharmaceutical research has been well documented.
Published studies authored by tool designers reported predictive validity findings around two times higher than investigations by independent researchers, according to a systematic meta-analysis that included 30,165 participants in 104 samples from 83 independent studies.
Conflicts of interest shrouded
Compounding the problem, in not a single case did instrument designers openly report this potential conflict of interest, even when a journal's policies mandated such disclosure.
As the study authors point out, an instrument’s designers have a vested interest in their procedure working well. Financial profits from manuals, coding sheets and training sessions depend in part on the perceived accuracy of a risk assessment tool. Indirectly, developers of successful instruments can be hired as expert witnesses, attract research funding, and achieve professional recognition and career advancement.
These potential rewards may make tool designers more reluctant to publish studies in which their instrument performs poorly. This "file drawer problem," well established in other scientific fields, has led to a call for researchers to publicly register intended studies in advance, before their outcomes are known.
The researchers found no evidence that the authorship effect was due to higher methodological rigor in studies carried out by instrument designers, such as better inter-rater reliability or more standardized training of instrument raters.
"The credibility of future research findings may be questioned in the absence of measures to tackle these issues," the authors warn. "To promote transparency in future research, tool authors and translators should routinely report their potential conflict of interest when publishing research investigating the predictive validity of their tool."
The meta-analysis examined all published and unpublished research on the nine most commonly used risk assessment tools over a 45-year period:
Historical, Clinical, Risk Management-20 (HCR-20)
Level of Service Inventory-Revised (LSI-R)
Psychopathy Checklist-Revised (PCL-R)
Spousal Assault Risk Assessment (SARA)
Structured Assessment of Violence Risk in Youth (SAVRY)
Sex Offender Risk Appraisal Guide (SORAG)
Sexual Violence Risk-20 (SVR-20)
Violence Risk Appraisal Guide (VRAG)
Although the researchers were not able to break down so-called "authorship bias" by instrument, the effect appeared more pronounced with actuarial instruments than with instruments that used structured professional judgment, such as the HCR-20. The majority of the samples in the study involved actuarial instruments. The three most common instruments studied were the Static-99 and VRAG, both actuarials, and the PCL-R, a structured professional judgment measure of psychopathy that has been criticized for its vulnerability to partisan allegiance and other subjective examiner effects.
This is the latest important contribution by the hard-working team of Jay Singh of Molde University College in Norway and the Department of Justice in Switzerland, (the late) Martin Grann of the Centre for Violence Prevention at the Karolinska Institute, Stockholm, Sweden and Seena Fazel of Oxford University.
A goal was to settle once and for all a dispute over whether the authorship bias effect is real. The effect was first reported in 2008 by the team of Blair, Marcus and Boccaccini, in regard to the Static-99, VRAG and SORAG instruments. Two years later, the co-authors of two of those instruments, the VRAG and SORAG, fired back a rebuttal disputing the allegiance effect finding. However, Singh and colleagues say the statistic they used, the area under the receiver operating characteristic curve (AUC), may not have been up to the task, and they "provided no statistical tests to support their conclusions."
Prominent researcher Martin Grann dead at 44
Sadly, this will be the last contribution to the violence risk field by team member Martin Grann, who has just passed away at the young age of 44. His death is a tragedy for the field. Writing in the legal publication Dagens Juridik, editor Stefan Wahlberg noted Grann's "brilliant intellect" and "genuine humanism and curiosity":
Martin Grann came in the last decade to be one of the most influential voices in both academic circles and in the public debate on matters of forensic psychiatry, risk and hazard assessments of criminals and ... treatment within the prison system. His very broad knowledge in these areas ranged from the law on one hand to clinical therapies at the individual level on the other -- and everything in between. This week, he would also debut as a novelist with the book "The Nightingale."
The article, "Authorship Bias in Violence Risk Assessment? A Systematic Review and Meta-Analysis," is freely available online via PLoS ONE.
Related blog reports:
Violence risk instruments overpredicting danger (Aug. 2, 2012)
Violence risk in schizophrenics: Are forensic tools reliable predictors? (Sept. 14, 2011)
Violence risk meta-meta: Instrument choice does matter (June 19, 2011)
Psychology rife with inaccurate research findings (Nov. 20, 2011)
International violence risk researchers launch free news service (June 13, 2013)
Posted by: George | Oct 14, 2013 2:51:04 PM