« "Drug Policy as Race Policy: Best Seller Galvanizes the Debate" | Main | "From Peer-to-Peer Networks to Cloud Computing: How Technology Is Redefining Child Pornography Laws" »

March 7, 2012

Notable FPD fact sheet says "TRAC Analysis of Variations in Sentencing Misses the Mark"

This afternoon I received an effective, brief "Fact Sheet" produced by some federal public defenders that discusses the limits of the federal sentencing data released by TRAC earlier this week.  The full two-page fact sheet, which is titled "TRAC Analysis of Variations in Sentencing Misses the Mark" and can be downloaded below, gets started this way:

On March 5, 2012, the Transactional Records Access Clearinghouse (TRAC) announced “Wide Variations Seen in Federal Sentencing.” The press release accompanying TRAC’s report stated it had discovered “extensive and hard-to-explain variations in the sentencing practices of district court judges.”  Media reports claimed “widely disparate sentences for similar crimes.” (AP)

The data released by TRAC might in the future shed light on federal sentencing, but its initial analyses, and media coverage, demonstrate the danger of a little knowledge about a complex subject.  TRAC’s analysis fails to meet minimal academic standards and should not be a basis for policy making.

The cases sentenced by the judges in the study are not similar.

  • The only similarity among the cases sentenced in each district is that prosecutors categorized them as “drug,” “white collar,” etc. All other case differences are ignored.  Heroin or marijuana cases, involving 1 gram or 1 ton, are all called “similar” drug cases.  First-time offenders are lumped with lifetime criminals.
  • Academic researchers studying disparity use data from the U. S. Sentencing Commission to categorize cases along dozens of different variables, but this data was not used in TRAC’s analysis.

The intra-district comparisons intended to control for differences among cases are flawed.

  • The study compared median (half below, half above) sentences among judges in a particular district, on the assumption that these judges sentenced similar types of cases. But this is often untrue. 
  • Many districts have several courthouses in different cities, which sentence very different types of crimes.  Average sentences should be different among judges who sentence different types of offenses and offenders.
  • Academic researchers faced with this problem are careful to compare only judges in the same courthouse who are part of the same random case assignment pool.  This helps compensate for individual case differences in the long run.
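The fact sheet's closing point can be made concrete with a toy simulation.  This is my own sketch with invented numbers (not TRAC or Sentencing Commission data): two judges who apply an identical sentencing rule, but who sit in courthouses with very different case mixes, will show very different median sentences anyway.

```python
# Toy illustration with invented numbers (not TRAC or Sentencing Commission data):
# two judges apply the *same* sentencing rule, but their courthouses see different
# case mixes, so their median sentences diverge anyway.
import random
import statistics

random.seed(0)

def sentence(severity):
    # Identical behavior for both judges: months imposed is a noisy function of severity.
    return max(0.0, severity * 12 + random.gauss(0, 3))

# Courthouse A sees mostly minor cases; Courthouse B sees mostly serious ones.
judge_a_cases = [random.uniform(0.5, 2.0) for _ in range(40)]
judge_b_cases = [random.uniform(2.0, 6.0) for _ in range(40)]

median_a = statistics.median(sentence(c) for c in judge_a_cases)
median_b = statistics.median(sentence(c) for c in judge_b_cases)
print(f"Judge A median: {median_a:.1f} months; Judge B median: {median_b:.1f} months")
# The gap reflects the docket, not the judges, which is the fact sheet's objection.
```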

Download FPD Fact Sheet on TRAC data


March 7, 2012 at 08:56 PM | Permalink


Comments

This is a song about desperation. Every now and then we do get desperate.

Posted by: Denying the Obvious; Missing the Forest | Mar 8, 2012 7:06:18 AM

That's why they (a) only chose judges who had 40+ of these cases, (b) took the median, and (c) analyzed it per judge. Across 40 cases, assuming a random distribution, you'd expect each judge to have good cases and bad cases, and the median would be a relatively safe representation of how they sentence. In some of the districts, one judge had a median twice that of all the other judges. That can't easily be explained by case differences.

Put another way, say you picked up 200 rocks. Each rock would be different. Some larger, some smaller. You then divided them into 4 piles of 50 rocks randomly and measured the median weight of the rocks in each pile with four different scales that should be identical. If one pile had a median measured weight that was twice that of the other three, you should be suspicious of that scale.
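A quick Monte Carlo of that rock example, with made-up weights rather than sentencing data, shows how rarely a doubled median turns up when the piles really are random:

```python
# Monte Carlo sketch with invented weights: split 200 rocks into 4 random piles
# of 50 and count how often one pile's median weight comes out at least twice
# every other pile's median.
import random
import statistics

random.seed(1)
trials, doubled = 10_000, 0
for _ in range(trials):
    rocks = [random.lognormvariate(0, 1) for _ in range(200)]  # varied, skewed weights
    random.shuffle(rocks)
    piles = [rocks[i * 50:(i + 1) * 50] for i in range(4)]
    medians = [statistics.median(p) for p in piles]
    if any(m >= 2 * max(x for j, x in enumerate(medians) if j != i)
           for i, m in enumerate(medians)):
        doubled += 1
print(f"Trials where one pile's median doubled the rest: {doubled / trials:.2%}")
# Under random assignment a doubled median is a rare event, so a judge whose
# median doubles the rest of the district invites explanation.
```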

Posted by: Anonymouse | Mar 8, 2012 7:22:26 AM

Anonymouse,

Unless 50% of the rocks came from one location, 30% from another, and the remainder from still a third, and you then measured the piles without mixing them. It would not surprise me at all if such an exercise produced widely varying results. And even then the entire process has problems, because the very fact that you have someone picking up rocks is probably going to limit the range of what they try measuring.

I live on the ocean coast in Alaska and the beach below me is a giant rock field. The rocks range from larger than a car (three or four of those), to several hundred that would still require heavy equipment to move, to thousands or millions that could easily be moved by a single person. Now, unless someone has that heavy equipment they are going to completely ignore the boulders, while someone who does have the equipment is likely to ignore the smaller stuff. And so you would get two very different answers from gleaning rocks, even from the same location.
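Here is a sketch of that objection, again with invented numbers: when the piles are formed by location rather than by shuffling everything together, the medians diverge widely even though the scale is honest.

```python
# Sketch with invented numbers: piles formed by location (no mixing) produce
# widely different medians even with a single accurate scale.
import random
import statistics

random.seed(2)
cobbles  = [random.lognormvariate(0.0, 0.5) for _ in range(100)]  # 50%: small rocks
midsized = [random.lognormvariate(1.5, 0.5) for _ in range(60)]   # 30%: mid-sized
boulders = [random.lognormvariate(3.0, 0.5) for _ in range(40)]   # 20%: boulders

piles = [
    cobbles[:50],              # pile 1: small rocks only
    cobbles[50:],              # pile 2: small rocks only
    midsized[:50],             # pile 3: mid-sized only
    midsized[50:] + boulders,  # pile 4: mid-sized plus boulders
]
for i, pile in enumerate(piles, 1):
    print(f"Pile {i}: median weight {statistics.median(pile):.2f}")
# The piles were never randomized, so their medians say more about where the
# rocks came from than about the scales used to weigh them.
```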

In short, whatever the merits or faults of the study, I don't think your rock picking analogy is any sort of valid criticism.

Posted by: Soronel Haetir | Mar 8, 2012 10:03:53 AM

I sent TRAC a message telling them that the info was not useful for yet another reason: How many of these sentences resulted from plea deals versus trial? (Of course we know defendants rarely go to trial, and generally a defendant who opts for trial is punished by facing added charges way beyond the proposed plea deal.)

Posted by: Gloria Grening Wolk | Mar 16, 2012 4:08:42 PM
