
January 8, 2014

"Probability and Punishment"

The title of this post is the title of this notable new paper by Jacob Schuman now available via SSRN. Here is the abstract:

Imagine two defendants, A and B, who have each been convicted of drug trafficking. Defendant A was arrested with 1,000 grams of crack-cocaine.  Defendant B was arrested with only 100 grams of crack but also a large quantity of cash, which he more than likely, though not certainly, earned by selling 900 grams of crack shortly before his arrest.  Should A and B receive the same punishment?

Federal criminal law says that they should.  This Article will argue that they should not. The probability that A sold 1,000 grams is higher than the probability that B did, so B deserves the lighter sentence.

The justice system can never determine with absolute certainty that an accused defendant committed a particular crime.  To render judgment, therefore, the criminal law must estimate the probability that each defendant is guilty of the offense charged and then translate that probability into specific penal consequences.  The guilt stage of criminal proceedings — the criminal trial — uses what scholars have called a “threshold model” of translation. Under this model, the prosecution may convict a defendant by establishing that the likelihood that he committed the crime charged exceeds a certain “threshold” level of probability.  If it is “beyond a reasonable doubt” that the defendant did the deed, he will receive a guilty verdict.  Otherwise, he will walk free.  Neither outcome will reflect a precise measure of the odds of the defendant’s guilt. A “probabilistic model” of translation, by contrast, would vary the outcome of each trial depending on the probability that the defendant committed the crime of which he is accused.
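The contrast between the two models can be sketched numerically. The code below is an illustrative toy, not anything from the Article: the 0.95 cutoff standing in for "beyond a reasonable doubt," and the idea of scaling a penalty by probability, are assumptions chosen purely to make the distinction concrete — the law assigns no fixed number to reasonable doubt.

```python
# Toy contrast between a "threshold model" and a "probabilistic model"
# of translating probability of guilt into an outcome. The 0.95 cutoff
# is a hypothetical stand-in for "beyond a reasonable doubt."

BRD_THRESHOLD = 0.95  # assumed numeric proxy; the law fixes no number

def threshold_verdict(p_guilt: float) -> str:
    """All-or-nothing: p = 0.96 and p = 0.999 yield the same verdict."""
    return "guilty" if p_guilt >= BRD_THRESHOLD else "not guilty"

def probabilistic_outcome(p_guilt: float, full_penalty: float) -> float:
    """Varies the consequence with the estimated probability of guilt."""
    return p_guilt * full_penalty

for p in (0.60, 0.94, 0.96, 0.999):
    print(p, threshold_verdict(p), probabilistic_outcome(p, 120))
```

Note how the threshold function throws away the distance between 0.94 and 0.96, while the probabilistic function preserves it — which is the information loss the abstract is pointing at.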

This Article breaks new ground by demonstrating that the penalty stage of criminal proceedings — the sentencing hearing — also uses a “threshold model.” The United States Sentencing Guidelines instruct federal judges to make a series of factual findings that either add to or subtract from a recommended sentence for every case.  Each adjustment to the recommended sentence depends on whether a certain factual predicate is “more likely than not” to be true — just like at trial, this threshold level of probability fails to precisely measure the odds of the defendant’s culpability.  However, while scholars have offered several important justifications for the threshold model of conviction, these arguments do not hold up for the threshold model of sentencing.  Moreover, the two flaws identified with the threshold model of conviction — inefficiency and unfairness — are not only present at the penalty stage of the proceedings, but in fact are exacerbated by a few unique features of the law of sentencing.

The threshold model of sentencing poses a particular problem when it comes to determinations of drug quantity in the punishment of drug offenders.  Courts often rely on extrapolation and inference to make such determinations, and as a result, they frequently mete out lengthy sentences based on quantity estimations that carry a high risk of error. District courts and policymakers should mitigate the inefficiencies and injustices that result from these fact-findings by incorporating probability into drug quantity determinations at sentencing.
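The Article's opening Defendant A / Defendant B hypothetical can be put in the same terms. The sketch below is a hedged illustration of that hypothetical, not the Article's own method: the 0.75 probability assigned to B's disputed 900 grams is an assumed figure, chosen only to show how a preponderance threshold counts a disputed quantity in full while an expected-value approach would discount it.

```python
# Illustrative sketch of the Defendant A / Defendant B hypothetical.
# Under the preponderance ("more likely than not") threshold, a disputed
# quantity counts in full once its probability clears 0.5; an
# expected-value model would weight it by that probability instead.

def threshold_quantity(certain_grams, disputed_grams, p_disputed):
    # Threshold model: disputed amount counts fully once p > 0.5.
    return certain_grams + (disputed_grams if p_disputed > 0.5 else 0)

def expected_quantity(certain_grams, disputed_grams, p_disputed):
    # Probabilistic model: disputed amount discounted by its probability.
    return certain_grams + p_disputed * disputed_grams

# Defendant A: 1,000 g found outright.
# Defendant B: 100 g found, plus 900 g inferred from cash with an
# assumed 0.75 probability.
a = threshold_quantity(1000, 0, 1.0)     # 1000 g
b_threshold = threshold_quantity(100, 900, 0.75)  # also 1000 g: same as A
b_expected = expected_quantity(100, 900, 0.75)    # 775 g: a lighter basis
print(a, b_threshold, b_expected)
```

On the threshold model, A and B face the identical quantity finding; on the probabilistic model, B's sentencing basis shrinks in proportion to the uncertainty about the inferred 900 grams.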

January 8, 2014 at 09:50 AM


Comments

Article contains 11th grade math. Lawyer math stops at the 4th grade level, that needed to count money. So 123D is better.

Posted by: Supremacy Claus | Jan 8, 2014 10:10:38 AM

How exactly is it "breaking new ground" to reveal that Federal sentencing uses a preponderance standard?

Posted by: Jonathan Edelstein | Jan 8, 2014 10:38:47 AM

Given Booker, I am not sure that federal sentencing still can be accurately described as based on a preponderance standard. Additionally, treating federal sentencing practices as a good description of sentencing practice overall is equally dubious.

However, an argument for a more relativistic/probability approach rather than the traditional yes/no approach in sentencing is somewhat different -- at least on the criminal side. Lost chance theory as a way to measure damages has had mixed results on the civil side.

Posted by: tmm | Jan 8, 2014 12:34:51 PM

If we can start with basic definitions, I would appreciate lawyer comment on how correct these are.

Preponderance means 51 percent likely.

Clear and convincing means two thirds likely, or 67 percent.

Beyond a reasonable doubt means 80 percent likely. Indeed, for every 5 executions a year there is one exoneration.

Beyond any doubt whatsoever means 99 percent certainty.

These make the designed-in error rate of the legal system appalling and a crime against humanity.

Imagine a mechanic or a barber or a doctor with those error rates. They would be arrested, not sued. The immunities, which are self-dealt, prevent any progress.

Posted by: Supremacy Claus | Jan 8, 2014 3:44:12 PM

Preponderance means 50.000000000000000001 percent (or however many zeroes you want to add in between the decimal point and the 1).

The other legal terms are not associated with a percent.

Because of that 50.0000000~1 percent in legal proceedings, a professional (doctor, mechanic, barber) can't be arrested or sued with a 49% chance that a decision was wrong (but such an error rate would undoubtedly lead to a decline in customers).

Posted by: tmm | Jan 8, 2014 6:34:23 PM
