
July 18, 2019

"Measuring Algorithmic Fairness"

The title of this post is the title of this new paper now available via SSRN authored by Deborah Hellman.  With risk assessment tools in use throughout the criminal justice system, including the risk-and-needs tool required by the FIRST STEP Act that is due out very soon, this discussion of "algorithmic fairness" caught my eye.  Here is its abstract:

Algorithmic decision making is both increasingly common and increasingly controversial.  Critics worry that algorithmic tools are not transparent, accountable or fair.  Assessing the fairness of these tools has been especially fraught as it requires that we agree about what fairness is and what it entails. Unfortunately, we do not.  The technological literature is now littered with a multitude of measures, each purporting to assess fairness along some dimension.  Two types of measures stand out.  According to one, algorithmic fairness requires that the score an algorithm produces should be equally accurate for members of legally protected groups, blacks and whites for example.  According to the other, algorithmic fairness requires that the algorithm produce the same percentage of false positives or false negatives for each of the groups at issue.  Unfortunately, there is often no way to achieve parity in both these dimensions.  This fact has led to a pressing question.  Which type of measure should we prioritize and why?

This Article makes three contributions to the debate about how best to measure algorithmic fairness: one conceptual, one normative, and one legal.  Equal predictive accuracy ensures that a score means the same thing for each group at issue.  As such, it relates to what one ought to believe about a scored individual.  Because questions of fairness usually relate to action not belief, this measure is ill-suited as a measure of fairness.  This is the Article’s conceptual contribution.  Second, this Article argues that parity in the ratio of false positives to false negatives is a normatively significant measure.  While a lack of parity in this dimension is not constitutive of unfairness, this measure provides important reasons to suspect that unfairness exists.  This is the Article’s normative contribution.  Interestingly, improving the accuracy of algorithms overall will lessen this unfairness. Unfortunately, a common assumption that antidiscrimination law prohibits the use of racial and other protected classifications in all contexts is inhibiting those who design algorithms from making them as fair and accurate as possible. This Article’s third contribution is to show that the law poses less of a barrier than many assume. 
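For readers who want a concrete sense of the two measures the abstract contrasts, here is a minimal Python sketch.  It is not drawn from the Article and uses made-up numbers; it simply computes, for each hypothetical group, how accurate a high risk score is (predictive accuracy) and how often the tool produces false positives and false negatives (error-rate balance):

# Illustrative sketch only; all scores, outcomes, and groups below are hypothetical.

def rates_by_group(scores, outcomes, groups, threshold=0.5):
    """Per-group positive predictive value, false positive rate, and false negative rate."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        preds = [scores[i] >= threshold for i in idx]
        actual = [outcomes[i] for i in idx]
        tp = sum(p and a for p, a in zip(preds, actual))
        fp = sum(p and not a for p, a in zip(preds, actual))
        fn = sum((not p) and a for p, a in zip(preds, actual))
        tn = sum((not p) and (not a) for p, a in zip(preds, actual))
        stats[g] = {
            # First measure: does a high score mean the same thing for each group?
            "ppv": tp / (tp + fp) if (tp + fp) else float("nan"),
            # Second measure: are false positive / false negative rates equal across groups?
            "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),
            "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),
        }
    return stats

# Hypothetical data: risk scores, observed outcomes (1 = reoffended), group labels.
scores   = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.8, 0.2]
outcomes = [1,   1,   0,   0,   1,   0,   0,   0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

for group, s in rates_by_group(scores, outcomes, groups).items():
    print(group, {k: round(v, 2) for k, v in s.items()})

When the underlying rates of the predicted outcome differ across groups, a tool generally cannot equalize both sets of numbers at once, which is the tension behind the abstract's observation that "there is often no way to achieve parity in both these dimensions."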

July 18, 2019 at 01:51 PM | Permalink
