AI Isn't Making The Criminal Justice System Any Smarter
from the BIAS-2.0 dept
We've covered the increasing reliance on tech to replace human judgment in the criminal justice system, and the news just keeps getting worse. The job of sentencing is being turned over to software owned by private contractors, which puts a non-governmental party between defendants and challenges to sentence length.
The systems being used haven't been rigorously tested and are just as prone to bias as the humans they're replacing. The system used by Washington, DC courts to sentence juvenile defendants has never been examined, yet it's still being used to determine how long a young person's freedom should be taken away.
This system had been in place for 14 years before anyone challenged it. Defense lawyers found nothing that explained the court's confidence in using it to sentence juveniles.
[I]n this particular case, the defense attorneys were able to get access to the questions used to administer the risk assessment as well as the methods of administering it. When they dug into the validity behind the system, they found only two studies of its efficacy, neither of which made the case for the system’s validity; one was 20 years old and the other was an unreviewed, unpublished Master’s thesis. The long-held assumption that the system had been rigorously validated turned out to be untrue, even though many lives were shaped due to its unproven determination of ‘risk’.
One system used in courts all over the nation is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), developed by Equivant (formerly Northpointe). COMPAS uses a set of questions to determine how much of the book gets thrown at defendants, drawing on data that reflects, and reinforces, the United States' carceral habits.
Northpointe’s core product is a set of scores derived from 137 questions that are either answered by defendants or pulled from criminal records. Race is not one of the questions. The survey asks defendants such things as: “Was one of your parents ever sent to jail or prison?” “How many of your friends/acquaintances are taking drugs illegally?” and “How often did you get in fights while at school?” The questionnaire also asks people to agree or disagree with statements such as “A hungry person has a right to steal” and “If people make me angry or lose my temper, I can be dangerous.”
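To make concrete what a questionnaire-driven score can look like in general, here's a minimal sketch. It is purely illustrative: COMPAS's actual questions, weights, and formula are sealed, so every feature name, weight, and cutoff below is an assumption invented for the example.

```python
# Purely illustrative sketch of a generic questionnaire-based risk score.
# None of this reflects COMPAS's actual (sealed) formula; the weights and
# cutoffs below are invented for demonstration.

EXAMPLE_WEIGHTS = {
    "parent_incarcerated": 2.0,      # "Was one of your parents ever sent to jail or prison?"
    "friends_using_drugs": 1.5,      # count of friends/acquaintances using drugs illegally
    "school_fights": 1.0,            # how often the defendant fought at school
    "agrees_hungry_may_steal": 0.5,  # agree/disagree attitude item
}

def risk_score(answers: dict) -> float:
    """Weighted sum of questionnaire answers (hypothetical formula)."""
    return sum(EXAMPLE_WEIGHTS[q] * float(answers.get(q, 0)) for q in EXAMPLE_WEIGHTS)

def risk_band(score: float) -> str:
    """Collapse the raw score into the low/medium/high label a judge actually sees."""
    if score < 3:
        return "low"
    if score < 6:
        return "medium"
    return "high"

# Example: one defendant's answers reduced to a single label.
defendant = {"parent_incarcerated": 1, "friends_using_drugs": 2, "school_fights": 1}
print(risk_band(risk_score(defendant)))  # -> "high"
```

Notice what even this toy version makes obvious: race is never asked, but questions about incarcerated parents, neighborhood drug use, and school discipline stand in for it anyway.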
The US locks up an alarming number of people every year, and an alarming percentage of them are black. Feed that history into a system trained to predict who should be locked up, and it will tell judges to keep hitting black defendants with longer sentences. It's a feedback loop no one can escape. Every new sentence handed down using these calculations adds more data telling the system it's "right."
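Here's a rough sketch of that loop, using made-up numbers. The point isn't the specific figures; it's that a tool scoring "risk" from past incarceration data keeps the original disparity locked in, because its own outputs become the next round of data.

```python
# Toy simulation of the feedback loop described above. All numbers are invented.

history = {"group_a": 100, "group_b": 300}   # skewed starting data: past incarcerations
ARRESTS_PER_YEAR = 200                       # hypothetical yearly arrests scored by the tool

def risk(group, data):
    # "Risk" here is simply the group's share of past incarcerations.
    return data[group] / sum(data.values())

for year in range(5):
    scores = {g: risk(g, history) for g in history}
    for g, s in scores.items():
        # Sentencing follows the score, and the resulting sentences flow
        # straight back into the historical data the score is built from.
        history[g] += round(ARRESTS_PER_YEAR * s)
    print(year, {g: round(risk(g, history), 2) for g in history})
# The 25% / 75% split never moves, no matter how the groups actually behave.
```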
Not only does the "improved" system introduce its own algorithmic biases, those biases are no better than the human ones it's replacing. And the system has been proven wrong repeatedly: it spits out lower recidivism risk scores for white defendants, only to have those defendants commit more crimes in the future than their black counterparts -- even when black people arrested for the same criminal activity have been given considerably higher risk scores by COMPAS.
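One common way researchers have quantified this is to compare false positive rates by group: how often each group is labeled high-risk and then does not reoffend. The sketch below runs that check on a handful of invented records; real analyses use thousands of actual cases.

```python
# Sketch of a disparate-error-rate check on made-up records.
# Each record: (group, labeled_high_risk, reoffended)

records = [
    ("black", True,  False),
    ("black", True,  True),
    ("black", True,  False),
    ("black", False, False),
    ("white", False, True),
    ("white", False, False),
    ("white", True,  True),
    ("white", False, True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in the group who were labeled high-risk anyway."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("black", "white"):
    print(g, round(false_positive_rate(g), 2))
# With these invented records: black 0.67, white 0.0, the pattern described above.
```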
That's not the only problem. Since the software is privately owned, defense lawyers and researchers have been unable to examine it. You may be able to challenge a sentence based on sentencing data (if you can even manage to get that), but you won't be able to attack the software itself, because it wasn't developed by the government.
Equivant doesn’t have to share its proprietary technology with the court. “The company that makes COMPAS has decided to seal some of the details of their algorithm, and you don’t know exactly how those scores are computed,” says Sharad Goel, a computer-science professor at Stanford University who researches criminal-sentencing tools. The result is something Kafkaesque: a jurisprudential system that doesn’t have to explain itself.
The new way gives us the same results as the old way, but it can't be examined. It can only be questioned, and that's not really getting anyone anywhere. A few sentences have been challenged, but every day it remains in use, COMPAS keeps generating risk scores that lengthen sentences for "risky" defendants. And those sentences go right back into the database, confirming the software's biases.
Filed Under: ai, compas, criminal justice
Companies: equivant, northpointe