Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals
from the fresh-hells-delivered-daily dept
Given all we know about facial recognition tech, it is literally jaw-dropping that anyone could make this claim… especially a claim that hasn't been independently vetted.
A group of Harrisburg University professors and a PhD student have developed an automated computer facial recognition software capable of predicting whether someone is likely to be a criminal.
The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.
There's a whole lot of "what even the fuck" in CBS 21's reprint of a press release, but let's start with the claim about "no racial bias." That's a lot to swallow when the underlying research hasn't even been released yet. Let's see what the National Institute of Standards and Technology has to say on the subject. This is the result of NIST's examination of 189 facial recognition algorithms -- all far more established than whatever it is the Harrisburg researchers have cooked up.
Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.
The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.
Why is this acceptable? The report inadvertently supplies the answer:
Middle-aged white men generally benefited from the highest accuracy rates.
Yep. And guess who's making laws or running police departments or marketing AI to cops or telling people on Twitter not to break the law or etc. etc. etc.
To craft a terrible pun, the researchers' claim of "no racial bias" is absurd on its face. Per se stupid af, to use the legal terminology.
Moving on from that, there's the 80% accuracy, which is apparently good enough since it will only threaten the life and liberty of 20% of the people it's inflicted on. I guess if it's the FBI's gold standard, it's good enough for everyone.
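To see why "80% accuracy" is a red flag rather than a selling point, run some hypothetical numbers. The press release doesn't say whether that 80% is sensitivity, specificity, or something else entirely, so let's be charitable and assume it's both -- and assume (another made-up number) that 1% of the scanned population are actually "criminals." A back-of-the-envelope sketch in Python:

    population = 100_000
    base_rate = 0.01     # assumption: 1% of the scanned population are "criminals"
    sensitivity = 0.80   # assumption: "80% accuracy" means 80% on both measures
    specificity = 0.80

    actual_positive = population * base_rate         # 1,000 "criminals"
    actual_negative = population - actual_positive   # 99,000 everyone else

    true_positives = sensitivity * actual_positive           # 800
    false_positives = (1 - specificity) * actual_negative    # 19,800

    flagged = true_positives + false_positives
    print(f"flagged as 'criminal': {flagged:,.0f}")                        # 20,600
    print(f"flagged people who actually are: {true_positives / flagged:.1%}")  # ~3.9%

Under those assumptions, roughly 96% of the people this thing flags would be innocent. An "accuracy" number tells you almost nothing without the base rate attached.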
Maybe this is just bad reporting. Maybe something got copy-pasted wrong from the spammed press release. Let's go to the source… one that somehow still doesn't include a link to any underlying research documents.
What does any of this mean? Are we ready to embrace a bit of pre-crime eugenics? Or is this just the most hamfisted phrasing Harrisburg researchers could come up with?
A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.
The most charitable interpretation of this statement is that the wrong-20%-of-the-time AI is going to be applied to the super-sketchy "predictive policing" field. Predictive policing -- a theory that says it's ok to treat people like criminals if they live and work in an area where criminals live -- is its own biased mess, relying on garbage data generated by biased policing to give racist police work an AI-blessed "work smarter, not harder" veneer.
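If that sounds abstract, here's a toy simulation (entirely hypothetical numbers, not any real deployment) of the feedback loop critics keep pointing out: feed a model arrest data shaped by where officers already patrol, and the initial skew never washes out, no matter how evenly the underlying crime is distributed.

    import random
    random.seed(0)

    true_rate = {"A": 0.05, "B": 0.05}  # identical underlying offense rates
    patrols = {"A": 60, "B": 40}        # biased initial patrol allocation
    arrests = {"A": 0, "B": 0}

    for day in range(365):
        for district in ("A", "B"):
            # Arrests scale with patrol presence, not with actual crime.
            arrests[district] += sum(
                random.random() < true_rate[district]
                for _ in range(patrols[district])
            )
        # "Predictive" reallocation: send patrols where the arrests are.
        total = arrests["A"] + arrests["B"]
        if total:  # avoid divide-by-zero on an arrest-free first day
            patrols["A"] = round(100 * arrests["A"] / total)
            patrols["B"] = 100 - patrols["A"]

    print(arrests, patrols)
    # District A ends the year looking like a hotspot -- an artifact of
    # where the patrols started, not of how much crime occurred.

The arrest data "confirms" District A is the problem, and the model dutifully keeps it that way.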
The question about "likely" is answered in the next paragraph, somewhat assuring readers the AI won't be applied to ultrasound images.
With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.
There's a big difference between "going to be" and "is," and researchers using actual science should know better than to use both phrases to describe their AI efforts. One means scanning someone's face to determine whether they might eventually engage in criminal acts. The other means matching faces to images of known criminals. They are far from interchangeable terms.
If you think the above quotes are, at best, disjointed, brace yourself for this jargon-fest which clarifies nothing and suggests the AI itself wrote the pullquote:
“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”
"Minute features in an image that are highly predictive of criminality." And what, pray tell, are those "minute features?" Skin tone? "I AM A CRIMINAL IN THE MAKING" forehead tattoos? Bullshit on top of bullshit? Come on. This is word salad, but a salad pretending to be a law enforcement tool with actual utility. Nothing about this suggests Harrisburg has come up with anything better than the shitty "tools" already being inflicted on us by law enforcement's early adopters.
I wish we could dig deeper into this, but we'll all have to wait until this excitable group of clueless researchers decides to publish its findings. According to this site, the research is being sealed inside a "research book," which means it will take a lot of money to actually prove this isn't any better than anything that's been offered before. This could be the next Clearview, but we won't know until the research is published. If we're lucky, that will happen before Harrisburg patents this awful product and starts selling it to all and sundry. Don't hold your breath.
Filed Under: ai, artificial intelligence, bias, facial recognition, precrime
Companies: harrisburg university