Detroit PD Now Linked To Two Bogus Arrests Stemming From Facial Recognition False Positives
from the no-problem,-it's-just-rights-and-freedom dept
Late last month, the first known false arrest linked to facial recognition software was reported. But the agency behind that dubious first in AI police work now appears to be a repeat offender. There have been two bogus arrests linked to facial recognition false positives, and both were performed by the same law enforcement agency: the Detroit Police Department. Elisha Anderson of the Detroit Free Press has the details on the first blown call by the PD's software.
The high-profile case of a Black man wrongly arrested earlier this year wasn't the first misidentification linked to controversial facial recognition technology used by Detroit Police, the Free Press has learned.
Last year, a 25-year-old Detroit man was wrongly accused of a felony for supposedly reaching into a teacher’s vehicle, grabbing a cellphone and throwing it, cracking the screen and breaking the case.
Detroit Police used facial recognition technology in that investigation, too.
This man, Michael Oliver, was charged with larceny for the May 2019 incident he didn't actually participate in. The report by the Free Press contains photos of both the person caught on the phone's camera and Michael Oliver. They highlight one major problem with facial recognition software: even if one could be persuaded the two faces are a close match (and they don't appear to be), the recording used by investigators to search for a match showed the suspect's bare arms. The person committing the crime had no tattoos. Michael Oliver's arms are covered with tattoos, running from the top of his hands all the way up to his shoulders.
The facial recognition software delivered its mistake to investigators, who included this mismatch in the photos they presented to the person whose phone had been grabbed.
During the investigation, police captured an image from the cellphone video, sent it for facial recognition and the photo came back to Oliver, the police report said.
After Oliver was singled out, a picture of his face was included in a photo lineup of possible suspects that was presented to the teacher.
A second person, a student, was also captured in the video with the suspect. The officer in charge of the case testified he didn’t interview that person though he'd been given that student’s name.
Once again, the Detroit PD and local prosecutors are promising the thing that has already happened twice won't happen again. There are new processes in place, although it's unclear when those policies went into effect. Oliver was arrested late last year. The other bogus arrest occurred earlier this year. In both cases, reporters were assured by law enforcement spokespeople that things have changed.
Here's what officials say is now in place, even though it's too little, too late for two Black men arrested and charged with crimes they didn't commit. There are "stricter rules" in effect. Matches returned by the system are not probable cause for anything; they can only be used as "investigative leads." Supposedly, this software will now only be used to identify people wanted for violent felonies.
Prosecutors are doing things a bit differently, too. But it's a reaction rather than a proactive effort. It's only now -- after two false arrests -- that the prosecutor's office is mandating review of all facial recognition evidence by the city's top prosecutor, Kym Worthy. Investigators must also produce corroborating evidence before seeking to arrest or charge someone based on a facial recognition match.
This seems unlikely to change anything. Outside of the limitation to violent crimes, both of these cases could have gone through the review process -- along with the limited corroborating evidence (in both cases, crime victims picked the AI's mismatches out of a lineup) -- and still resulted in the arrest of these two men. In both cases, investigators ended their investigations after this step, even though they were given the opportunity to interview other witnesses. If corroboration is nothing more than discovering humans are just as bad at identifying people as the PD's software is, the mistakes the PD claims will never happen again will keep on happening.
Filed Under: detroit, detroit police department, facial recognition, false arrest, michael oliver
Reader Comments
I thought the previous article explicitly said this was already the case. I don't see how "pounding the book" is going to actually make people listen any more than they did the first time.
So, this will never happen again... but only because the "attempted arrest" will always "lead" to violent shoot outs with the 'violent felon' dead while resisting arrest. /s
[ link to this | view in thread ]
Pretty bad
Wow, that software is really bad at matching African Americans. Those two aren't remotely similar other than both being black men (and even then, they aren't even the same shade).
[ link to this | view in thread ]
Re: Pretty bad
Is it a software problem, or is facial recognition being used like drug dogs -- as probable cause for police when they want to hassle someone?
[ link to this | view in thread ]
The software could be 100% free of problems (which it isn’t and won’t ever be), and police could still use it as a tool of racism by feeding the software a database of faces with a heavy bias towards people of color.
[ link to this | view in thread ]
Re: Pretty bad
"Those two aren't remotely similar other than both being black men (and even then, they aren't even the same shade)."
Ah, but you know...every <fill in ethnic slur> looks the same - according, apparently, to certain computer algorithms.
Who says there's no progress? You can now program racism. /s
[ link to this | view in thread ]
Cause and symptom
Well, it's sort of a case for affirmative action. If 95% of the people in a computer vision lab happen to be Caucasian, the computer vision is going to be good at discriminating Caucasians. If 95% of the officers in a police department happen to be Caucasian, the officers are going to be good at discriminating Caucasians. Because in one's private life, people tend to have contacts whose facial features and expressions they are familiar with, and those are quite often of similar outward features to themselves.
So even if you want to stipulate some correlation between competence and outward appearance (whether attributing it to hereditary features or a historic difference in access to education), a homogenous research community will fall short of finding appropriate solutions for a heterogenous population.
[ link to this | view in thread ]
Re: Re: Pretty bad
Hey now, being unable to accurately distinguish between two members of a species/race is not racist (or speciesist).
I can't tell some cats apart, and I like cats. If the computer doesn't have enough exposure to all races of humanity to tell any two humans apart (I'm not sure I can do that either, now that I think about it), that's not a sign of racism/prejudice/etc on the part of the computer (even assuming we could ascribe such things to a computer).
In other words: to "program racism" you need something like malice. So we aren't quite there yet.
(obviously the people reading the computer's results are providing their own "spins")
[ link to this | view in thread ]
Something is happening
People, especially police, are being trained to do whatever a machine tells them to do.
[ link to this | view in thread ]
To program prejudice, all anyone would need is a prejudiced database of input. If the people who generate the info for that database (e.g., the cops) happen to hold a malicious prejudice towards certain groups of people? Well gee, I wonder what that would do to the input database~.
[ link to this | view in thread ]
Nothing new to see here...well, you know, besides the systemic racism.
This has been going on as a pilot project for years, only there it was called the DMCA, where private individuals and companies have taken automated sleuthing and elevated it to an error-prone art form. Could we have imagined that the pros would have done any better?
Truth is that we're still at least a decade or two away from any of these systems working within a 10% error rate (which still isn't close to being good enough to be relied upon for major cases), and shouldn't be using people's liberties as some kind of grand beta test. It's what you get when you confuse a tool with a process, and right now, these automated technologies are just tools that need to be baby-sat by human beings.
You know, kinda like the people using them.
Oh, and one other thing:
And how is this better? So, instead of accidentally accusing somebody of a minor crime, now we're going to ruin innocent people's lives for the big stuff? Go big or go home, I guess.
[ link to this | view in thread ]
Re: Something is happening
They trained the machine to tell them that.
[ link to this | view in thread ]
Re:
So if I understand you argument correctly (and feel free to say I don't because its doesn't really make sense in context): it is that being feed invalid inputs often malicious intent, as predicates confers that that malice to the one who rationalizes based on the predicates?
Because in the context of the previous post that's the only way this rebuttal makes sense.
[ link to this | view in thread ]
Re: Something is happening
Well, the problem with racial profiling is that it works. Because of the way the tables are tilted in society. Prejudice tends to be more often right than wrong.
The problem is not so much that it works as that it cements the status quo by rendering any attempt at progress futile, because people are not rewarded or punished for their individual achievements but on the basis of the statistics of the group they share their visuals with.
Training neural networks for artificial intelligence based recognition and prediction is just fancy talk for outsourcing prejudice and making it someone else's problem.
If you have a subset of the population with 0.5% crime rate and a subset of the population with 0.4% crime rate, the most efficient manner of finding criminals is looking only in the group with the crime rate of 0.5%.
So far, so bad. But then we have the additional complication of the U.S. justice system that will lead to "successful" felony arrests predominantly in population groups of limited financial resources since any made-up charge without sufficient evidence can be handled on a "bankruptcy through defense cost, or plea deal" basis that leads to a disproportionally high "conviction" rate for citizens of limited means.
Again, this outcome is material that the computers are getting trained with, making them an important tool in giving a scientific varnish to prejudice-governed maintenance of the status quo that keeps "social mobility" down and delivers statistics proving "success".
I'd like some "law-and-order" guys to focus on low crime rates rather than high incarceration rates as a measure of their success.
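To make the loop concrete, here's a toy calculation (hypothetical numbers only, nothing to do with any real predictive system) built on the 0.5% vs. 0.4% example above. Enforcement "attention" is reallocated every year in proportion to last year's arrest counts, and arrest counts depend on how closely each group is watched as well as on actual offending:

# Toy feedback-loop sketch in plain Python, with made-up numbers.
TRUE_RATE = {"group_a": 0.005, "group_b": 0.004}   # the 0.5% / 0.4% example
POPULATION = 100_000                               # people in each group
attention = {"group_a": 0.5, "group_b": 0.5}       # start with even patrols

for year in range(1, 11):
    # Arrests scale with real offending AND with how hard each group is policed.
    arrests = {g: TRUE_RATE[g] * POPULATION * attention[g] for g in attention}
    # "Retraining": next year's attention mirrors this year's arrest statistics.
    total = sum(arrests.values())
    attention = {g: arrests[g] / total for g in arrests}
    print(year, {g: round(share, 2) for g, share in attention.items()})

After ten iterations the sketch directs roughly 90% of its attention at the first group, even though the assumed underlying difference was only 0.5% versus 0.4% -- and the arrest statistics it generates then "prove" the allocation was justified.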
[ link to this | view in thread ]
Your question is so grammatically challenged that I can’t parse it. Ask a clearly understandable question and I’ll give you an answer.
[ link to this | view in thread ]
Abusing the 'R' word
Sorry, I don't see racism in the actions of the police in the case stated in the article. That doesn't mean that the Detroit PD isn't racist, nor that the software is not racist. Members of the Detroit PD might be stupid and/or lazy, as there were clues as to who the perpetrator was and they did not follow them nor include them when ID'ing the 'culprit'. They brought in the wrong man when they could have easily eliminated him.
The fact is that the person who did the crime was black. It isn't racist to go after a black person when a black person actually did the crime. Nor is going after the wrong black person. Wrong, yes, but not racist.
[ link to this | view in thread ]
Re:
you are correct. I didn't read it through again after making changes, so here's the question with the grammar clean up:
So if I understand you argument correctly (and feel free to say I don't because its doesn't really make sense in context): it is that being feed invalid and malicious input as predicates confers that that malice to the one who rationalizes based on the predicates?
Because in the context of the previous post that's the only way this rebuttal makes sense.
[ link to this | view in thread ]
Re: Re:
That really wasn't much clearer but I'll respond anyhow:
If the data you feed into facial recognition software during training includes 90% white people and 10% black people, the computer will not have enough information to accurately identify black people. This will result in misidentification of anyone whose skin color is dark.
The bias is in how the AI is trained, not the AI itself.
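To make that 90/10 point concrete -- purely as a toy sketch with made-up numbers, not a description of how any real facial recognition product is built -- here's a small numpy simulation. PCA stands in for whatever feature extractor a real system learns: because it's fitted on a training set that is 90% group A, the features it keeps are the ones that distinguish group-A faces from each other, and group-B faces end up nearly indistinguishable.

# Hypothetical sketch: a feature extractor trained on a 90/10 mix of two groups.
import numpy as np

rng = np.random.default_rng(42)
LATENT = 20              # dimensionality of the synthetic "face space"
A_DIMS = slice(0, 10)    # directions in which group-A identities differ
B_DIMS = slice(10, 20)   # directions in which group-B identities differ
PHOTO_NOISE = 0.3        # photo-to-photo variation around each "true" face

def identities(n, dims):
    """n identity vectors that vary only along the given dimensions."""
    ids = np.zeros((n, LATENT))
    ids[:, dims] = rng.normal(size=ids[:, dims].shape)
    return ids

def photos(ids, per_identity):
    """Noisy 'photos' of each identity."""
    reps = np.repeat(ids, per_identity, axis=0)
    return reps + rng.normal(scale=PHOTO_NOISE, size=reps.shape)

# Training set: 90% group-A photos, 10% group-B photos.
train = np.vstack([photos(identities(300, A_DIMS), 3),
                   photos(identities(33, B_DIMS), 3)])

# "Train the recognizer": keep the 10 highest-variance directions (PCA via SVD).
train_mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - train_mean, full_matrices=False)
components = vt[:10]     # the only facial differences the system learned to see

def error_rate(dims, n_identities=100):
    """Nearest-neighbor identification error in the learned feature space."""
    ids = identities(n_identities, dims)
    gallery = (photos(ids, 1) - train_mean) @ components.T
    probes = (photos(ids, 1) - train_mean) @ components.T
    d = np.linalg.norm(probes[:, None, :] - gallery[None, :, :], axis=2)
    return float((d.argmin(axis=1) != np.arange(n_identities)).mean())

print("group A (90% of training data):", error_rate(A_DIMS))
print("group B (10% of training data):", error_rate(B_DIMS))

In this toy setup the well-represented group comes out with a tiny error rate while the under-represented group lands near chance -- not because the matching code treats the groups differently, but because the training mix decided which facial differences the system ever learned to see.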
[ link to this | view in thread ]
Re:
I think it all comes down to what you mean by "racist". For many (most?) the term implies malice as well as bias. Undoubtedly the computer program is racially biased, but you can't really ascribe malice to a mere program, at least not yet.
On the other hand, there is no doubt some white Americans are/were deeply racially biased but held no actual malice. They simply, honestly, believed that people with darker skin were morally and intellectually inferior to their white neighbours and friends. So, a question to those who do not think the computer program is racist - are such white people racist? I would think most would agree that they are. If so, why is the computer program not racist?
[ link to this | view in thread ]
Re:
Pounding the book is actually quite effective when said book is pounded upon said person in question requiring pounding.
[ link to this | view in thread ]
Re: Abusing the 'R' word
The racism is in the facial recognition AI's inability to properly identify people of color. Or more specifically, the racism is in the training of that AI. The programmers didn't bother to provide enough photos of black people for the software to learn how to properly identify black people. So the AI makes lots of mistakes with non-white images and the police just say "there's our guy, go arrest him" instead of doing actual police work.
I wonder how much more thorough they are with following up on white people identified by this same software. Even with white people this software has a terrible track record.
[ link to this | view in thread ]
An algorithm can’t say “you’re being racist right now” when it receives racially biased data. The algorithm isn’t sentient; it can’t detect and point out racism in its input data and refuse to process it, regardless of the intent of those who generate and input the data.
[ link to this | view in thread ]
Re: Re: Re:
"The bias is in how the AI is trained, not the AI itself."
Indeed, this seems to be a problem across the board. As another example, there were lots of complaints from Scotland and some other areas that voice recognition didn't understand them most of the time. The reason for that is not that Apple and Google hate the Scottish. It's that the devices were initially trained on more neutral accents. The reason for that might be as simple as there not being a lot of Scots in the training groups. This has apparently improved greatly, likely in part due to the AI encountering more people with those accents in the wild. Which is to be expected. I laughed a little when I first read that Trainspotting had to be redubbed to make the accent easier for Americans, but the reason I understood the original audio and many Americans did not was because I'd been exposed to a wider range of local accents when I was growing up.
The issue is getting representative and diverse sets of data to train on, not necessarily a bias in the system itself.
[ link to this | view in thread ]
Re: Re: Re:
Yes, 'bias', not 'racist'.
I admit that I am biased in that I can distinguish certain types of cats better than others. I wouldn't say I am "cat racist".
Of course I already covered that.
[ link to this | view in thread ]
SSDD
I wonder if this will work as well as it has in San Francisco?
From yesterday:
[ link to this | view in thread ]
Pffffew
good thing "false positives" don't happen with Corona testing...YaoMing?
[ link to this | view in thread ]
Re: Pffffew
Just two arrests? That's pretty good. What's the problem?
[ link to this | view in thread ]
Re: Re: Pffffew
Arrested solely on a photo match; at least one should have been rejected by the officer comparing the images. That's not police work, that is outright racism.
[ link to this | view in thread ]
facial tattoos
I do believe we've actually found a reason to argue that a 'facial tattoo' is a good thing...
Until folks realize they could use a temp-tat to frame someone!
Here's what really gets me... doubles are a thing in film (although less important as video editing becomes easier and less noticeable)... why? Because it's very likely that there are people who look similar enough to just about everyone out there, unless they have some truly distinctive feature that couldn't be replicated... so it's not going to be 100% even where you have flawless photos/video to feed in... why do some people keep thinking it's a solve-all???
[ link to this | view in thread ]
Re: Abusing the 'R' word
The fact is that the person who did the crime was black. It isn't racist to go after a black person when a black person actually did the crime. Nor is going after the wrong black person. Wrong, yes, but not racist.
If that was all there was to it then no, it wouldn't have been. Shoddy programming and a mistaken positive, yes, but racist, not so much. That, however, was not all there was to it, as the photo used for the identification clearly showed the suspect's exposed and tattoo-free arms, which means the second they saw that the person they thought was responsible had arms positively covered in tattoos, he should have been dismissed as a suspect immediately.
At that point the only potential reasons to continue to blame the innocent guy are all bad ones, as if it wasn't 'he's black, he's got to be guilty of something' racism you're looking at indifferent corruption, where they stuck with the wrong guy not because they were racist but because they were too damn lazy to try again and simply didn't care that they had the wrong person.
[ link to this | view in thread ]
Not always nice to be right...
They highlight one major problem with facial recognition software: even if one could be persuaded the two faces are a close match (and they don't appear to be), the recording used by investigators to search for a match showed the suspect's bare arms. The person committing the crime had no tattoos. Michael Oliver's arms are covered with tattoos, running from the top of his hands all the way up to his shoulders.
Ignored glaring inconsistencies, charged the original suspect anyway... as I read, I couldn't help but think 'I bet he's black,' and after checking the comparison: yup, he is.
Two bogus arrests tied to facial recognition software, both by the same department, and both involving black men, truly the most stunning of coincidences.
[ link to this | view in thread ]
Re: Pffffew
False positives in corona testing exist, as they will with any test. What doesn't lie is the death rate, which is steadily climbing in the US, consistent with the rise in infections over the last few weeks.
[ link to this | view in thread ]
Re: Re: Pffffew
Well, only one thing to do then: Stop testing and stop keeping track of trivial details like 'how many hundreds/thousands of people died this week from a disease that other countries have on the ropes?', since both of those make the Dear Leader look bad and as such are nothing less than pure heresy to the Cult of Trump.
[ link to this | view in thread ]
Re: Re: Re: Pretty bad
"Hey now, being unable to accurately distinguish between to members of a species/race Is not racist (or specesist)."
Correct. As far as it goes.
The issue crops up when the facial recognition software has a 90%+ accuracy rate on non-black faces but can't tell Prince from Oprah Winfrey.
I'm inclined to think the actual malice is more in the part where a company peddles a product they damn well know doesn't work and fails to specify in what way their product doesn't work.
But incidental or not, the software is...extra problematic when it comes to identifying non-white faces.
[ link to this | view in thread ]
Re: Re:
"...it is that being feed invalid and malicious input as predicates confers that that malice to the one who rationalizes based on the predicates?"
Good grief, man. A simple "So you mean; Garbage In, Garbage Out?" would have sufficed. 😅
There is a multitude of issues surrounding the fact that not only is facial recognition tech inherently flawed, it's also extra flawed when it comes to non-white faces where it often suddenly can't even tell whether it's looking at a man or woman.
We could apply Hanlon's Razor and claim the manufacturers of the software were so inept they only used white people while programming the markers.
...or we could claim malice in that they weren't that inept but launched the product they knew to be dysfunctional anyway and refused to include the known bugs in the documentation.
The latter may not be racist per se but certainly produces a racial/ethnic bias as a direct result.
[ link to this | view in thread ]
Re:
"And how is this better? So, instead of accidentally accusing somebody of a minor crime, now we're going to ruin innocent peoples' lives for the big stuff?"
My favorite example here is that crowded airport where local heavily armed peacekeepers and TSA guards suddenly get told a security camera just spotted Attila The Well-Armed Mass Murderer hauling a heavy suitcase into the seething throng of arrivals. And ten minutes later everyone goes "oops" when they check the ID and find out the guy they just shot five times in the head is an innocent father on his way to pick up his family - whose lives will now be colored by the day they caught bits of their dad's or husband's brain with their faces.
"Go big or go home, I guess."
You're assuming a guy "identified" as an armed murderer, for instance, gets to "go home" rather than to the morgue?
[ link to this | view in thread ]
Re: Pffffew
"good thing "false positives" don't happen with Corona testing...YaoMing?"
There's a bit of difference between "Hmm, you might want to go to hospital for a checkup and treatment" and "He's armed and dangerous, if he even twitches, don't stop shooting until you're real sure he's dead".
The non-insane know this of course and don't try to compare these two situations to start with.
And then there are the real batshit crazy people who believe that fewer tests will help beat covid because, in their minds, what you don't know of won't kill you.
[ link to this | view in thread ]
Re: Re:
"For many (most?) the term implies malice as well as bias."
It's one of those slightly fuzzy borders. If you have a team of programmers who build a facial recognition program which is completely unable to properly differentiate between black people - then you have an issue.
Hanlon's razor says they've been inept enough not to realize there are more than three different shades of "pasty white" the program needs to recognize markers for.
And if Hanlon's razor breaks its edge and malice must be assumed it's more probable that the malice lies in launching a product the dev team knew didn't meet the demanded criteria.
"...are such white people racist? I would think most would agree that they are. If so, why is the computer program not racist?"
What you are talking about is systemic racism. The program is about as proper an example of that as, say, a law which mandates that a black person be seated at the back of the bus.
Most people like to link racism to malice. But that's flawed reasoning. You need no active malice to make the lives of a minority disproportionately crappier than those of the majority. You just need casual ignorance and a missing political will to even the scales.
[ link to this | view in thread ]