Detroit Skating Rink Horns In On Detroit PD's Facial Recognition Gaffe Racket, Denies Teen Girl Opportunity To Skate

from the even-if-our-software-is-wrong,-we-will-still-enforce-its-decision dept

It looks like Detroit, Michigan is trying to corner the market on bad facial recognition tech. The city's police department is already associated with two false arrests based on bad matches by facial recognition software. This latest news, via Techdirt reader Jeffrey Nonken, shows mismatches aren't just limited to the public sector.

A Black teenager in the US was barred from entering a roller rink after a facial-recognition system wrongly identified her as a person who had been previously banned for starting a fight there.

Lamya Robinson, 14, had been dropped off by her parents at Riverside Arena, an indoor rollerskating space in Livonia, Michigan, at the weekend to spend time with her pals. Facial-recognition cameras installed inside the premises matched her face to a photo of somebody else apparently barred following a skirmish with other skaters.

The teen told staff it couldn't possibly be her since she had never visited the venue before. But it didn't matter to the management of the rink, which is located in a Detroit suburb. She was asked to leave and now her parents are considering suing the rink over the false positive. Fortunately, no one at the rink felt compelled to call the police, which likely wouldn't have helped anything considering local law enforcement's track record with faulty facial recognition search results.

As for Riverside Arena, it's apologetic but not exactly helpful. Management claims deploying facial recognition tech on patrons is part of the "usual" entry process. It also unhelpfully explained that it's "hard to look into things when the system is running," which I suppose means that's why no one could double-check the match while Robinson was still there contesting the search results.

Being wrong some of the time is also good enough for non-government work, apparently.

"The software had her daughter at a 97 percent match. This is what we looked at, not the thumbnail photos Ms. Robinson took a picture of, if there was a mistake, we apologize for that."

Obviously, there was a mistake. So, this should have just been an apology, not a half-hearted offer of an apology if, at some point in the future, someone other than the people directly affected by this automated decision steps forward to declare the mismatch a mismatch.

This is the other side of the facial recognition coin: private sector use. This is bound to result in just as many mismatches as government use does, only with software that's perhaps undergone even less vetting and without any direct oversight outside of company management. False positives will continue to be a problem; how expensive a problem remains to be seen. But since private companies are free to choose who gets to use their services, lawsuits probably won't be much of a deterrent to deploying unvetted software that will keep the wrong people out and, perhaps more disturbingly, let the wrong people in.



Filed Under: detroit, facial recognition, livonia, michigan, skating rink
Companies: riverside arena


Reader Comments



  1. Anonymous Coward, 29 Jul 2021 @ 4:02pm

    Oh look, casual racism being hidden as a technological mistake. Shocking.


  2. David, 29 Jul 2021 @ 4:12pm

    "The software had her daughter at a 97 percent match."

    There is no such thing as "at a 97 percent match". That is a meaningless expression: it doesn't say what is being matched to what, with what underlying statistic.

    More likely than not it isn't even based on an actual statistic, but is just a number the programmers made up to feel more tangible.

    Facial recognition tends to fare considerably worse with black skin, particularly in less than optimal lighting: that's just a matter of different contrast. If 3% of the patrons are black girls and the facial recognition cannot distinguish any further than that, you can claim a 97% match, for example.

    And the software does not even need to be able to distinguish a black girl from a calico cat or a bike shed because the latter are not in the training and test sets.
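
To make the comment's point concrete, here is a minimal sketch (all names, sizes, and numbers are invented; nothing here reflects Riverside Arena's actual vendor) of how embedding-based face matchers typically score candidates. The "percent" a vendor quotes is usually a rescaled similarity score, not a probability that the two faces belong to the same person:

```python
# Hypothetical embedding-based matcher: faces become vectors, and candidates
# are ranked by cosine similarity. The headline "97%" is just a score.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.normal(size=128)             # embedding of the person at the door
gallery = rng.normal(size=(500, 128))    # embeddings of the banned list

scores = np.array([cosine_similarity(probe, g) for g in gallery])
best = int(scores.argmax())

# Quoting the top score as "97 percent" hides the two numbers that matter:
# how the score maps to real error rates, and how close the runners-up are.
runner_up = float(np.sort(scores)[-2])
print(f"best 'match': #{best} at {scores[best]:.2f}; runner-up at {runner_up:.2f}")
```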


  3. Anonymous Coward, 29 Jul 2021 @ 4:43pm

    Re: "The software had her daughter at a 97 percent match."

    It was a 97 percent match to bad data.


  4. Anonymous Coward, 29 Jul 2021 @ 4:47pm

    i'm sure a skating rink's investment in farcical recognition is really going to pay off. "wait, wait, we really need to justify the expense of this dumpster fire."


  5. Anonymous Coward, 29 Jul 2021 @ 5:22pm

    Re: Re: "The software had her daughter at a 97 percent matc

    Data is not good, or bad. It is data.

    The conclusions you draw from it may be good, or bad. The quality of the data compared to your needs may be good, or bad.

    But data doesn't go out saving lives, or shooting people.
    Except on reruns of Star Trek, I guess.


  6. Anonymous Coward, 29 Jul 2021 @ 5:25pm

    Re: great work

    I'm'a thinking comments like yours should be Actually Nuked, to avoid your links getting any SEO benefit from your crap. Maybe then you'd move on.


  7. Anonymous Coward, 29 Jul 2021 @ 6:54pm

    Re: Re: Re: "The software had her daughter at a 97 percent match"

    Collecting data in an incomplete manner to bias the end results of the statistical analysis is very much a real thing. What would you call that kind of data if it's not bad?

    Police send officers to black neighborhood --> cops arrest black people --> department enters these arrests into something like COMPSTAT --> department now has "data" that shows the black neighborhoods have high crime rates --> deploy even more officers to black neighborhoods now that they have been deemed hotspots of crime.

    It's not very hard to launder racism through "data" and "technology".
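
A toy simulation of the loop described above, with invented numbers: both areas have the same underlying offense rate, but patrols are allocated in proportion to past arrest counts, so the initial skew in the "data" never washes out:

```python
# Illustrative only: patrol allocation follows recorded arrests, and recorded
# arrests scale with patrols present, not with the (identical) true rates.
import random

random.seed(1)
TRUE_RATE = 0.05                # identical underlying offense rate everywhere
arrests = {"A": 10, "B": 5}     # area A starts with more recorded arrests

for year in range(10):
    total = sum(arrests.values())
    for area in arrests:
        patrols = int(100 * arrests[area] / total)   # patrols follow the "data"
        arrests[area] += sum(random.random() < TRUE_RATE for _ in range(patrols))

print(arrests)   # area A keeps "proving" it needs more patrols than area B
```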


  8. PaulT (profile), 29 Jul 2021 @ 11:41pm

    This is the real problem with facial recognition - not the underlying tech, but the way in which credulous fools deal with the results.

    It should be easy - a 97% match doesn't really prove anything. It could be the same person in different lighting. It could be another 14 year old girl who happens to share some similar features (are these actually calibrated for minors, or are they likely to get more false positives, I wonder? We already know there's unintended racial bias, but is ageism in there as well?).

    This should be somewhat easy to resolve - the operator brings up the images against which the person in front of them is being matched, then they make a judgement call with human eyes. The problem is when they start offloading responsibility to the system with no human input - sorry, computer says no, Mr Buttle...

    However accurate or otherwise the underlying tech is, the problem is always going to be when humans decide to defer to it to make decisions for them instead of merely advising human decisions.
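
A minimal sketch of the "advisory, not decisive" policy this comment argues for; the threshold, names, and API here are hypothetical:

```python
# The matcher can flag a patron, but only a human can deny entry.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MatchResult:
    score: float          # raw similarity score from the matcher
    gallery_photo: str    # banned-list photo the probe was compared against

REVIEW_THRESHOLD = 0.90   # hypothetical; vendors tune this differently

def entry_decision(match: MatchResult, staff_confirms: Optional[bool]) -> str:
    if match.score < REVIEW_THRESHOLD:
        return "admit"                      # no flag raised
    if staff_confirms is None:
        return "escalate to human review"   # never auto-deny on a score alone
    return "deny" if staff_confirms else "admit"

# Staff compare the gallery photo to the person actually standing there:
print(entry_decision(MatchResult(0.97, "banned_0042.jpg"), staff_confirms=False))
```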


  9. PaulT (profile), 29 Jul 2021 @ 11:42pm

    Re: Re: great work

    Judging by the fact that I can't see what you were replying to, nor any comment in this thread that contains a link, I think you got your wish ;)


  10. Bobvious, 30 Jul 2021 @ 2:11am

    Re: Buttle


  11. Anonymous Coward, 30 Jul 2021 @ 6:00am

    Re: Re: Re: "The software had her daughter at a 97 percent match"

    OMG, I've never heard a more wrong statement in my life. Please tell me you're not working in any area of scientific research.


  12. Anonymous Coward, 30 Jul 2021 @ 7:34am

    Whose idea was it to use white facial recognition on blacks and other non-whites?


  13. TheResidentSkeptic (profile), 30 Jul 2021 @ 9:25am

    If it gets good enough...

    ... to tell identical twins apart, then they can roll it out to the public.

    Think that day will ever come?


  14. Who Cares, 30 Jul 2021 @ 10:52am

    Re: If it gets good enough...

    There are these triplets. One of them is evil… (Meanwhile, back in the cloning facility.)

    We humans are crazy critters.


  15. Lostinlodos (profile), 30 Jul 2021 @ 1:48pm

    Re: Re: Re: "The software had her daughter at a 97 percent match"

    Hey now, the Enterprise did solve that issue!


  16. Lostinlodos (profile), 30 Jul 2021 @ 1:54pm

    Private policy

    Ultimately this is a private company using (less than perfect) technology to make their own rules. Be it algorithms or cameras, they can use whatever they want.


  17. That Anonymous Coward (profile), 30 Jul 2021 @ 6:44pm

    stomps around pouting in moderation


  18. Bobvious, 30 Jul 2021 @ 7:23pm

    Re: Private policy

    Ultimately Techdirt/Facebook/Twitter/Google is a private organisation using (less than perfect) technology to make their own rules. Be it algorithms or community flagging, they can use whatever they want.


  19. Lostinlodos (profile), 30 Jul 2021 @ 8:29pm

    Re: Re: Private policy

    I do believe that is what I implied quite clearly, isn’t it?


  20. PaulT (profile), 1 Aug 2021 @ 10:42am

    Re: Private policy

    Ah, once again, you go for the rights of corporations to negatively impact the rights of others rather than take the most basic responsibility for making sure that doesn't happen.

    As I mentioned, the tech isn't the problem, it's the implementation, when the corporation decides it should replace nuance and thought - and we should always err on the side of the people whose lives would otherwise be unnecessarily destroyed for convenience.


  21. Lostinlodos (profile), 1 Aug 2021 @ 12:28pm

    Re: Re: Private policy

    “ rights of”
    corporations->small business
    “to negatively impact the rights of others”
    By trying to keep the rink safe for the majority.
    It’s not the best solution, but it’s a private business.
    They don’t have to serve anyone.

    At least I’m generally consistent.


  22. PaulT (profile), 1 Aug 2021 @ 1:49pm

    Re: Re: Re: Private policy

    "corporations->small business"

    Well, corporation I used as an umbrella term, but it's true that here it's a small business rather than a massive chain. Which actually raises the further issue of why they pass control off to another company to make decisions for them - one more likely to be a corporate entity than a local security guard.

    I could be wrong, but this seems to be yet another cost cutting measure that doesn't actually cut costs in the long term.

    "By trying to keep the rink safe for the majority."

    Except, they didn't. These decisions didn't protect anyone (assuming the story as presented is correct), and a possible outcome of making these decisions is that the rink isn't around to serve anyone. There is the issue of accepting that some people's lives need to be unnecessarily reduced in order to serve the majority, but it's possible that this won't even achieve that. The next time it might be a person with a local connection and a competing venue to drive customers to.

    There's also the question of what would happen had the tech made a mistake in the other direction. What if, instead of a false positive that would have been easily remedied by human interaction, they had a false negative that allowed a person who had been banned to enter the premises? Given that they're apparently blindly believing what an imperfect system tells them, they could have just as easily introduced more risk.

    There may be more details to come, but I don't see how the customers really benefit here. It seems to be a measure to ensure they don't have to employ enough security guards to effectively run the place, passing decisions off to a third party instead. Which doesn't exactly scream "safety" to me; rather, that they won't be sufficiently staffed to deal with a real problem later.

    "They don’t have to serve anyone."

    Customers also don't have to go anywhere in particular.
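
The false-positive/false-negative trade-off raised above is a direct consequence of where the match threshold is set. A small illustration, using made-up score distributions rather than any real system's numbers:

```python
# Lowering the threshold catches more banned patrons (fewer false negatives)
# but turns away more innocent ones (more false positives), and vice versa.
import numpy as np

rng = np.random.default_rng(42)
same_person = rng.normal(0.85, 0.08, 10_000)       # scores when it IS them
different_person = rng.normal(0.55, 0.12, 10_000)  # scores when it is NOT

for threshold in (0.60, 0.70, 0.80):
    fnr = (same_person < threshold).mean()         # banned patron waved in
    fpr = (different_person >= threshold).mean()   # innocent patron turned away
    print(f"threshold {threshold:.2f}: miss rate {fnr:.1%}, false alarm {fpr:.1%}")
```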


  23. Lostinlodos (profile), 1 Aug 2021 @ 2:26pm

    Re: Re: Re: Re: Private policy

    Wow. We are in agreement on bad choice.

    Mistakes happen. The question is how to deal with them.
    I don’t condone how they dealt with it here.
    But ultimately it is up to them.


  24. PaulT (profile), 2 Aug 2021 @ 2:13am

    Re: Re: Re: Re: Re: Private policy

    Nobody has said it's not their decision to make. Merely that it's a very dumb decision, and the precedent it helps to set (people ignoring common sense and the evidence of their own eyes in favour of what a computer tells them based on known faulty algorithms) is not particularly good.


  25. Scary Devil Monastery (profile), 2 Aug 2021 @ 7:13am

    Re: Re: Re: "The software had her daughter at a 97 percent match"

    "Data is not good, or bad. It is data."

    Err...no. In the lab I could get data from a badly calibrated pH-meter. It's what we in the biz I was in back then would definitely call "bad data". Context matters.

    "The quality of the data compared to your needs may be good, or bad." I.e. Good or bad data. QED.


  26. Scary Devil Monastery (profile), 2 Aug 2021 @ 7:15am

    Re:

    "who's idea was it to use white facial recognition on blacks and other non-whites?"

    A number of morons being told by "security experts" that the facial recognition tech was 97% accurate and didn't read the fine print of "...except if it concerns non-caucasian individuals, particularly black, latino or asians where it will struggle to tell Prince from Oprah Winfrey".


  27. Scary Devil Monastery (profile), 2 Aug 2021 @ 7:23am

    Re: Private policy

    "Be it algorithms or cameras, they can use whatever they want."

    No one has said differently.

    We have, however, declared that this use of algorithms and cameras is foolish to varying degrees. It's not a good look when it turns out your security system has a racial bias in practice and your use of it then becomes part of systemic racism.

    Honestly, if you know from the start you've got an expensive security system which has a high chance of identifying black people, in particular, as any other black person flagged "asshole" in the database, then using that system isn't a great plan. In fact it'll be hard to swing casual disregard for bias against an entire demographic as anything other than negligent racism.

    At some point any business owner needs to go over their planned purchases and think "Will any of these new toys make me look real bad within the week?". If the answer is "Yes" then don't buy that piece of overpriced garbage.


