London Metropolitan Police's Facial Recognition System Is Now Only Misidentifying People 81% Of The Time

from the any-year-now... dept

The London Metropolitan Police's spectacular run of failure continues. Sky News reports that the latest data shows the Met's facial recognition tech is still better at fucking up than at doing what it says on the tin.

Researchers found that the controversial system is 81% inaccurate - meaning that, in the vast majority of cases, it flagged up faces to police when they were not on a wanted list.

Needless to say, this has raised "significant concerns" among the sort of people most likely to be concerned about false positives. Needless to say, this does not include the London Metropolitan Police, which continues to deploy this tech despite its only marginally improved failure rate.

In 2018, it was reported the Metropolitan Police's tech was misidentifying people at an astounding 100% rate. False positives were apparently the only thing the system was capable of. Things had improved by May 2019, bringing the Met's false positive rate down to 96%. The sample size was still pretty small, meaning this had a negligible effect on the possibility of the Metropolitan Police rounding up the unusual suspects the system claimed were the usual suspects.

Perhaps this should be viewed as a positive development, but when a system has only managed to work its way up to being wrong 81% of the time, we should probably hold our applause until the end of the presentation.

As it stands now, the tech is better at being wrong than identifying criminals. But what's just as concerning is the Met's unshaken faith in its failing tech. It defends its facial recognition software with stats that are literally unbelievable.

The Met police insists its technology makes an error in only one in 1,000 instances, but it hasn’t shared its methodology for arriving at that statistic.

This much lower error rate springs from the Metropolitan Police's generous accounting of its facial recognition program. Its method compares successful and unsuccessful matches against the total number of faces processed. That's how it arrives at a failure rate that sounds much, much better than that of a system that is far more often wrong than right.
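
To make the difference concrete, here's a minimal sketch of both calculations. The alert counts below follow the researchers' reported figures (8 of 42 matches verified as correct); the total number of faces scanned is a made-up stand-in, since the Met hasn't published its denominator.

```python
# Same trials, two different denominators.
faces_processed = 42_000   # invented stand-in for everyone scanned
alerts = 42                # faces the system flagged as watchlist matches
correct_alerts = 8         # flagged people actually on the watchlist

false_alerts = alerts - correct_alerts   # 34 innocent people flagged

# The researchers' measure: of the alerts raised, how many were wrong?
print(false_alerts / alerts)             # ~0.81 -> wrong 81% of the time

# The Met's measure: of all faces processed, how many produced a bad alert?
print(false_alerts / faces_processed)    # ~0.0008 -> roughly 1 in 1,000
```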

No matter which math is used, it's not acceptable to deploy tech that's wrong so often when the public is routinely stripped of its agency by secret discussions and quiet rollouts. Here in the US, two cities have banned this tech, citing its unreliability and the potential harms caused by its deployment. Out in London, law enforcement has never been told "No." A city covered by cameras is witnessing surveillance mission creep utilizing notoriously unreliable tech.

The tech is being challenged in court by Big Brother Watch, which points out that every new report of the tech's utter failure only strengthens its case. Government officials, however, aren't so sure. And by "not so sure," I mean, "mired in deep denial."

The Home Office defended the Met, telling Sky News: "We support the police as they trial new technologies to protect the public, including facial recognition, which can help them identify criminals."

But it clearly does not do that.

It misidentifies people as criminals, which isn't even remotely close to "identifying criminals." It's the exact opposite and it's going to harm London residents. And the government offers nothing but shrugs and empty assurances of public safety.

Filed Under: accuracy, facial recognition, london, metropolitan police, privacy, surveillance


Reader Comments

  • hij (profile), 19 Jul 2019 @ 2:51am

    Crack'n neural net, Gromit!

    Cut to a scene with Gromit working on the project with Wallace and reading the book, "Principal Component Analysis For Dogs."

  • Anonymous Coward, 19 Jul 2019 @ 3:44am

    The Met police insists its technology makes an error in only one in 1,000 instances, but it hasn’t shared its methodology for arriving at that statistic.

    That comma is a decimal separator, not a thousands separator...

  • Daydream, 19 Jul 2019 @ 4:21am

    What happens to these facial recognition systems if suspected criminals, say, wear a fake moustache? Or grow out a real one?

  • Robert Beckman, 19 Jul 2019 @ 5:04am

    Safest place in London

    According to the Met, 999 of 1000 are correctly identified, and if 19% of the 1/1000 are correctly identified as criminals then there are only 19 criminals per 100,000 people, which is easily the lowest rate in London, let alone in the world.

    Great job, London Metropolitan Police, you’ve shown you have the safest jurisdiction and must not need this fancy tech.
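
    A quick check of that arithmetic, taking the Met's 1-in-1,000 alert rate and the trials' 19% hit rate at face value:

    ```python
    population = 100_000
    alerts = population * (1 / 1_000)   # 100 alerts per 100,000 faces
    criminals = alerts * 0.19           # alerts that are genuine matches
    print(criminals)                    # 19.0 "criminals" per 100,000 people
    ```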

  • Anonymous Coward, 19 Jul 2019 @ 6:01am

    Friday deep thoughts:

    If humans are animals, how come we don't eat humans too?
    Humans are not animals.

    • Anonymous Coward, 19 Jul 2019 @ 1:37pm

      Re: Friday deep thoughts:

      Humans are animals, did you fall asleep in biology again?

  • Anonymous Coward, 19 Jul 2019 @ 6:21am

    Apparently it does not matter if they get the right person; all that matters is that they got someone, and the fact that that person's life may be irreversibly impacted is of no concern to them, I guess.

  • Anonymous Coward, 19 Jul 2019 @ 6:30am

    Working exactly as intended

    It gives them an excuse to stop and search. The police can even target black or Asian youths: "the computer told me to".

    • Anonymous Coward, 20 Jul 2019 @ 3:02pm

      Re: Working exactly as intended

      Cool, just replace the cops with robots then.

  • Anonymous Coward, 19 Jul 2019 @ 7:47am

    "Misidentifying People 81% Of The Time" might be better than misidentifying 0.1% of the time. In principle, it means every cop should know they need to double-check each result closely, whereas a more accurate system could easily be seen as infallible. (Unless, as another person said, it's just a pretext to hassle people.)

  • Pete Austin, 19 Jul 2019 @ 7:56am

    Failure Rate = false positives and false negatives

    It's easy enough to vary the recognition threshold and decrease the percentage of false positives (mis-detecting fewer innocent people) at the expense of increasing the percentage of false negatives (detecting fewer criminals).

    Unless we have both percentages, I strongly suspect this is what the police have done, and that would NOT be an improved failure rate.
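
    A toy sketch of that trade-off, with entirely synthetic similarity scores (the means and spreads below are invented; only the mechanics matter):

    ```python
    import random

    random.seed(0)

    # True matches tend to score high, innocent passers-by lower; the alert
    # threshold just slides along the overlap between the two groups.
    true_match_scores = [random.gauss(0.75, 0.10) for _ in range(200)]
    innocent_scores = [random.gauss(0.50, 0.10) for _ in range(200)]

    for threshold in (0.55, 0.65, 0.75):
        misses = sum(s < threshold for s in true_match_scores)     # false negatives
        bad_flags = sum(s >= threshold for s in innocent_scores)   # false positives
        print(f"threshold {threshold:.2f}: "
              f"miss {misses / len(true_match_scores):.0%} of real matches, "
              f"flag {bad_flags / len(innocent_scores):.0%} of innocents")
    ```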

  • Anonymous Coward, 19 Jul 2019 @ 8:34am

    and in true UK fashion, no one in the Police Force or the UK government gives a flyin' fuck! it doesn't matter who gets falsely arrested, it doesn't matter who gets away instead of getting caught and it definitely doesn't matter who is harmed because of the screwed up, useless software that won't be usable until at least 2050! the UK used to be at the forefront of democracy, of freedom and of truth, now it's a leader in how to fuck your own citizens, how to remove freedom and privacy and how to (try to) keep tabs on everyone except those it should (criminals and terrorists)! what a reversal!!

  • Ninja (profile), 19 Jul 2019 @ 9:03am

    Now imagine a contraceptive that fails only 81% of the time. But hey, in law enforcement wonderland 19% precision is awesome!

  • Anonymous Coward, 19 Jul 2019 @ 10:25am

    Time for the Devil's advocate....

    Seems to me that, depending on how they're using this tech, their stats are being misinterpreted.

    The FR tech is presumably being used to flag potential bad actors. As such, it is meant to eliminate 90% of the population as a first pass, so that the police then only have to concentrate on the final 10%, thus simplifying their detective work significantly.

    As such, as long as their FP rate is below 100%, they are getting utility from the system. Without it, they'd have to use some other method to comb through the entire population to shrink the sample size, and the method they used to do that would most likely involve profiling, which has some nasty side effects.

    The big question is: what's their FN rate like? If it's too high, that means they're mostly flagging up innocent people while failing to find the people they're actually looking for. This would be bad, and would mean they should just drop the program as useless for policing.

    But if the FN rate is reasonable, they likely end up in a situation where they get 10 people flagged, 1 of which is the perp. This looks suspiciously like the trusted police lineup situation, but with a random sampling of similar looking people as the faces in the lineup -- potentially a better lineup than the meatspace ones. Because at that point, they can track the CCTV footage for these 10 people, make some targeted inquiries around the crime that actually took place, and quickly narrow things down to 1 suspect.

    In the police's view, the worst thing they could have is a system that only flags people who are criminals -- because then they'd have a MUCH harder time figuring out which criminally minded person actually committed the crime.

    Yes, there are plenty of logical fallacies in what I just wrote, but this is the logic behind their arguments, and this is what you need to refute to make any sort of a difference. Just going on about an 81% FP rate does nothing but spread ignorance about how this data is actually processed and what its significance is.
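
    For concreteness, here's how those quantities interact in the scenario sketched above; every number below is invented purely for illustration:

    ```python
    # A crowd is scanned; a handful of people in it are on the watchlist;
    # the system misses some of them (false negatives) and wrongly flags
    # some innocents (false positives).
    crowd = 10_000
    on_watchlist = 10
    false_positive_rate = 0.0009   # share of innocents wrongly flagged
    false_negative_rate = 0.30     # share of wanted people missed

    flagged_wanted = on_watchlist * (1 - false_negative_rate)         # 7
    flagged_innocent = (crowd - on_watchlist) * false_positive_rate   # ~9

    precision = flagged_wanted / (flagged_wanted + flagged_innocent)
    print(f"~{flagged_wanted + flagged_innocent:.0f} flagged, "
          f"{precision:.0%} of them actually wanted")
    ```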

    • Mason Wheeler (profile), 19 Jul 2019 @ 11:09am (flagged by the community)

      Re: Time for the Devil's advocate....

      This is basically correct. Also, it went from 100% false positives to 81% false positives over the course of a year. Extrapolate out that rate of improvement, and in another 4 years the false positive rate is likely to be close to 0. (And that's making the rather pessimistic assumption that the rate of growth will only be linear; in the world of computing, Moore's Law applies to most things.)

      In other words, this is just another typical Libertarian hit piece by our resident Libertarian nutjob.

      • Anonymous Coward, 19 Jul 2019 @ 11:55am

        Re: Re: Time for the Devil's advocate....

        I wouldn't go this far... the rate of improvement for this tech is on a logarithmic curve; it's not exponential or even linear.

        But even following a logarithmic curve, it will only take about 4 years to get to, say, a 12% FP rate. For the purposes they're applying it to, assuming a reasonable FN rate, that's leaps and bounds beyond what they could achieve using any other method.

        Using modern Machine Learning techniques, it is highly unlikely they can get their FP rate below around 34%. However, the maths are always improving in this space, so I'd say that 12% is plausible, even if not likely.

        But even a 34% rate is amazing. Remember: this isn't using facial recognition to say "is that person a criminal?" It's using facial recognition to say "around the scene of the crime, whose face looks like someone we know commits these sorts of crimes?"

        It's an investigation technique that mimics what humans already do -- it's not replacing a judge and jury.

        Personally, I'm happy with an 81% FP rate. If they got it close to 0, law enforcement might be tempted to assume guilt when they got a match. And THEN we'd be in trouble similar to that documented in https://www.techdirt.com/articles/20190716/19112442601/public-records-request-nets-users-manual-palantirs-souped-up-surveillance-software.shtml

  • That One Guy (profile), 19 Jul 2019 @ 2:18pm

    Easy fix

    Step one: Point the cameras at the police and any politicians who support the program.

    Step two: Any and all hits will result in an arrest/stop/further investigation by a third party who will have the same incentives to secure a 'hit' as the police do.

    If either step is not followed, the program is tossed entirely. I imagine a week or so of those pushing the program being subject to it themselves would be enough for them to care about how accurate it isn't.

  • bobob, 20 Jul 2019 @ 10:32am

    On the plus side, this is good for defense attorneys (excuse me, barristers). Now, even if their clients are guilty, relying on facial recognition to ID suspects ought to provide good reasons to get acquittals.

