London Police Move Forward With Full-Time Deployment Of Facial Recognition Tech That Can't Accurately Recognize Faces
from the don't-let-the-fact-that-it-doesn't-work-stop-you dept
The London Metropolitan Police sure loves its facial recognition tech. But it's an unrequited love. The tech doesn't appear to have done anything for the Met during its past deployments.
Documents obtained by Big Brother Watch in 2018 showed the Met's deployments had rung up a 98% false positive rate in May of that year -- in other words, 98% of the people the system flagged as matches weren't actually the people police were looking for. Nothing improved as time went on. Subsequent documents showed a false positive rate of 100%. Every "match" was wrong. Not exactly the sort of thing you want to hear about tech capable of scanning 300 faces per second.
This followed an earlier report covering a test run by the South Wales Police at a handful of public events. In comparison, the South Wales tests were a success: a mere 92% of its matches were false positives.
The Met's tech showed some slight improvement in 2019, with the false positive rate dropping to 96%. This continued failure to recognize faces -- along with a number of privacy concerns -- prompted a UK Parliamentary Committee to call for an end to the use of facial recognition tech by UK government agencies. This advice was ignored by the Home Office, which apparently believed UK law enforcement would be able to fail upwards towards a brave new world of facial recognition tech worth the money being spent on it.
We've apparently reached that inflection point. Test runs are a thing of the past. It's time for Londoners to put their best face forward.
British police are to start operational use of live facial recognition (LFR) cameras in London, despite warnings over privacy from rights groups and concerns expressed by the government's own surveillance watchdog.
First used in the capital at the Notting Hill carnival in 2016, the cameras will alert police when they spot anyone already on "wanted" lists.
"The use of live facial recognition technology will be intelligence-led and deployed to specific locations in London," the city's Metropolitan Police said in a statement.
"Intelligence-led," says the agency that has so far only managed to incorrectly identify people almost 100% of the time. There's more "intelligence" further on in the article when the Met says the software that's hardly managed to correctly identify people will help "identify and apprehend suspects." Gun and knife crime top the list of things expected to be curtailed by unproven tech, followed by the sexual abuse of children and "protecting the vulnerable."
Also lol a bit at this, which uses a trite phrase made even triter by the abysmal performance of the Met's AI:
Metropolitan Police Assistant Commissioner Nick Ephgrave said in a statement: "We are using a tried-and-tested technology, and have taken a considered and transparent approach in order to arrive at this point."
He's technically correct. It has been tried and tested. What it hasn't been is accurate, and that's what counts most when people's rights and freedoms are on the line. But better an unknown number of innocent people be misidentified than allow a single suspect to go unscanned, I guess.
Filed Under: facial recognition, london, metropolitan police, police