Amateur Online Detectives Have Apparently Decided Facial Recognition Tech Is Good As Long As They're The Ones Using It
from the OnlyFans-accounts-but-for-watching-amateurs-stroke-their-frontal-lobes dept
The exponential growth of facial recognition tech over the past decade is cause for concern. The tech is unproven and caters to pre-existing biases. The biggest beneficiaries of the tech explosion are the usual suspects: white guys over the age of 35. Cops claim it's a godsend -- a tool that gives them what they need to close cases -- even when it's often just automating what they've always done: deciding any minority "fits the description."
A late entrant to the facial recognition games drew a lot of heat. Clearview gives its government customers (along with billionaires, retailers, and… um… gyms) access to billions of images, every one of them scraped from public posts at hundreds of websites and social media platforms. The legality of the scraping is still unsettled, but the company's desire to turn online posts into fodder for misidentifications that can lead to wrongful incarceration is truly disturbing.
But it's not just the public sector taking advantage of AI "advances." Facial recognition is a game even amateurs can play. Online tools allow the Wikipedia Browns of the world to misidentify people just as often as the pros do. Ryan Reilly and Jesselyn Cook of Huffington Post detail the online sleuthing done by internet denizens following the January 6 insurrection attempt. Here's how the possible ID of two Capitol raid participants went down:
Jason Beddingfield admits he made his way to the Capitol on Jan. 6. Videos discovered by citizen sleuths and by HuffPost show a man who looks identical to Jason Beddingfield hopping over barriers while holding a Trump flag and storming past bike racks moments after members of the mob ripped them down. At the same time, approaching the Capitol via a different path, a young man holding an American flag led a crowd and shouted at the police. Soon the young man was brawling with cops at the front of the police line. He later entered the Capitol building, and emerged from the Senate side after being pepper-sprayed.
The young man bore a striking resemblance to Jason Beddingfield’s son Matthew.
This similarity might never have come to light without the use of a highly controversial technology. A member of a network of citizen sleuths plugged images of the man ― whom online insurrection investigators dubbed #SoggyKidInsider, because he later emerged from the Capitol covered in liquid, and #NaziGrayHat, because he appeared to give a Sieg Heil salute ― into PimEyes, a facial recognition website. One of Matthew Beddingfield’s mug shots popped up.
Matthew Beddingfield was charged with murder in 2019 and released on bail. While out on bail, he apparently made his way to Washington, DC with his father, Jason Beddingfield. His mug shot proved instrumental in this search through photos and footage of the Capitol raid. The elder Beddingfield stated in a Facebook post that he was headed to the nation's capital to take it back from "commie bastards." According to online sleuths, it was "Bring Your Child To The Insurrection Day."
Beddingfield's son was more careful about his online posts. He didn't admit to traveling to DC (something that likely would have violated his bond conditions). But he allegedly assaulted a police officer with a fire extinguisher. And he has, so far, managed to avoid being hit with federal charges for his apparent involvement in the Capitol raid.
But even if this all ends up being factual, there's still cause for alarm. The availability of tech tools claiming to be capable of facial recognition is a huge problem since so many of these tools are pretty terrible at recognizing faces. Amateur detectives using open source tools shouldn't be trusted any more than law enforcement agencies utilizing the same tech. The possibility of actual harm remains, even if no one's capable of performing a citizen's arrest over the internet. Redditors crowdsourcing info about the Boston Marathon bombing got a lot of things wrong, resulting in the infliction of misery on innocent people by pitchfork-bearing shit-posters as well as law enforcement officers willing to believe the internet had actually identified the perpetrators.
It's not that crowdsourcing information always results in wrong conclusions. Occasionally, the internet is right about stuff. Adding untested facial recognition AI to the mix, though, is a recipe for disaster. Fortunately, the feds didn't act on this unvetted intel. But at some point they might. And the barrier to entry -- in this case, less than a dollar a day -- means anyone can use facial recognition tech to do far more harmful things, like stalk people or set them up for possibly deadly encounters with local law enforcement. The most problematic aspect of this particular use of facial recognition AI is that PimEyes is just Clearview, but with even less accountability.
Clearview AI saw a 26% spike in search volume in the wake of the Jan. 6 insurrection. It has a database of more than 3 billion images and is primarily used by private companies and law enforcement agencies. Members of the #SeditionHunters community are unable to access Clearview. Instead, they’re using PimEyes, a lesser-known Polish company offering free and paid facial recognition services to the public, through a search engine that reportedly contains more than 900 million faces. Like Clearview, PimEyes has prompted serious concerns about misuse, abuse, privacy and civil liberties violations.
You cannot decry law enforcement's use of unproven tech offered by sketchy AI companies and then pretend these tools are somehow "good" just because it's regular people, rather than cops, using them. Sure, private citizens may not be able to effect arrests and lock people up, but they can engage in violence based on questionable intel. A whole cottage industry of sex offender-related vigilantism shows how much real damage can be done by people who, with the best of intentions, decide they can play cop with internet tools.
And when citizens do it, they're operating with fewer restraints and less accountability. The path to compensation for wrongful arrests and violated rights via our court system may be less than ideal when it comes to law enforcement agencies. But the path to justice is almost nonexistent when it comes to amateur online sleuthing. The tools suck. They don't suck less just because JoeAnon is using them rather than federal or local law enforcement agencies. Pretending otherwise is falling into the same criminal justice Dunning-Kruger trap. The tech is faulty, and the ends can never justify using broken means.
Filed Under: amateur sleuthing, facial recognition, law enforcement, privacy, vigilantes
Reader Comments
Interesting
interesting, but a little bit scary;)
Is this even possible?
Yes, the amateur detective hour can get pretty scary as most people don't know their limits and are way too prone to thinking they are absolutely right when they can construct a story that suits themselves. This is sad, because many times people caring enough about something to try and solve a problem could be good. Targeting and harassing, not so much. Hell, people go too far even for situations where it turns out that they were correct. Add farcical decognition into the mix and you are just asking for trouble.
P.S. I think FR is bad even if it were 100% reliable. That will never happen, but a perfectly reliable version would arguably be even worse.
Re:
It was a good one, indeed.
Must strongly disagree on accountability
Online sleuths are far easier to hold accountable than actual police, given the latter's near impunity. Vigilantes likewise.
As for the misidentification rates: can we just see an experiment where a facial recognition algorithm trained on a minority-weighted dataset is compared against the usual ones, with comparable sample set sizes? It was probably done somewhere, but breathless reporting of assumptions gets the most hits.
Given the issues Chinese systems have with white faces, I suspect that should put to bed the dumb trope that the inaccuracies exist because the facial recognition programmers are racist or too white. The math is just "whatever gets the most overall accuracy on the training set gets the most weight" -- sheer, utterly amoral, needs-of-the-many-over-the-few fashion. Thinking about it, how the fuck else is it supposed to remotely bootstrap judgment other than that? The emergent bias is still a good reason not to deploy it for law enforcement applications, but as the pandemic shows so well, a good reason not to do something is usually heavily ignored.
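The dynamic this comment describes can be sketched with a toy example (all numbers hypothetical, no real dataset involved): a one-dimensional threshold "classifier" trained only to maximize overall accuracy on an imbalanced set ends up fitting the majority group, concentrating its errors on the underrepresented one.

```python
# Toy sketch (hypothetical numbers): pick the decision threshold that
# maximizes OVERALL accuracy on an imbalanced training set.
# Group A (90 samples) separates cleanly around 5.0; group B (10 samples)
# separates around 7.0. The "best" threshold fits A and fails on B.

# Each row: (feature_value, true_label, group)
data = (
    [(x / 10, 0, "A") for x in range(0, 45)] +    # A, label 0: 0.0-4.4
    [(x / 10, 1, "A") for x in range(55, 100)] +  # A, label 1: 5.5-9.9
    [(x, 0, "B") for x in (5.8, 6.0, 6.2, 6.4, 6.6)] +  # B, label 0
    [(x, 1, "B") for x in (7.4, 7.6, 7.8, 8.0, 8.2)]    # B, label 1
)

def accuracy(threshold, rows):
    """Fraction of rows where (value >= threshold) matches the true label."""
    correct = sum((x >= threshold) == bool(y) for x, y, _ in rows)
    return correct / len(rows)

# "Training": exhaustively choose the threshold with the best overall accuracy.
candidates = [t / 10 for t in range(0, 100)]
best = max(candidates, key=lambda t: accuracy(t, data))

a_rows = [r for r in data if r[2] == "A"]
b_rows = [r for r in data if r[2] == "B"]
print(f"chosen threshold: {best}")
print(f"group A accuracy: {accuracy(best, a_rows):.2f}")  # perfect on the majority
print(f"group B accuracy: {accuracy(best, b_rows):.2f}")  # coin-flip on the minority
```

The training step never references group membership at all; the skew falls out purely from the group sizes, which is the commenter's point about amoral accuracy-maximization producing emergent bias.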
Re: Must strongly disagree on accountability
Nice try. Assuming that you are the default (consciously or not) and failing to account for the multitude of other variations isn't just piss-poor design and programming, it is in fact racist.
One does not need to consider oneself an active enemy of some other group to do racist things. A lot of stuff is built into culture. Defending and excusing that shit when pointed out is doubling down on racism. Saying, "Holy shit, that was a bad mistake, we have to fix that," would be the obvious move for anyone who wants a working system, whether it's their product or their culture.
P.S. Newsflash: Chinese people are hardly all the same ethnic group or race. So how does that go, again?
amateur detectives outing pros
Add to the list the use of facial recognition to out cops. The plus side is that many false identifications can be recognized because the match isn't an officer. But given that the police deliberately use tactics (with official approval) to keep from being identified, any tools are better than no tools. An undercover cop hiding his identity is one thing. A constable on patrol doing so is an entirely different one.
Count me on the side of power tools for everyone, not just the secret police and secret corporations. If someone had the mother database of all databases and made it public, all those other databases that cops and corps hoard would become worthless.
Re:
Speaking of which:
https://www.bloomberg.com/news/articles/2021-04-03/facebook-data-on-533-million-users-leaked-business-insider