Clearview Is So Toxic Even Other Surveillance Tech Purveyors Want Nothing To Do With It
from the bring-back-spitting-in-disgust-old-world-style dept
Outside of Clearview's CEO Hoan Ton-That, it's unclear who truly likes or admires the upstart facial recognition tech company. In the short time since its existence was uncovered, Clearview has managed to turn itself into Pariah-in-Chief of a surveillance industry full of pariahs.
Clearview hasn't endeared itself to the sources of its 10-billion-image database, which are (in descending order) (1) any publicly-accessible website/social media platform, and (2) their users. The company has been sued (for violating state privacy laws) in the United States and politely asked to leave by Canada, which found Clearview's nonconsensual harvesting of personal info illegal.
It has subpoenaed activists demanding access to their (protected by the First Amendment) conversations with journalists. It has made claims about law enforcement efficacy that have been directly contradicted by the namechecked police departments. It has invited private companies, billionaire tech investors, police departments in the US, and government agencies around the world to test drive the software by running searches on friends, family members, and whoever else potential customers might find interesting/useful to compile a (scraped) digital dossier on.
Clearview intends to swallow up all the web it can. Caroline Haskins' report for Business Insider (alt link here) catches Clearview's vice president of federal sales pretty much saying the only way to avoid being added to Clearview's database is to stop being online.
"People are constantly dumping their — it's just a constant," Clearview's Jones said during the roundtable event, referring to the steady stream of people posting their photos online, only for Clearview to scrape them.
The same report shows why Clearview is so bad at PR. It has a PR team -- one that continually stresses the AI does not provide "matches," but rather "investigative leads." The difference between the terms is the extent of liability. If Clearview only generates leads, it cannot be blamed for false positives. If it says it delivers "matches," it can plausibly be sued for wrongful arrests.
But CEO Hoan Ton-That can't stop using the terms interchangeably, which indicates he feels his software generates matches, thus inviting additional culpability.
"No one has done what we do at this scale with this accuracy," he said later in the conference, adding that anyone "is able to solve cases instantaneously if they get matches in the system."
The "accuracy" Ton-That claims no one can match is "98.6% accuracy per one million faces" -- an assertion Clearview's made for a couple of years now. Whether this claim is anywhere near 98.6% accurate remains to be seen. The AI has never been independently tested or audited.
We know how Clearview feels about itself. But how do the competitors in the market feel about it? Caroline Haskins spoke to other surveillance tech purveyors at the Connect:ID industry event and found other providers of facial recognition AI weren't thrilled Clearview was out there giving already-controversial tech an even worse reputation.
Several industry professionals openly expressed not liking or respecting the company. One government contractor said Clearview was "creepy." He told me he'd read about the company's extensive ties to the far right and was alarmed by that.
In one discussion, an attendee called the company "the worst facial-recognition company in the world."
Well, there's one thing surveillance tech purveyors and surveillance tech targets can agree on: Clearview took a bad thing, made it worse, and seems willing to trade paint with the internet at large to maintain its "most images in a [privately-owned] facial recognition database" lead.
While it's normal for competitors to criticize their rivals, the statements made here imply Clearview is bad for the facial recognition tech industry -- something that threatens these competitors' livelihoods. And that's not acceptable, even if the collective marketing of problematic AI to government agencies doesn't appear to bother them a bit.
Filed Under: facial recognition, surveillance
Companies: clearview
Reader Comments
But there are 10 billion pictures in the database. If, at 1 million faces, there are 1.4% false positives, then at 10 billion faces there would be... 14,000% false positives?
Re:
Drop the percent sign and you have it right. At 1.4% false positives, there would be 14,000 false positives per million faces.
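To make the arithmetic explicit -- a minimal Python sketch, assuming (as this thread does) that Clearview's unverified "98.6% per one million faces" figure can be read as a 1.4% error rate per face searched:

    # Back-of-the-envelope false-positive math. The 1.4% figure is this thread's
    # reading of Clearview's undefined claim, so treat the output as illustrative only.
    error_rate = 1 - 0.986                          # 1.4%
    per_million = error_rate * 1_000_000            # errors per million-face search
    per_database = error_rate * 10_000_000_000      # scaled to the 10-billion-image database

    print(f"per million faces: {per_million:,.0f}")          # ~14,000
    print(f"per 10 billion images: {per_database:,.0f}")      # ~140,000,000

So roughly 14,000 false hits per million faces, or on the order of 140 million across the full database -- if that reading of the number is even correct, which the next comment questions.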
Re:
Tim quoted from Business Insider, which itself (mis)quoted from Buzzfeed, which (mis)quoted from a single image found in promotional material Clearview provided to the Atlanta police department. Said material did not define "accuracy" (true positives, true negatives, false positives, false negatives, image reject rate -- there are a lot of things the number might or might not include), and was also rather ambiguous on whether the "million" referred to the number of tests run or to the number of faces included in the search space.
So really, the number provides no information at all.
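For illustration only: a minimal, hypothetical sketch of why an undefined "accuracy" number says so little. The confusion-matrix counts below are invented, chosen only so the headline figure lands near 98.6%; the point is how differently the same hypothetical system scores depending on which metric you report.

    # Invented confusion matrix for a notional one-million-face search --
    # none of these counts come from Clearview; they only illustrate the metrics.
    tp, fp, fn, tn = 85, 14_000, 15, 985_900

    total = tp + fp + fn + tn                  # 1,000,000
    accuracy  = (tp + tn) / total              # swamped by the huge true-negative count
    precision = tp / (tp + fp)                 # how often a returned "lead" is the right person
    recall    = tp / (tp + fn)                 # how often the right person is found at all

    print(f"accuracy:  {accuracy:.3%}")        # ~98.6% -- sounds impressive
    print(f"precision: {precision:.3%}")       # under 1% -- far less flattering
    print(f"recall:    {recall:.3%}")          # 85%

With those made-up numbers, a vendor could honestly quote "98.6% accuracy" while fewer than one in a hundred of its "leads" actually points at the right person.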
On the other hand, the facial recognition tech industry is bad for humanity, so Clearview making that... well, clear... could be considered a net positive.
What a load....
"Investigative Lead" is like the difference between Person of Interest and suspect, one is merely gibberish for "pre-suspect", and when it comes from dubious surveillence means, it isn't comforting. Many a case can be manufactured by sources like the FBI, etc
Please, start a facial recognition SEO company!
Back in the days of Altavista, when you did a web search, you got hits. Now, you get an endless torrent of spammage.
Why should facial recognition be any different?
If a cop uses Clearview's database to find a compatible face on the web, the answer he should get back is something like ...
1. Welcome! Do you know who THIS FACE belongs to? Could it be JOE BLOGGS? Please Tell Us! Try our COMPLETE DATABASE OF ALL POSSIBLE FACIAL PARAMETERS for just $9.99 a hit, and get INSTANT ACCESS to find out which faces on the web are SIMULATED!
2. Our company cannot certify if this is JOE BLOGGS or if it is the ex-husband of one of our satisfied customers. If you are a member of a SWAT team, please, go find out for us!
3. Welcome to WikiFaces! The Facial Recognition Company Anyone Can Edit. For Just $3.99 a month! Is this facial recognition data correct? Please let us know in our Subscriber Only Chat!
etcetera. The truth is 'out there' somewhere. Not on a list of search results though.
I'm pretty sure plenty of governments like them. And anonymous shareholders lol.
Re:
Regardless of NIST's rankings, I can say that what Clearview is doing is illegal in multiple jurisdictions, likely including America at large due to the copyright issues with web scraping. Selling facial recognition on arbitrary datasets to the highest bidder is also ethically dubious, given the ability to deanonymize essentially anyone for any purpose. Many cops and rich people are not morally great -- e.g. Epstein, Weinstein, R. Kelly, Cosby, and the various cops mentioned here convicted of sexual assault, murder, etc. Imagine if they can use this tool, just like the NSA had abuse issues with "LOVEINT".
And being the best of a dubiously effective bunch does not a reliable system make, especially as, like ShotSpotter and hair analysis in the past, cops may lean on Clearview for adjusted results while selling them to juries as certain matches.