Court Rejects Clearview's First Amendment, Section 230 Immunity Arguments
from the don't-allow-a-bad-company-to-generate-bad-courtroom-precedent-though dept
Back in March, facial recognition tech upstart Clearview was sued by the Vermont Attorney General. The AG alleged Clearview's scraping of sites to harvest photos (and other biometric/personal info) of Vermont residents violated state privacy laws. It also alleged Clearview had misled residents and customers about the company's intended uses and its success in the law enforcement marketplace.
Clearview's response to the lawsuit was… interesting. It tried to invoke Section 230 immunity, claiming it was nothing more than a host for third-party content. The problem with this argument was that Clearview wasn't being sued over the content itself (which wasn't alleged to be defamatory, etc.), but over its collection of that content -- collection that gave Vermont residents no notice their information was being gathered and no way to opt out.
The company then hired a prominent (but opportunistic) First Amendment lawyer to argue it had a First Amendment right to collect and disseminate this information, even though its collection efforts routinely violated the terms of service of nearly every site it scraped to obtain photos. This argument was also interesting in its own way, but had the potential to cause complications for plenty of entities not nearly as universally reviled as Clearview. In some ways, Clearview is the Google of faces, gathering information from all over the web and delivering search results to Clearview users.
The Vermont court has finally weighed in [PDF] on Clearview's arguments. And it doesn't like most of them. (h/t Eric Goldman)
Here's the court's take on the Section 230 argument:
Importantly, the basis for the State’s claims is not merely the photographs provided by third-party individuals and entities, or that Clearview makes those photographs available to its consumers. Instead, the claims are based on the means by which Clearview acquired the photographs, its use of facial recognition technology to allow its users to easily identify random individuals from photographs, and its allegedly deceptive statements regarding its product… This is not simply a case of Clearview republishing offensive photographs provided by someone else, and the State seeking liability because those photographs are offensive. Indeed, whether the photographs themselves are offensive or defamatory is immaterial to the State’s claims.
Instead, the claims here attempt to hold Clearview “accountable for its own unfair or deceptive acts or practices,” such as screen-scraping photographs without the owners’ consent and in violation of the source’s terms of service, providing inadequate data security for consumers’ data, applying facial recognition technology to allow others to easily identify persons in the photographs, and making material false or misleading statements about its product.
So, no dismissal based on Section 230 immunity for Clearview. The court then tackles the First Amendment assertions. The court says the First Amendment does not protect much of the conduct targeted by the AG's lawsuit.
The court next observes that at least some of the conduct alleged in Counts and III is largely nonexpressive in nature. The allegations that Clearview provided inadequate data security and exposed consumers’ information to theft, security breaches, and surveillance lack a communicative element. The First Amendment does not protect such conduct.
Whether the software itself is covered by the First Amendment is more difficult to answer.
Because the Clearview app’s raw code is not at issue here as in Corley, the app arguably has no expressive speech component and is more similar to the “entirely mechanical” automatic trading system in Vartuli that “induce[d] action without the intercession of the mind or the will of the recipient.” Vartuli, 228 F.3d at 111. The user simply inputs a photograph of a person, and the app automatically displays other photographs of that person with no further interaction required from the human user. In that sense, the app might not be entitled to any First Amendment protection. Complicating matters, however, is the fact that Clearview’s app is similar to a search engine, and some courts have generally recognized First Amendment protection for search engines, at least to the extent that the display and order of search results involve a degree of editorial discretion.
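To make the "entirely mechanical" characterization concrete, here's a minimal, purely illustrative sketch of what such a lookup amounts to: a query face embedding gets compared against embeddings computed from a database of scraped photos, and the closest matches come back with no human judgment in the loop. Everything below (the embeddings, URLs, and dimensions) is hypothetical and only loosely modeled on how face-search systems generally work; it is not Clearview's actual code.

```python
# A minimal sketch of the "entirely mechanical" lookup the court describes:
# a query face embedding is compared against a database of scraped-photo
# embeddings and the closest matches are returned. All data here is invented.
import numpy as np

def top_matches(query_vec, db_vecs, db_urls, k=5):
    """Return the k database photos whose embeddings are most similar
    to the query embedding (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    scores = db @ q                      # cosine similarity per photo
    best = np.argsort(scores)[::-1][:k]  # indices of the top-k scores
    return [(db_urls[i], float(scores[i])) for i in best]

# Hypothetical 128-dimensional embeddings for three scraped photos.
rng = np.random.default_rng(0)
db_vecs = rng.normal(size=(3, 128))
db_urls = ["https://example.com/a.jpg",
           "https://example.com/b.jpg",
           "https://example.com/c.jpg"]
query = rng.normal(size=128)
print(top_matches(query, db_vecs, db_urls, k=2))
```

Nothing in that loop involves editorial discretion -- the ranking is just arithmetic over similarity scores, which is what makes the Vartuli comparison plausible.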
Whether or not it's actually speech doesn't appear to matter, at least not to this court. It says the "speech" -- protected or not -- can be regulated by the Vermont government. Since the AG isn't suing over the content of the "speech" itself but rather the use of personal information gathered from Vermont residents, the lawsuit against Clearview can continue.
Presumably, the State has no problem with Clearview operating its app so long as the Vermonters depicted in its photograph database have fully consented. The regulation sought by the State here is content-neutral and, accordingly, subject to intermediate scrutiny.
But then the court goes on to say that even if this violates Clearview's First Amendment rights, it barely violates them.
Furthermore, any incidental restriction on speech imposed by the State’s action would not burden substantially more speech than is necessary to further the State’s interest in protecting privacy. The State estimates that the relief it requests will leave more than 99 percent of Clearview’s database intact.
That's a little more problematic. The court does go on to state that Clearview could avoid this by seeking affirmative consent from Vermont residents. It also says any regulation proposed by the state would be subjected to further scrutiny to keep the burden on Clearview minimal. But that seems unlikely to be true if the court already believes burdensome regulation would only result in a 1% reduction in free speech.
The court also upholds all of the deceptive-claims allegations. Clearview's marketing has been far from honest. It has touted law enforcement successes that were directly contradicted by the law enforcement agencies it named. It has told people they can ask to be removed from its database, but that option exists only under laws that aren't in force in most of the nation. It has also claimed the software is only for legitimate law enforcement use, yet has sold it to a number of private entities and encouraged law enforcement officers to "run wild" while testing the app on faces of friends and family members. All of these claims survive.
The longer Clearview exists, the more lawsuits it will face. Its collection method -- scraping sites to obtain personal data -- is already problematic. The tech itself remains unproven, having never been tested for accuracy by an independent outside agency. If this is the best it can do in its own defense, it's going to run itself out of money before it secures any favorable precedent.
Filed Under: 1st amendment, facial recognition, section 230
Companies: clearview, clearview ai
Reader Comments
As someone who has successfully requested my own supposed data from Clearview, this is a step in the right direction. They believe strongly that they are under no legal obligation in most parts of the country to provide people with the biometric data derived from photographs containing their faces. The process took about a month, a lot of back and forth with their support, and a picture of my driver's license. In the end - I am not a public figure and my face is not really out there - I got one match, scraped from Instagram, of a fellow who looked nothing like me (other than that he was wearing glasses).
That's not to say the tech hasn't been showcased in other respects, or that it is ineffective. I use Russia's Yandex on the regular for free face rec, since Google has a morality filter in their search results to prevent the same thing. But it just goes to show this type of stuff isn't going away any time soon unless it is outlawed or so stringently regulated it wouldn't be talked about except in the context of law enforcement.
Apologies, but no. It is incredibly easy to answer: Yes, the software is covered by the First Amendment. All software is. All books are. All music is.
But that is the wrong question. The right question is: is its use covered by the First Amendment? The other right question is: does any First Amendment coverage of the software matter in this case? I think the judge rightly concludes that the speech issues in authoring the software are not impacted.
The judge's introduction of
Re:
Though, to play devil's advocate:
If the people depicted in the photo gave their permission to be photographed, is it still Clearview's offense in using the photo, or the copyright holder's in distributing it? This is where the terms-of-service violations come into play, I imagine.
But any public domain photos out there? What claim would the people depicted (living or dead) have?
What of "part of a crowd" photos? There generally isn't a right to limit photos of that nature. But... what if a person was later identified in that photo? Does that change the scenario?
Do paparazzi photos fall afoul of this?
This is all a bunch of What Ifs, sure. But part (only part) of this case is a rights conflict: privacy vs speech. Feel free to reply to this rabbit hole, but realize that it IS a rabbit hole.
I'm a little perplexed that this hasn't yet turned into a copyright issue. The fair use factors, in my opinion, don't really favor Clearview.
1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes
Clearly commercial. If all they were doing was fingerprinting the pictures (see the sketch after this comment), that might be transformative, but I highly doubt they're throwing out the original images.
2) the nature of the copyrighted work
Might be a tough analysis there.
3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole
Don't think there's really an argument for de minimis here, and substantiality weighs against them, since they're using the significant portions of the images, i.e., the people.
4) the effect of the use upon the potential market for or value of the copyrighted work
Two prongs here, from different vectors. Value from the person because of potential privacy issues, and value from the host. "If you're not the customer, you're the product". I know copyright infringement isn't theft, but this is probably closer to the idea of shoplifting than anything the entertainment industry has thrown down.
The average person isn't going to want to file copyright applications for their vacation pictures, but if their scraping is liberal enough, I'm sure they're also snagging pictures from models and photographers, for example, that somebody does have a financial interest in.
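To make the "fingerprinting" point above concrete: a perceptual hash reduces an image to a short signature that can be matched against other signatures without retaining the original pixels. Here's a minimal toy sketch (an average hash computed on a synthetic image; real systems use far more robust hashes, and nothing here is claimed to reflect Clearview's actual pipeline):

```python
# A toy "average hash": the picture is reduced to a 64-bit signature that can
# match near-duplicates without keeping the original pixels. Purely
# illustrative; the image below is synthetic random data.
import numpy as np

def average_hash(gray_image, hash_size=8):
    """Downscale a grayscale image to hash_size x hash_size by block
    averaging (cropping any remainder), then emit one bit per cell:
    1 if the cell is brighter than the overall mean."""
    h, w = gray_image.shape
    blocks = gray_image[:h - h % hash_size, :w - w % hash_size]
    bh, bw = blocks.shape[0] // hash_size, blocks.shape[1] // hash_size
    small = blocks.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    bits = (small > small.mean()).astype(int).flatten()
    return "".join(map(str, bits))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(480, 640))  # stand-in for a grayscale photo
print(average_hash(img))
```

The fair-use intuition is that a system keeping only signatures like this has arguably transformed the work; one that keeps (and redisplays) the original photos has not.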
Re: Re:
If the people depicted in the photo gave their permission to be photographed, is it still Clearview's offense in using the photo, or the copyright holder's in distributing it?
If the "offense" in question is a matter of copyright law, then it can only be Clearview's offense for copying without permission from the rightsholder.
If the offense in question is a matter of Vermont's privacy laws, then it is also Clearview's offense (though depending on the details the copyright holder may also be guilty of similar offenses).
In this particular case the state of Vermont has not made any claims of copyright violations (nor could they do so even if they wanted to), so for the moment this is merely a rights conflict.
There may be other plaintiffs that are advancing copyright arguments, but legally that is an entirely orthogonal line of argument. Since federal law preempted the various state laws which sometimes granted additional copyright protections to "unpublished" works, privacy rights and copyright laws no longer really interact legally in the US. You could still have a copyright v speech conflict though, which could get messy as well.
I'm surprised that the EU hasn't brought this up, as I wonder if it breaks the GDPR rules. Or have they only scraped US databases?
The key difference between these guys and Google is that no one wants these guys to scrape their website. Google's opt-out policy only works because everyone knows Google exists and knows it has an opt-out policy; if Clearview had an opt-out policy for scraping your site and every site owner knew about it, they would all just opt out.
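For conventional crawlers, the opt-out described above is usually expressed through a site's robots.txt file, and it only works if the scraper chooses to honor it. A minimal sketch of that check using Python's standard-library parser (the rules and user-agent names below are hypothetical):

```python
# A minimal sketch of robots.txt-based opt-out, as honored by a well-behaved
# crawler. The rules and user-agent names are hypothetical; a scraper that
# ignores this file bypasses the opt-out entirely.
from urllib import robotparser

robots_txt = """
User-agent: hypothetical-face-scraper
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(robots_txt)

print(rp.can_fetch("hypothetical-face-scraper", "https://example.com/photos/1.jpg"))  # False
print(rp.can_fetch("some-other-crawler", "https://example.com/photos/1.jpg"))         # True
```

The mechanism is entirely voluntary on the crawler's side, which is the commenter's point: an opt-out only matters if site owners know the scraper exists and the scraper agrees to respect the refusal.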
Re:
Software is, ultimately, a set of instructions. The speech aspect is due to the expression of those instructions, which is often just as artistic as any poem.
But if the execution of that software's code is protected by the First Amendment, then you could write someone instructions that caused them to break the law - criminal or civil - and you would be protected from liability even though the instructions were illegal.
It would effectively abolish conspiracy laws on the spot, if valid.