EARN IT's Big Knowledge 1st Amendment Problem
from the why-doesn't-anyone-understand-this dept
We've talked about so many problems with the EARN IT Act, but there are more! I touched on this in my post about how EARN IT is worse than FOSTA, and it came up again in last week's markup, which showed that the Senators pushing for this do not understand the issues around the knowledge standard required here, or how various state laws complicate things. Is it somewhat pathetic that the very senators pushing a law that would make major changes to a wide variety of things don't seem to understand the underlying mechanisms at play? Sure is! But rest assured that you can be smarter than a senator.
First, let's start here: the senators supporting EARN IT seem to think that if you remove Section 230 for a type of law-violating content (in this case, child sexual abuse material, or CSAM), that magically means websites will be liable for that content -- and, because of that, they'll magically make it disappear. The problem is that this is not how any of this actually works. Section 230 expert and law professor Jeff Kosseff broke the details down in a great thread, but I want to make it even more clear.
Today's EARN IT Act markup had a lot of discussion about what mens rea would be necessary for platforms to face civil liability for distributing CSAM. The discussion wasn't terribly clear, so I'm going to try to break down what we do know about the legal standards.
— Jeff Kosseff (@jkosseff) February 10, 2022
As a reminder, Section 230 has never been a "get out of jail free" card, as some of its critics suggest. It's a procedural benefit that gets cases that would otherwise lose on 1st Amendment grounds tossed out at an earlier stage (when it's much less costly, and thus, much less likely to destroy a smaller company).
So, here, the senators supporting EARN IT seem to think, falsely, that if they remove Section 230 for CSAM, (1) it will make websites automatically liable for CSAM, and (2) that legal risk will somehow spur them to take down all CSAM, making it go away. Both of these assumptions are wrong, and wrong in such stupid ways that, again, EARN IT would likely make problems worse, not better. The real problem underlying both of them is the question of "knowledge." Legal folks like Jeff Kosseff dress this up as "mens rea," but the key question is whether or not a website knows about the illegal content.
This impacts everything in multiple ways. As Kosseff points out in his thread, Supreme Court precedent (which you would know if you read just the first chapter of his Section 230 book) says that for a distributor to be held liable for content that is not protected by the 1st Amendment, it needs to have knowledge of the illegal content. Supporters of EARN IT counter with the correct, but meaningless, line that "CSAM is not protected by the 1st Amendment." And, it's not. But that's not the question when it comes to distributor liability. In Smith v. California, the Supreme Court overturned the conviction of Eleazar Smith (his bookstore sold a book the police believed was obscene), noting that even if the book's content was not protected by the 1st Amendment, the 1st Amendment does not permit imposing liability on a distributor who does not have knowledge of the unprotected nature of the content. Any other result, Justice Brennan correctly noted, would lead distributors to be much more censorial, including of protected speech:
There is no specific constitutional inhibition against making the distributors of food the strictest censors of their merchandise, but the constitutional guarantees of the freedom of speech and of the press stand in the way of imposing a similar requirement on the bookseller. By dispensing with any requirement of knowledge of the contents of the book on the part of the seller, the ordinance tends to impose a severe limitation on the public's access to constitutionally protected matter. For if the bookseller is criminally liable without knowledge of the contents, and the ordinance fulfills its purpose, he will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected as well as obscene literature. It has been well observed of a statute construed as dispensing with any requirement of scienter that: 'Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.' The King v. Ewart, 25 N.Z.L.R. 709, 729 (C.A.). And the bookseller's burden would become the public's burden, for by restricting him the public's access to reading matter would be restricted. If the contents of bookshops and periodical stands were restricted to material of which their proprietors had made an inspection, they might be depleted indeed. The bookseller's limitation in the amount of reading material with which he could familiarize himself, and his timidity in the face of his absolute criminal liability, thus would tend to restrict the public's access to forms of the printed word which the State could not constitutionally suppress directly. The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered. Through it, the distribution of all books, both obscene and not obscene, would be impeded.
While there are some other cases, this remains precedent and it's difficult to see how the courts would (or could) say that a website is strictly liable for content that it does not know about.
This creates a bunch of problems. First and foremost, removing 230 in this context then gives websites not an incentive to do more to find CSAM, but actually to do less to find CSAM, because the lack of knowledge would most likely protect them from liability. That is the opposite of what everyone should want.
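To make that incentive problem concrete, here's a toy expected-cost sketch. Every number in it is invented purely for illustration (nothing below comes from the bill or any real platform's economics), but it shows the mechanism: under an actual-knowledge standard, scanning is what creates the knowledge that creates the liability.

```python
# A toy model of the perverse incentive described above. All numbers
# are hypothetical, chosen only to illustrate the mechanism.

SCAN_COST = 1_000_000        # assumed annual cost of scanning/moderation
ITEMS_SURFACED = 10_000      # assumed items a scan would surface (= knowledge)
MISHANDLED_RATE = 0.02       # assumed fraction known about but mishandled
LIABILITY_PER_ITEM = 50_000  # assumed damages per known-but-mishandled item

# If you scan, you pay for the scan AND acquire knowledge, so every
# slip-up on a known item is now a potential lawsuit.
cost_if_scanning = SCAN_COST + ITEMS_SURFACED * MISHANDLED_RATE * LIABILITY_PER_ITEM

# If you never look, you never "know" -- and under Smith v. California,
# distributor liability requires knowledge.
cost_if_not_scanning = 0

print(f"scan: ${cost_if_scanning:,}")            # scan: $11,000,000
print(f"don't scan: ${cost_if_not_scanning:,}")  # don't scan: $0
```

Any website doing that math under an actual-knowledge regime lands on "don't look."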
Second, it creates various problems in how EARN IT interacts with state laws. As we've pointed out in the past, EARN IT isn't just about the federal standards for CSAM; it opens up websites to legal claims under state laws as well. And the knowledge standards regarding CSAM in state laws are, literally, all over the map. Many do require actual knowledge (which, again, reverses the incentives here). Others, however, have much more troubling standards around "should have known" or "good reason to know," or in some cases set a standard of "recklessness" for not knowing.
Some of those, if challenged, might not stand up to the kind of 1st Amendment scrutiny applied in Smith v. California, which would seem to require actual knowledge. Either way, the law would create a huge mess -- one that mostly incentivizes companies not to look for this content. And considering that the sponsors of the bill keep saying that the whole point of the bill is to get companies to do more looking for CSAM, they've literally got the entire law backwards.
What's most troubling is that when Senator Blumenthal was pushed on this point during the markup, and it was mentioned that different states have different standards, rather than recognizing one of the many (many) problems with the bill, he literally suggested that he hoped more states would change their standards to a potentially unconstitutional level, in which actual knowledge is not required for liability. That's setting up a really dangerous confrontation with the 1st Amendment.
If Senator Blumenthal and his legislative staffers actually cared about stopping CSAM, they would be willing to engage and talk about this. Instead, they refuse to engage and mock anyone who brings up these points. Perhaps it's fun for them to generate false headlines while causing massive problems for the internet and speech, making the CSAM problem worse while pretending the reverse is happening. But some of us find it immensely problematic.
Filed Under: 1st amendment, csam, distributor liability, earn it, knowledge, mens rea, monitoring, richard blumenthal, section 230, speech, state laws
Reader Comments
Eric Goldman has said that many services will eliminate user-generated content (UGC) entirely because of the EARN IT Act and the removal of Section 230 for CSAM. Is that possible, or is it unlikely?
Re:
For small actors, it might be beneficial to prohibit user-generated content.
Is it reasonable to expect WordPress and Blogspot to default to not allowing comments on their posts?
Fallacies
Two inaccuracies in the first two paragraphs.
1 - You too can be smarter than a senator.
A slug in the middle of the freeway is smarter than the average senator.
2 - senators supporting EARN IT seem to think
When has it ever been established that senators have the ability to think?
Now back to your regularly scheduled rants.
I don't know, but do you think Discord, MyAnimeList, and Reddit will stop allowing comments on their posts? And for Fandom, will they prohibit user-generated content or stop allowing comments?
[ link to this | view in thread ]
Re:
Given what we know, it's likely that they will start heavily restricting what you can post, and possibly pre-screening and/or scanning it before allowing it through. The bigger players can probably absorb the cost of hiring new moderators and deploying such technologies, but smaller ones? They're more likely to close up shop.
Reddit and Discord I can see sticking around but heavily restricting what you can and cannot say. Discord already stores everything you send.
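For what it's worth, "pre-screen and/or scan" in practice usually means hash-matching uploads against a list of known CSAM hashes (NCMEC distributes such lists; PhotoDNA and NeuralHash are perceptual-hash variants of the idea). Here's a minimal sketch of the mechanism, with the caveat that the exact-match SHA-256 below is a deliberate simplification (real systems use perceptual hashes that survive re-encoding) and the hash value is just a placeholder:

```python
# Minimal sketch of hash-based upload screening. Real deployments use
# perceptual hashes (robust to resizing/re-encoding); plain SHA-256 is
# used here only to keep the example self-contained.

import hashlib

# Placeholder hash list: this entry is just the SHA-256 of empty bytes,
# standing in for a real known-CSAM hash database.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def screen_upload(data: bytes) -> bool:
    """Return True if the upload may be published, False if it matches
    a known-bad hash and should be blocked (and, in reality, reported)."""
    return hashlib.sha256(data).hexdigest() not in KNOWN_BAD_HASHES

print(screen_upload(b"ordinary cat picture"))  # True: publish
print(screen_upload(b""))                      # False: matches the placeholder
```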
"But rest assured that you can be smarter than a senator."
This bar is so low you could not limbo under it.
And for Fandom, will they close?
Privacy Benefit
There was the Cubby v. CompuServe model prior to Section 230's existence. It did have the side effect of keeping the busybodies away. This might discourage systems such as the controversial Apple NeuralHash scans: if it's on iCloud, and they're scanning, then they might be held liable if the scanning doesn't work.
This isn't a good reason to support the legislation. I'm just saying that we complain about companies and governments collecting and scanning data to judge whether they approve of it. Sometimes, others' lack of knowledge is a good thing for the privacy advocate.
And yet, you come off as if you think it is.
Re: Privacy Benefit
Bravely bold Sir Koby.
Rode forth from the internet.
He was not afraid to die,
Oh brave Sir Koby.
He was not at all afraid
To be killed in nasty ways.
Re: Re:
How likely is the bill to pass? How soon could there be a full vote on the Senate floor?
Re: Re: Re:
It's unknown whether user-generated content will end if the bill were to pass, but right now the bill still has a long way to go before it could become law.
I do want to say quite a few senators are calling out the problems with the bill now.
Biden / admin advise "Big Tech" often on what's "disinformation"
You've made no objection to active collusion / pressure from "Democrats" to have entirely legal 1A speech taken down.
You yourself (and your fanboys) advocate removal of "mis/disinformation".
You are way over the line into censorship, you just don't want CORPORATE CONTROL interfered with.
Re: Re: Privacy Benefit
THIS bit of rhyme by an "AC" is one of the BEST reasons why no one reasonable reads here: YOU LOOK CRAZY.
Re: A. Stephen Stone doesn't just look hateful, but IS.
You fanboys STILL seem intent on making the site a cesspit, running off anyone reasonable from even reading. YOU alone have surely run off hundreds.
Keep it up. Opposition now doesn't even need to show up here. MM is an unconvincing lunatic who allows you kids to make his site disgusting.
Hey blue, how's that Richard Liebowitz fund coming along?
If you're so against censorship, can you let the leftists back on Parler?
Anyone have results of the testing for lead in Congress members?
I mean, it's either that they've been poisoned & that caused the brain damage, or they've just given up trying to hide their corruption and blind ambition for more power.
This shouldn't be a thing and the fact they have been told...
If you do this, they will stop looking for CSAM because you've given them a reason to never look for it, you fucking morons.
Yet they still pretend this will fix it.
We really need leadership that doesn't live in a fantasy bubble detached from reality, where their magic underpants-gnome thinking solves nothing & makes everything much worse.
Shut up, Wallace.
What speech are you worried will be taken down? Be specific.
Better question to ask Brainy Smurf is how he squares his support for copyright-driven censorship with his opposition to corporate censorship, since corporations are the entities that most often employ copyright-driven censorship.
Re: Re: Re: Privacy Benefit
blue, you once begged Shiva Ayyadurai to let you get on your knees and gobble his cock. You're in no position to complain.
There's more than one kind of knowledge
This article and the linked thread seem to be centered on the kind of "knowledge" where you know "this piece of content over here in this file is child porn".
But there's another kind of knowledge, of the form "I run a social site with 100,000,000 users. It is a practical certainty that there's child porn going through my system." It's not just "I'm ignoring a real possibility". It's "I'm sure there's some here; I just don't know exactly where it is". Especially after the first few unrelated cases in which you find some.
That kind of thing really isn't captured by the normal lay idea of "recklessness". And if it falls within some legal definition of recklessness, then it's still at least an extremely strong form, way out near the boundary with actual knowledge... which is probably a boundary that can move given the right kind of bad-law-making case.
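The "practical certainty" in that second kind of knowledge is just arithmetic. A back-of-the-envelope sketch, with both numbers invented purely for illustration: even at a one-in-ten-million rate per upload, a platform handling a billion uploads a year is all but guaranteed to carry some.

```python
# Back-of-the-envelope for "practical certainty" at scale. Both inputs
# are hypothetical, for illustration only.
import math

p = 1e-7           # assumed probability any single upload is illegal
n = 1_000_000_000  # assumed uploads handled per year

# P(zero illegal uploads all year) = (1 - p) ** n, computed stably.
p_none = math.exp(n * math.log1p(-p))
print(f"P(none all year) ~ {p_none:.2e}")  # ~ 3.72e-44, i.e. zero in practice
```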
I think that the "EARN-IT" people are hoping to be able to go after the second kind of knowledge, and I'm afraid that Smith may not be protection enough.
A bookseller in 1958 who happened to have one "obscene" book could reasonably argue that they didn't know what was in it and also didn't know, or even have any reason to believe, that there was anything like that in their stock at all.
A large social site in 2022 knows there's some child porn in the mix somewhere. I suspect that the proponents are hoping that they can use that as enough scienter to get around Smith completely.
It's true that it's still just as impractical for a site to find every single bit of child porn as it would be for a bookseller to find every "obscene" book... but they can still push for the idea that the First Amendment allows them to require a site to do "everything reasonably possible". Not just because it's supposedly a "best practice". Not just because not doing it would risk not finding child porn. Because the site has actual knowledge that there's a problem on their particular system.
That means they can still try to demand scanning, whether via state law or via some other path. Scanning, of course, means no effective encryption. They will try to get those in through the back door even if they're not in the bill, and given the subject matter I'd be really worried that they'd win in court.
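To illustrate that point about scanning and effective encryption being mutually exclusive, here's a small sketch (it uses the third-party `cryptography` package; the content and key are stand-ins): once content is end-to-end encrypted, a relaying server has nothing meaningful to hash-match.

```python
# Why server-side hash scanning can't coexist with end-to-end
# encryption: the server sees only ciphertext, and keyed encryption
# with a random IV hash-matches nothing -- not even itself across two
# sends of the same file. Requires the `cryptography` package.
import hashlib
from cryptography.fernet import Fernet

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

plaintext = b"some user upload"  # stand-in content
key = Fernet.generate_key()      # held only by the communicating endpoints

ct1 = Fernet(key).encrypt(plaintext)
ct2 = Fernet(key).encrypt(plaintext)  # the same file, sent a second time

print(sha256_hex(plaintext) == sha256_hex(ct1))  # False: nothing to match
print(sha256_hex(ct1) == sha256_hex(ct2))        # False: random IV each time
```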
The right answer, of course, is "Yeah, I'm sure there's some child porn on every major site. Tough". But nobody seems to have the guts to say that.
Re:
I think readers have tried the "blue is a closet corporatist" angle plenty of times and all that's done is make him run away. If he's going to run away, I'm going to hit him where it hurts.
Enough projection to reach the moon...
Senators: 'We don't care about CSAM and can't be bothered to actually do anything about it so obviously internet platforms don't care about CSAM and can't be bothered to do anything about it either. We'll blame them for not getting rid of all of it and if anyone tries to point out that we haven't done anything but blame them we'll just call them Big Tech lobbyists/shill, it's brilliant!'
And as always, A vote for EARN IT is a vote for CSAM.
Re: Ignorant motherfucker says what?
I don’t know what’s funnier: that you of all people would have the gall to call anyone else crazy, or that you were seemingly born without a sense of humour.
Re: Spittle flecked keyboard
It’s been a while, bro. How was your last 5150 stay?
Re:
Yours.
Yours is the first speech I'd take down. If for nothing more than to drive home the point that your personal disagreement with someone else's speech is not grounds for compulsory censorship. You twat.
Re: Re:
Hey, Hamilton. Your boi Shiva Ayyadurai tried to have this entire site censored, so you're hardly in any position to complain.
Tip for you and blue, maybe next time wipe the Cowper's fluid off your face before you start posting.
Re: Re:
Asked to give a specific example of which speech you/Woody are worried about being removed, your first response is to say that if you had the power you would take someone else's speech down, simply out of spiteful hypocrisy.
It takes some real stupid to turn that question into an own-goal but damn if you didn't just manage it.
Re: Fallacies
A slug in the middle of the freeway is smarter than the average senator.
Objection: while there certainly are stupid people in Congress, it's a mistake to treat corruption as stupidity, as that plays right into their hands.
It’s kind of hilarious in a mundane way: Given the chance to detail the kind of speech you feel is under threat from government censorship, you pivot to directly threatening my First Amendment rights because…reasons.
Did you fix up a room for me in your head before or after I started living there rent-free? 😁
Re: Re: Re: Stay Ignorant my friend
"YOU LOOK CRAZY."
I've always said you were the best projectionist in the business.
Re: Re: Take the hint
That's pathetic even for you. I mean, we know you've got a full chubby for Stephen, but he's repeatedly told you no.
Re: There's more than one kind of knowledge
The right answer, of course, is "Yeah, I'm sure there's some child porn on every major site. Tough". But nobody seems to have the guts to say that.
That would be far too easy to use against the platform owner. A perhaps better phrasing would be 'Yeah, that content is on our platform despite our best efforts to keep it off; if you can come up with a better way to handle it other than telling us to try harder, we're all ears'.
That puts the onus on those claiming that more could be done to actually come up with a better way and allows whatever they come up with to be put under scrutiny for viability, with any blame easier to dump on them rather than the platform.
Re: Re: There's more than one kind of knowledge
I'm not saying the "platform owners" should say that. I'm saying EVERYBODY should say it.
The problem with asking them to come up with suggestions is that they WILL. And they will claim that their suggestions are workable when they're actually not. And they'll claim that their suggestions don't force disabling security measures when they actually do. And they'll claim that their suggestions don't put people at risk when they actually do.
They will never come up with any suggestions that don't have those problems, because that is not possible. However, every time you manage to argue away one suggestion, they'll reword things a bit, come up with a slightly modified one, and claim this one is the fix. They can do this forever.
... and their message to people who are not closely engaged with the issue will be that they've tried and tried to be reasonable and address the sane people's concerns, but the sane people are unreasonable and hate compromise and won't accept anything at all.
It is incredibly bad strategy to adopt any message that suggests there could be an acceptable way to do what those people want, because there is not.
Re: Re: There's more than one kind of knowledge
It still crosses the line. It's punishing a platform because it did not have "knowledge" of CSAM material but "should've known". (Plus, as many have pointed out, some still slips through despite their best efforts.)
As CDT noted: "Having a “ground to believe” something is not “knowing” it. Opening up a social media platform to liability when the platform merely has a ground to believe that CSAM is carried on the platform may “so strongly encourage” the provider to search for it that the search no longer becomes merely a private initiative."
So it seems reckless to punish a platform for CSAM it did not know about plus it feeds into the 4A issues as well.
Re: Re: Re:
That kinda explains a lot. They can’t imagine that some people might not abuse their authority to remove speech they don’t like if given the chance, so they assume everybody would.