For over a year now, Senator Mark Warner has been among the most vocal in saying that it's looking like Congress may need to regulate internet platforms. So it came as little surprise on Monday when he released a draft white paper listing out "potential policy proposals for [the] regulation of social media and technology firms." Unlike much of what comes out of Congress, it does appear that whoever put together this paper spent a fair bit of time thinking through a wide variety of ideas, recognizing that every option has potential consequences -- both positive and negative. That is, while there's a lot in the paper I don't agree with, it is (mostly) free of the hysterical moral panic that has surrounded debates like FOSTA/SESTA.
The paper lays out three major issues that it hopes to deal with:
Disinformation that undermines trust in our institutions, democracy, free press, and markets.
Consumer protection in the digital age.
Antitrust issues around large platforms and the impact they may have on competition and innovation.
All of these are issues worth discussing and thinking about carefully, though I fear that bad policy-making around any of them could actually serve to make other problems even worse. Indeed, it seems that most ideas around solving the first problem might create problems for the other two. Or solving the third problem could create problems for the first one. And so on. That is not to say that we should throw up our hands and automatically say "do nothing." But, we should tread carefully, because there are also an awful lot of special interests (a la FOSTA, and Articles 11 and 13 in the EU) who are looking at any regulation of the internet as an opportunity to remake the internet in a way that brings back gatekeeper power.
On a related note, we should also think carefully about how much of a problem each of the three items listed above actually is. I know that there are good reasons to be concerned about all three, and there are clear examples of how each one is a problem. But just how big a problem each one is, and whether or not that will remain the case, is important to examine. Mike Godwin has been writing an important series for us over the last few months (part 1, part 2 and part 3) which makes a compelling case that many of the problems that everyone is focused on may be the result of a bit of moral panic -- overreacting to a problem without recognizing how small it actually is.
We'll likely analyze the various policy proposals in the white paper over time, but let's focus in on the big one that everyone is talking about: the idea of opening up Section 230 again.
Make platforms liable for state-law torts (defamation, false light, public disclosure of private facts) for failure to take down deep fake or other manipulated audio/video content -- Due to Section 230 of the Communications Decency Act, internet intermediaries like social media platforms are immunized from state tort and criminal liability. However, the rise of technology like DeepFakes -- sophisticated image and audio tools that can generate fake audio or video files falsely depicting someone saying or doing something -- is poised to usher in an unprecedented wave of false and defamatory content, with state law-based torts (dignitary torts) potentially offering the only effective redress to victims. Dignitary torts such as defamation, invasion of privacy, false light, and public disclosure of private facts represent key mechanisms for victims to enjoin and deter sharing of this kind of content.
Currently the onus is on victims to exhaustively search for, and report, this content to platforms who frequently take months to respond and who are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future. Many victims describe a "whack-a-mole" situation. Even if a victim has successfully secured a judgment against the user who created the offending content, the content in question in many cases will be re-uploaded by other users. In economic terms, platforms represent "least-cost avoiders" of these harms; they are in the best place to identify and prevent this kind of content from being propagated on their platforms. Thus, a revision to Section 230 could provide the ability for users who have successfully proved that sharing of particular content by another user constituted a dignitary tort to give notice of this judgement to a platform; with this notice, platforms would be liable in instances where they did not prevent the content in question from being re-uploaded in the future -- a process made possible by existing perceptual hashing technology (e.g. the technology they use to identify and automatically take down child pornography). Any effort on this front would need to address the challenge of distinguishing true DeepFakes aimed at spreading disinformation from satire or other legitimate forms of entertainment and parody.
So this seems very carefully worded and structured. Specifically, it would appear to require first a judicial ruling on the legality of the content itself, and then would require platforms to avoid having that content re-uploaded, or face liability if it were. The good part of this proposal is the requirement that the content go through a full legal adjudication before a takedown would actually happen.
That said, there are some serious concerns about this. First of all, as we've documented many times here on Techdirt, there have been many, many examples of sketchy lawsuits filed solely to get a ruling on the books in order to take down perfectly legitimate content. If you don't remember the details, there were a few different variants on this, but the standard one was to file a John Doe lawsuit, then (almost immediately) claim to have identified the "John Doe," who admits to everything and agrees to a "settlement" admitting defamation. The "plaintiff" then sends this to the platforms as "proof" that the content should be taken down. If Warner's proposal goes through as is, you could see how that could become a lot more common, and you could see a series of similar tricks as well. Separately, it could potentially increase the number of sketchy and problematic defamation lawsuits filed in the hopes of getting content deleted.
One would hope that if Warner did push down this road, he would only do so in combination with a very strong federal anti-SLAPP law that would help deal with the inevitable flood of questionable defamation lawsuits that would come with it.
To his credit, Warner's white paper acknowledges at least some of the concerns that would come with this proposal:
Reforms to Section 230 are bound to elicit vigorous opposition, including from digital liberties groups and online technology providers. Opponents of revisions to Section 230 have claimed that the threat of liability will encourage online service providers to err on the side of content takedown, even in non-meritorious instances. Attempting to distinguish between true disinformation and legitimate satire could prove difficult. However, the requirement that plaintiffs successfully obtain court judgements that the content in question constitutes a dignitary tort -- which provides significantly more process than something like the Digital Millennium Copyright Act (DMCA) notice and takedown regime for copyright-infringing works -- may limit the potential for frivolous or adversarial reporting. Further, courts already must make distinctions between satire and defamation/libel.
This is all true, but it does not take into account how these bogus defamation cases may come into play. It also fails to recognize that some of this stuff is extremely context specific. The paper points to hashing technology like that used in spotting child pornography. But such content involves strict liability -- there are no circumstances under which it is considered legal. Broader speech is not like that. As the paper acknowledges in discussing whether or not a "deepfake" is satire, much of this is likely to be context specific. And so, even if certain content may represent a tort in one context, it might not in others. Yet under this hashing proposal, the content would be barred in all contexts.
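To make that last point concrete, here is a minimal sketch of the kind of perceptual-hash matching the white paper is gesturing at, written in Python using the Pillow and imagehash libraries and using still images for simplicity (real systems like PhotoDNA use more robust, proprietary hashes, and deepfake video would require frame-level or audio fingerprinting). The blocklist value, threshold, and file name below are all hypothetical. Note what the code cannot see: the match is made purely on perceptual similarity, with no awareness of whether the upload is news reporting, research, or parody.

```python
# Minimal sketch: perceptual-hash matching of uploads against a blocklist of
# previously adjudicated content. Hash value, threshold and filename are
# hypothetical; real deployments use more robust fingerprinting.
from PIL import Image
import imagehash

# Perceptual hashes of images a court has already found tortious (hypothetical value).
BLOCKLIST = [imagehash.hex_to_hash("d1c4f0e2b3a49587")]

# Maximum Hamming distance at which an upload counts as "the same" image.
MATCH_THRESHOLD = 8

def is_blocked(upload_path):
    """Return True if the uploaded image perceptually matches a blocklisted hash."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(upload_hash - blocked <= MATCH_THRESHOLD for blocked in BLOCKLIST)

# The check is context-blind: a matching image gets flagged whether it appears
# in a news story, an academic archive, or a parody account.
if __name__ == "__main__":
    print(is_blocked("new_upload.jpg"))
```

Even in this toy form, the context problem is visible: everything the system "knows" about an upload is contained in that one distance comparison.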
As a separate concern, this might also make it that much harder to study content like deepfakes in ways that might prove useful in recognizing and identifying faked content.
Again, this paper is not presented in the hysterical manner found in other attempts to regulate internet platforms, but it also does very little beyond a perfunctory "digital liberties groups might not like it" to explore the potential harms, risks and downsides to this kind of approach. One hopes that if Warner and others continue to pursue such a regulatory path, much more caution will go into the process.
So, yesterday the House Judiciary Committee did what the House Judiciary Committee seems to do best: hold a stupid, nonsensical, nearly fact-free "hearing" that serves as nothing more than an opportunity for elected members of Congress to demonstrate their ignorance of an important topic, while attempting to play to their base. This time, the topic was the content filtering practices of Facebook, Twitter and Google. Back in May there was actually a whole one-day conference in Washington DC on this topic. The Judiciary Committee would have been a lot better served attending that than holding this hearing. I'd recommend not wasting three hours of your life watching this thing, but if you must:
The shortest summary would be that some Republican members of Congress think that these websites censor too much conservative speech, and some Democratic members of Congress think that they don't censor enough other speech (including hoaxes and conspiracy theories)... and almost no one wants to admit that this is not even remotely an issue that Congress should be concerned about. There's a narrative that has been picked up by many who insist that social media platforms are unfairly censoring "conservatives." There is basically zero evidence to support this. Indeed, a thorough analysis of the data back in March by Nieman Lab and NewsWhip found that conservative-leaning sites get much, much, much more engagement on Facebook than liberal-leaning sites.
But, never let facts get in the way of a narrative. Since that seems to be the way many hyperpartisan sites (at either end of the spectrum) deal with these things, Congress is helping out. The only bit of sanity, perhaps bizarrely, came from Rep. Ted Lieu, who reminded everyone of the importance of free markets, free speech and the fact that private platforms get to decide how they manage their own services. Considering that Republicans often like to claim the mantle of being the "small, limited government" party that wants the government's hands out of business regulation, the fact that most of the hearing involved Republicans screaming for regulation of internet platforms while a Democrat reminded everyone about the importance of a free market, capitalism and free speech made it quite a hearing. Lieu's remarks were some of the rare moments of sanity during the hearing -- including defending Facebook leaving Alex Jones' conspiracy theories on its site. Let's start with that high point before we dive into the awfulness. His comments come at about 2 hours and 10 minutes into the video:
... we're having this ridiculous hearing on the content of speech of private sector companies. It's stupid because there's this thing called the First Amendment. We can't regulate content! The only thing worse than an Alex Jones video is the government trying to tell Google... to prevent people from watching the Alex Jones video. We can't even do it if we tried. We can't even do any legislation out of this committee. And we're having this ridiculous second installment hearing after the first hearing about Diamond and Silk not getting enough likes on Facebook.
He then went on to ask questions "so the American public understands what a dumb hearing this is." And those questions -- again -- seemed like the kind you'd expect from supposedly "free market" conservatives. Specifically he asked the companies if they were private companies aiming to maximize profits for shareholders. And he wasn't doing that to show that the companies were evil; he was doing it to show that that's how the free market works. He followed up with this:
I noticed all of you talked about your own internal rules. Because that's what this should be about. You all get to come up with your own rules. But not because government tells you what to do. Or because government says you have to rule this way or that way. And the whole notion that somehow we should be interfering with these platforms from a legislative, governmental point of view is an anathema to the First Amendment. And really it's about the marketplace of ideas.
Kudos to Rep. Lieu. This is the kind of speech that you'd normally expect to hear from a "small government" conservative who talks about respecting the Constitution. But, in this case, it's a Democrat. And it's shameful that others (on both sides of the aisle) weren't making the same point. Instead, there was a ton of pure nonsense spewed from the Republicans at the hearing. It's hard to fathom that the following statements were made by people we've actually elected to our legislative body. There were so many dumb statements made that it's difficult to pick out just a few.
Let's start with Rep. Steve King, who has made quite a name for himself saying and repeating bigoted nonsense. Starting at about an hour and five minutes in the video, King seemed particularly concerned about traffic to Gateway Pundit, a site famous for trafficking in utter nonsense.
It's a matter of Congressional record that Gateway Pundit, Mr. Jim Hoft, has introduced information into the record that in the span of time between 2016 and 2018, he saw his Facebook traffic cut by 54%. Could you render an explanation to that?
Um... what? How the hell is the traffic that a single site gets of any concern to Congress whatsoever? And, as we were just discussing recently, traffic to lots of news sites from Facebook has dropped massively as Facebook has de-prioritized news. In that post, we pointed out that Slate was self-reporting a drop in Facebook traffic over that same period of time of 87%. Based on that, why isn't King asking about Slate's traffic dropping? Perhaps because Gateway Pundit publishes the kind of nonsense King supports and Slate points out that King is a bigot?
And... isn't that, again, kind of the point of the First Amendment? To protect news sites from having Congress play favorites?
Incredibly, King then concludes his time by first claiming he's all for free speech and free enterprise, and then wondering about turning social media sites into regulated utilities.
I'm all for freedom of speech and free enterprise and for competition and finding a way that we can have competition itself that does its own regulation, so government doesn't have to, but if this gets further out of hand, it appears to me that Section 230 needs to be reviewed, and one of the discussions that I'm hearing is 'what about converting the large behemoth organizations that we're talking about here into public utilities.'
Are we living in an upside down world? A Democrat is praising the free market, profits and free speech, and a Republican is advocating for limiting free speech and in favor of turning some of the most successful US companies into public utilities? What is even going on here?
Around an hour and 18 minutes, we get our old friend Rep. Louie Gohmert, who has a fairly long and extensive history of making the dumbest statements possible concerning technology issues. And he lived down to his usual reputation in this hearing as well. It starts off with him trying to play down the issue of Russian interference in elections, by claiming (?!?) that the Russians helped Truman get elected, and then claiming that Russians had helped basically every Democratic President get elected in the past 70 years. He then spent a long time complaining that the platforms wouldn't tell him if Chinese or North Korean intelligence services had also used their platforms. Remember, these companies were asked to come and testify specifically about Russian use of their platforms to interfere with the election, and Gohmert stepped in with this insane "what about other countries, huh?" argument:
Gohmert: I need to ask each of you. You've been asked specifically about Russian use of your platforms. But did you ever find any indication of use of your platform, utilized by the Chinese, North Korea, or any other foreign country intelligence or agency of that country. First, Ms. Bickert?
Bickert/Facebook: I would note, Congressman, that we're not in North Korea or China. In terms of whether we've seen attacks on our services, we do have -- we are, of course, a big target -- we do have a robust security team that works...
Gohmert: Well, but that's not my question. It's just a very direct question. Have you found... You don't have to be in North Korea to be North Korean Intelligence and use... We have foreign government intelligence agencies IN THIS COUNTRY. So have... It seems to me you were each a little bit vague about "oh yes, we found hundreds" or whatever. I'm asking specifically, were any of those other countries besides Russia that were using your platform inappropriately? It should be a yes or no.
Actually, no, it shouldn't be a yes or no. That's a dumb and misleading question for a whole long list of reasons. Of course, lots of other intelligence agencies are using Facebook, because of course they are. But, the entire point of this line of questioning seems to be Gohmert trying to play down Russian use of the platform, which is... odd. Especially after he started out by praising the fact that maybe the Russians might help "our side" get elected going forward.
Bickert: I don't have the details. I know we work to detect and repel attacks...
Gohmert: I know that. But were any of them foreign entities other than Russia?
Bickert: I can certainly follow up with you on that.
Gohmert: SO YOU DON'T KNOW?!? You sure seemed anxious to answer the Democrats questions about RUSSIA's influence. And you don't really know of all the groups that inappropriately used your platform? You don't know which were Russians and which were other foreign entities?
No, that's not what she's saying at all. She's pretty clearly saying that this hearing was specifically about Russian influence and that's what she was prepared to testify on. She didn't say that Facebook can't tell Russians from other entities, just that the other entities aren't the ones accused of messing with the election and thus aren't all that relevant right now. But that's quite a deflection attempt by Gohmert.
Let's move on to Rep. Tom Marino at about an hour and a half into the video. Marino seems to have a fairly bizarre understanding of the law as it concerns defamation. He focuses on the guy from Twitter, Nick Pickles, and starts out by reading a definition of "libel." Then he asks:
Have any of you considered libel? Or do you think you are immune from it?
This is an incredibly stupid question. Twitter is clearly not immune from libel. Marino's line of questioning is an attempt to attack CDA 230, which provides immunity to Twitter from liability for defamatory statements made by its users. This is an important distinction that Marino conveniently ignores as he continues to bug Pickles.
Pickles: We have clear rules that governs what happens on Twitter. Some of those behaviors are deplorable and we want to remove them immediately... So, terrorist content is one example, where we now detect 95% of the terrorist accounts we remove...
Marino: Okay, I understand that sir. But how about... we in Congress, we put up with it all the time. I know we're public officials, same with people in the movies... but do you specifically look for and address... republication can be used in a defamation case. Do you look at libel and defamation content?
I don't even know what that means. Do you look at libel content? What? How does Twitter know if something is libelous? Especially against public officials? How is Twitter supposed to make that judgment when that's what courts are there to figure out? And, for what it's worth, Twitter has been known to abide by court rulings on defamatory speech in deciding to take down that content, but Marino seems to be asking if they make an independent judgment outside of the courts of what's libelous. Which is both crazy and impossible. Pickles makes a valiant effort in response, noting how Twitter focuses on its rules -- which is all that it's required to do -- but Marino clearly seems to want to attack CDA 230 and magically make Twitter liable for libelous content on its platform. After Pickles again explains that it focuses on its rules, rather than making judicial rulings that it cannot make, Marino puts on a dumb smirk and makes another dumb statement:
With all due respect, I've heard you focus on your rules about 32 times. DO. YOU. LOOK. FOR. LIBEL. OR. DEFAMATION. IN. YOUR. COMPANY'S. OPINION?
You can't "look for libel or defamation" like that. That's not how it works. Marino is a lawyer. He should know this. The Facebook and YouTube representatives neatly sidestep Marino's silly line of questioning by pointing out that when informed of legal rulings determining "illegal" speech, they take it down. Marino doesn't even seem to notice this very specific distinction and asks "where do you draw the line?"
At an hour and forty minutes, we have everyone's favorite, Rep. Lamar Smith, author of SOPA back in the day. He spews more utter nonsense claiming conservatives have been more negatively impacted by the moves of these social media companies, and then (bizarrely) argues that Google employees forcing the company not to help surveillance activity is somehow an attack on conservatives. Excuse me? Conservatives don't support the 4th Amendment any more? Say what? But the real craziness is this line:
Google has also deleted or blocked references to Jesus, Chick-Fil-A and the Catholic religion.
I'm going to call time out here and note [citation needed] on that one, Smith. Google pretty clearly shows me results on all three of those things. I've been trying to figure out what the hell he's referring to, and I'm guessing that Smith -- in his usual Smithian nonsensical way -- is confusing Google for Facebook, and Facebook's bad filter that initially blocked a page about "Chick-fil-A Appreciation Day," and some Catholic church pages. The "Jesus" blocking was also Facebook and was in reference to an ad for a Catholic university.
All of these examples were not, as Smith implies, evidence of "liberal bias" on the part of Facebook, but rather evidence of why it's so problematic that governments are putting so much pressure on Facebook to magically filter out all of the bad stuff. That's not possible without making mistakes. And what happens is that you set up guidelines and those guidelines are then handed to people who don't have nearly enough time to understand the context, and sometimes they make mistakes. It's not bias. It's the nature of trying to moderate millions of pieces of content every damn day, because if they didn't, these same idiots in Congress would be screaming at them about how they're letting the bad content live on. I mean, it's doubly ridiculous for Smith to use the Jesus example, as even the guy who bought the ad, the university's web communications director, specifically said that he didn't believe it had anything to do with bias, but was just a bad decision by an algorithm or a low-level staffer.
Finally (and there are more, but damn, this post is getting way too long) we get to Rep. Matt Gaetz. At around an hour and 55 minutes into the hearing, he suddenly decides to weigh in that the First Amendment and CDA 230 are somehow in conflict, in another bizarre exchange between Gaetz and Twitter's Pickles.
Gaetz: Is it your testimony or is it your viewpoint today that Twitter is an interactive computer service pursuant to Section 230 sub c(1).
Pickles: I'm not a lawyer, so I won't want to speak to that. But as I understand, under Section 230, we are protected by that, yes.
Gaetz: So Section 230 covers you, and that section says "no provider of an interactive computer service shall be treated as the publisher or speaker of any information provided by another"... is it your contention that Twitter enjoys a First Amendment right under speech, while at the same time enjoying Section 230 rights?
Pickles: Well, I think we've discussed the way the First Amendment interacts with our companies. As private companies we enforce our rules, and our rules prohibit a range of activities.
Gaetz: I'm not asking about your rules. I'm asking about whether or not you believe you have First Amendment rights. You either do or you do not.
Pickles: I'd like to follow up on that, as someone who is not a lawyer... I think it's very important...
Gaetz: Well, you're the senior public policy official for Twitter before us and you will not answer the question whether or not you believe your company enjoys rights under the First Amendment?
Pickles: Well, I believe we do, but I would like to confirm with colleagues...
Gaetz: So what I want to understand is, if you say "I enjoy rights under the First Amendment" and "I'm covered by Section 230" and Section 230 itself says "no provider shall be considered the speaker" do you see the tension that creates?
There is no tension there. The only tension is between the molecules in Gaetz's brain that seem to think this nonsensical line of argument makes any sense at all. There is no conflict. First, yes, it's obvious that Twitter is clearly protected by both the First Amendment and CDA 230. That's been established by dozens of court rulings with not a single ruling ever holding otherwise. Second, the "tension" that Gaetz sees is purely a figment of his own misreading of the law. The "no provider shall be considered a speaker" part, read in actual context (as Gaetz did earlier), does not say that platforms are not speakers. It says that they are not considered the speaker of other people's speech. In fact, this helps protect free speech by enabling internet platforms to host any speech without facing liability for that speech.
That helps protect the First Amendment by ensuring that any liability is on the speaker and not on the tool they use to distribute that speech. But Twitter has its own First Amendment rights to determine what speech it decides to keep on its site -- and which speech it decides not to allow. Gaetz then, ridiculously, tries to claim that Pickles' response to that nonsensical line of questioning is somehow in conflict with what Twitter's lawyers have said in the silly Jared Taylor lawsuit. Gaetz asks Pickles if Twitter could kick someone off the platform "for being a woman or being gay." Pickles points out that that is not against Twitter's rules... and Gaetz points out that in the Taylor case, when asked the same question, Twitter's lawyers stated (1) that Twitter has the right to do so but (2) never would.
Again, both Pickles and Twitter's lawyers are correct. They do have that right (assuming it's not a violation of discrimination laws) but of course they wouldn't do that. Pickles wasn't denying that. He was pointing out that the hypothetical is silly because that's not something Twitter would do. Twitter's lawyers in the case were, correctly, pointing out that it would have the right to do such a nonsensical thing if it chose to do so, while also making it clear it would never do that. Again, that's not in conflict, but Gaetz acts as if he's "caught" Twitter in some big admission.
Gaetz then falsely claims that Pickles is misrepresenting Twitter's position:
Right but it is not in service of transparency if Twitter sends executives to Congress to say one thing -- that you would not have the right to engage in that conduct -- and then your lawyers in litigation say precisely the opposite.
Except that's not what happened at all. Pickles and the lawyers agreed. At no point did Pickles say that Twitter did not have "the right" to kick people off its platform for any reason. He just noted that it was not a part of their policy to do so, nor would it ever be. That's entirely consistent with what Twitter's lawyers said in the Taylor case. This is Gaetz making a complete ass out of himself in completely misrepresenting the law, the constitution and what Twitter said both in the hearing and in the courthouse.
Seriously, people, we need to elect better Representatives to Congress. This is embarrassing.
In 2016, Techdirt wrote about a troubling case, Hassell v. Bird, in which a court issued an injunction telling Yelp to delete a review after a lawyer won a default judgment in a defamation case. The court ignored that Section 230 of the CDA says that platforms like Yelp cannot be held liable for the content of third parties (and thus can't be legally mandated to remove it), and didn't seem to care that Yelp wasn't even a party in the case.
The good news is that Yelp won its appeal of the injunction. The bad news, though, is that it barely won, and the relatively elegant, cogent opinion finding that Section 230 prevented the injunction is tempered in its effect by only being a plurality decision: victorious in its ultimate holding only because of a concurring vote on different grounds that provided a less-than-full-throated endorsement of the plurality's conclusion.
This case began when someone, who the plaintiff Hassell believes to be Bird, had posted a critical review of the Hassell law firm on Yelp that Hassell claimed to be defamatory. Hassell sued Bird and ended up with a default judgment agreeing that it was defamatory. Hassell also got the trial court in San Francisco to issue an injunction ordering Yelp to delete the offending posts. Yelp appealed the injunction on several grounds, including that it never had a chance to be heard by the court before the judgment issued against it, and that Section 230 should have barred the injunction in any case. After losing at the California Court of Appeals, the California Supreme Court agreed to take up its case, and this week it issued its ruling.
The plurality opinion, which garnered three votes, found it sufficient to invalidate the injunction entirely on Section 230 grounds without having to reach any due process consideration. It cited plenty of prior cases to support its Section 230 analysis, but spent some time discussing the holdings in three in particular: Zeran v. AOL, Kathleen R. v. City of Livermore, and Barrett v. Rosenthal [p. 14-20]. Zeran was an early case construing Section 230 that set forth why it was so important for speech and ecommerce that platforms have this statutory protection for liability arising from their users' content. Barrett v. Rosenthal was a subsequent California Supreme Court case, which similarly construed it. And Kathleen R. was a case where a California Court found that Section 230 precluded injunction relief. These and other cases underpinned the plurality's opinion.
It also made several other points in support of its Section 230 finding. One was the observation that if Section 230 couldn't prevent the non-party injunction against Yelp, it would just prompt litigants to game the system by not even bothering to name platforms as defendants, since they'd have better luck getting injunctions against them if they did NOT try to sue them than if they did.
The question here is whether a different result should obtain because plaintiffs made the tactical decision not to name Yelp as a defendant. Put another way, we must decide whether plaintiffs’ litigation strategy allows them to accomplish indirectly what Congress has clearly forbidden them to achieve directly. We believe the answer is no. [p. 22]
And part of the reason the answer is no is that Section 230 was never intended to only limit damages liability against a platform; it was meant to prevent injunctions as well. [p. 26-27].
An injunction like the removal order plaintiffs obtained can impose substantial burdens on an Internet intermediary. Even if it would be mechanically simple to implement such an order, compliance still could interfere with and undermine the viability of an online platform. (See Noah v. AOL Time Warner, Inc., supra, 261 F.Supp.2d at p. 540 [“in some circumstances injunctive relief will be at least as burdensome to the service provider as damages, and is typically more intrusive”].) Furthermore, as this case illustrates, a seemingly straightforward removal order can generate substantial litigation over matters such as its validity or scope, or the manner in which it is implemented. (See Barrett, supra, 40 Cal.4th at p. 57.) Section 230 allows these litigation burdens to be imposed upon the originators of online speech. But the unique position of Internet intermediaries convinced Congress to spare republishers of online content, in a situation such as the one here, from this sort of ongoing entanglement with the courts. [p. 28]
And it had to prevent injunctions, in order for platforms and the online speech they facilitate to be protected:
Perhaps the dissenters’ greatest error is that they fail to fully grasp how plaintiffs’ maneuver, if accepted, could subvert a statutory scheme intended to promote online discourse and industry self-regulation. What plaintiffs did in attempting to deprive Yelp of immunity was creative, but it was not difficult. If plaintiffs’ approach were recognized as legitimate, in the future other plaintiffs could be expected to file lawsuits pressing a broad array of demands for injunctive relief against compliant or default-prone original sources of allegedly tortious online content. Injunctions entered incident to the entry of judgments in these cases then would be interposed against providers or users of interactive computer services who could not be sued directly, due to section 230 immunity. As evinced by the injunction sought in Kathleen R., supra, 87 Cal.App.4th 684, which demanded nothing less than control over what local library patrons could view on the Internet (id., at p. 691), the extension of injunctions to these otherwise immunized nonparties would be particularly conducive to stifling, skewing, or otherwise manipulating online discourse — and in ways that go far beyond the deletion of libelous material from the Internet. Congress did not intend this result, any more than it intended that Internet intermediaries be bankrupted by damages imposed through lawsuits attacking what are, at their core, only decisions regarding the publication of third party content. [p. 30-31]
Unfortunately the rest of the Court was not as amenable to the plurality's application of Section 230 as a defense against the injunction. Even the concurrence by Justice Kruger, which provided the fourth vote in favor of overturning the injunction, did so, as Eric Goldman observed, with potentially some qualification of the Section 230 analysis ("I express no view on how section 230 might apply to a different request for injunctive relief based on different justifications."). [concurrence p.1]. But both the concurrence and the plurality recognized that there were problems with trying to hold a non-party platform like Yelp responsible for complying with the injunction to take down content that had also been directed to the defendant Bird. For the plurality it was a straightforward violation of Section 230.
[I]t is also true that as a general rule, when an injunction has been obtained, certain nonparties may be required to comply with its terms. But this principle does not supplant the inquiry that section 230(c)(1) requires. Parties and nonparties alike may have the responsibility to comply with court orders, including injunctions. But an order that treats an Internet intermediary “as the publisher or speaker of any information provided by another information content provider” nevertheless falls within the parameters of section 230(c)(1). In substance, Yelp is being held to account for nothing more than its ongoing decision to publish the challenged reviews. Despite plaintiffs’ generic description of the obligation they would impose on Yelp, in this case this duty is squarely derived from “the mere existence of the very relationship that Congress immunized from suit.” [p. 24]
For the concurrence, the platform's relationship with the defendant was too attenuated, and not the sort of agency relationship where it may be proper to hold a third party responsible for complying with an injunction against another.
Plaintiffs, as well as [dissenting] Justice Liu, argue that the injunction naming Yelp is valid because it merely makes explicit that Yelp, as an entity “through” whom Bird acts, is obligated to carry out the injunction on her behalf. But the trial court made no finding that Bird acts, or has ever acted, “through” Yelp in the sense relevant under Berger, nor does the record contain any such indication; we have no facts before us to suggest that Yelp is Bird’s “agent” or “servant.” It is true and undisputed, as plaintiffs and Justice Liu emphasize, that Bird’s statements were posted on Yelp’s website with Yelp’s permission. And as a practical matter, Yelp has the technological ability to remove the reviews from the site. These facts might well add up (at least absent section 230) to a good argument for filing suit against Yelp and seeking an injunctive remedy in the ordinary course of litigation. But the question presented here is whether these facts establish the sort of legal identity between Bird and Yelp that would justify binding Yelp, as a nonparty, to the outcome of litigation in which it had no meaningful opportunity to participate. Without more, I do not see how they could.
[concurrence p. 7]
The plurality also rejected the theory raised by the trial court and pushed by the dissent that the platform had somehow "aided and abetted" the defamatory speech. If this argument could prevail, Section 230 would become a nullity, since every platform enables user expression, and not all that expression is necessarily entirely legal.
In his dissent, Justice Cuéllar argues that even if the injunction cannot on its face command Yelp to remove the reviews, the removal order nevertheless could run to Yelp through Bird under an aiding and abetting theory premised on conduct that remains inherently that of a publisher. (See dis. opn. of Cuéllar, J., post, at pp. 3, 20-22, 34-37.) We disagree. As applied to such behavior, Justice Cuéllar’s approach would simply substitute one end-run around section 230 immunity for another. [p. 25]
The dissenting opinions, on the other hand, were very focused on the plight of the plaintiff who had apparently been injured by these purportedly defamatory posts. (I say "purportedly," because although the Supreme Court decision does not spend much time on this issue, it's worth noting that the conclusion of the posts' defamatory nature was drawn from an ex parte default proceeding at the trial court where no defense was supplied. It is certainly easier for a court to accept a plaintiff's characterization of language as defamatory when there is no one present – even Yelp was left out – to show that it is not.) As we've seen in cases like Garcia v. Google, the operation of Section 230 can make it difficult for a legitimately aggrieved plaintiff to obtain a remedy against someone who has defamed them. But it isn't necessarily impossible, and the plurality reminded everyone that Hassell was not without any recourse:
On this last point, we observe that plaintiffs still have powerful, if uninvoked, remedies available to them. Our decision today leaves plaintiffs’ judgment intact insofar as it imposes obligations on Bird. Even though neither plaintiffs nor Bird can force Yelp to remove the challenged reviews, the judgment requires Bird to undertake, at a minimum, reasonable efforts to secure the removal of her posts. A failure to comply with a lawful court order is a form of civil contempt (Code Civ. Proc., §1209, subd. (a)(5)), the consequences of which can include imprisonment (see In re Young (1995) 9 Cal.4th 1052, 1054). Much of the dissents’ rhetoric regarding the perceived injustice of today’s decision assumes that plaintiffs’ remaining remedies will be ineffective. One might more readily conclude that the prospect of contempt sanctions would resonate with a party who, although not appearing below, has now taken the step of filing an amicus curiae brief with this court.
[p. 32]
Perhaps this is the most important passage in the whole opinion. It's become really popular especially as of late to try to make platforms responsible for everything their users do. It's good to have courts remind us that it's the people who do the things who really should be held accountable instead.
Stanford's Daphne Keller is one of the world's foremost experts on intermediary liability protections and someone we've mentioned on the website many times in the past (and have had her on the podcast a few times as well). She's just published a fantastic paper presenting lessons from making internet platforms liable for the speech of their users. As she makes clear, she is not arguing that platforms should do no moderation at all. That's a silly idea that no one with any real understanding of these issues considers reasonable. The concern is that as many people (including regulators) keep pushing to pin liability on internet companies for the activities of their users, it creates some pretty damaging side effects. Specifically, the paper details how it harms speech, makes us less safe, and harms the innovation economy. It's actually kind of hard to see what the benefit side is on this particular cost-benefit equation.
As the paper notes, it's quite notable how the demands from people about what platforms should do keep changing. People keep demanding that certain content gets removed, while others freak out that too much content is being removed. And sometimes it's the same people (they want the "bad" stuff -- i.e., stuff they don't like -- removed, but get really angry when the stuff they do like is removed). Perhaps even more importantly, the questions about why certain content should come down are the same questions that often involve long and complex court cases, with lots of nuance and detailed arguments going back and forth. And yet, many people seem to think that private companies are somehow equipped to credibly replicate that entire judicial process, without the time, knowledge or resources to do so:
As a society, we are far from consensus about legal or social speech rules. There are still enough novel and disputed questions surrounding even long-standing legal doctrines, like copyright and defamation, to keep law firms in business. If democratic processes and court rulings leave us with such unclear guidance, we cannot reasonably expect private platforms to do much better. However they interpret the law, and whatever other ethical rules they set, the outcome will be wrong by many people’s standards.
Keller then looked at a variety of examples involving intermediary liability to see what the evidence says will happen if we legally press private internet platforms into the role of speech police. It doesn't look good. Free speech will suffer greatly:
The first cost of strict platform removal obligations is to internet users’ free expression rights. We should expect over-removal to be increasingly common under laws that ratchet up platforms’ incentives to err on the side of taking things down. Germany’s new NetzDG law, for example, threatens platforms with fines of up to €50 million for failure to remove “obviously” unlawful content within twenty-four hours’ notice. This has already led to embarrassing mistakes. Twitter suspended a German satirical magazine for mocking a politician, and Facebook took down a photo of a bikini top artfully draped over a double speed bump sign. We cannot know what other unnecessary deletions have passed unnoticed.
From there, the paper explores the issue of security. Attempts to stifle terrorists' use of online services by pressuring platforms to remove terrorist content may seem like a good idea (assuming we agree that terrorism is bad), but the actual impact goes way beyond just having certain content removed. And the paper looks at what the real-world impact of these programs has been in the realm of trying to "counter violent extremism."
The second cost I will discuss is to security. Online content removal is only one of many tools experts have identified for fighting terrorism. Singular focus on the internet, and overreliance on content purges as tools against real-world violence, may miss out on or even undermine other interventions and policing efforts.
The cost-benefit analysis behind CVE campaigns holds that we must accept certain downsides because the upside—preventing terrorist attacks—is so crucial. I will argue that the upsides of these campaigns are unclear at best, and their downsides are significant. Over-removal drives extremists into echo chambers in darker corners of the internet, chills important public conversations, and may silence moderate voices. It also builds mistrust and anger among entire communities. Platforms straining to go “faster and further” in taking down Islamist extremist content in particular will systematically and unfairly burden innocent internet users who happened to be speaking Arabic, discussing Middle Eastern politics, or talking about Islam. Such policies add fuel to existing frustrations with governments that enforce these policies, or platforms that appear to act as state proxies. Lawmakers engaged in serious calculations about ways to counter real-world violence—not just online speech—need to factor in these unintended consequences if they are to set wise policies.
Finally, the paper looks at the impact on innovation and the economy and, again, notes that putting liability on platforms for user speech can have profound negative impacts.
The third cost is to the economy. There is a reason why the technology-driven economic boom of recent decades happened in the United States. As publications with titles like “How Law Made Silicon Valley” point out, our platform liability laws had a lot to do with it. These laws also affect the economic health of ordinary businesses that find customers through internet platforms—which, in the age of Yelp, Grubhub, and eBay, could be almost any business. Small commercial operations are especially vulnerable when intermediary liability laws encourage over-removal, because unscrupulous rivals routinely misuse notice and takedown to target their competitors.
The entire paper weighs in at a neat 44 pages and it's chock full of useful information and analysis on this very important question. It should be required reading for anyone who thinks that there are easy answers to the question of what to do about "bad" content online, and it highlights that we actually have a lot of data and evidence to answer these questions, even as many legislators seem to be regulating based on how they "think" the world works, rather than how it actually works.
Current attitudes toward intermediary liability, particularly in Europe, verge on “regulate first, ask questions later.” I have suggested here that some of the most important questions that should inform policy in this area already have answers. We have twenty years of experience to tell us how intermediary liability laws affect, not just platforms themselves, but the general public that relies on them. We also have valuable analysis and sources of law from pre-internet sources, like the Supreme Court bookstore cases. The internet raises new issues in many areas—from competition to privacy to free expression—but none are as novel as we are sometimes told. Lawmakers and courts are not drafting on a blank slate for any of them.
Demands for platforms to get rid of all content in a particular category, such as “extremism,” do not translate to meaningful policy making—unless the policy is a shotgun approach to online speech, taking down the good with the bad. To “go further and faster” in eliminating prohibited material, platforms can only adopt actual standards (more or less clear, and more or less speech-protective) about the content they will allow, and establish procedures (more or less fair to users, and more or less cumbersome for companies) for enforcing them.
On internet speech platforms, just like anywhere else, only implementable things happen. To make sound policy, we must take account of what real-world implementation will look like. This includes being realistic about the capabilities of technical filters and about the motivations and likely choices of platforms that review user content under threat of liability.
This is an important contribution to the discussion, and highly recommended. Go check it out.
A few weeks ago we, and others, filed an amicus brief in support of Airbnb and Homeaway at the Ninth Circuit. The basic point we made there is that Section 230 applies to all sorts of platforms hosting all sorts of user expression, including transactional content offering to rent or sell something, and local jurisdictions don't get to try to impose liability on them anyway just because they don't like the effects of those transactions. It's a point that is often forgotten in Section 230 litigation, and so last week the Copia Institute, joined by EFF, filed an amicus brief at the Wisconsin Supreme Court reminding them of the statute's broad application and why that breadth is so important for the preservation of online free speech.
The problem is that in Daniels v. Armslist, the Wisconsin Court of Appeals had ignored twenty-plus years of prior precedent affirming this principle in deciding otherwise. We therefore filed this brief to support Armslist in urging the Wisconsin Supreme Court to review the Court of Appeals decision.
As in so many cases involving Section 230, the case in question followed an awful tragedy: someone barred from owning a gun bought one through the online marketplace run by Armslist and then shot his estranged partner. The partner's estate sued Armslist for negligence in having constructed a site where dangerous people could buy guns. As we acknowledged up front:
Tragic events like the one at the heart of this case often challenge the proper adjudication of litigation brought against Internet platforms. Justice would seem to call for a remedy, and if it appears that some twenty-year old federal statute is all that prevents a worthy plaintiff from obtaining one, it is tempting for courts to ignore it in order to find a way to give them that remedy.
Nonetheless, there was more at stake than just the plaintiff's interest. This case might look like a gun policy case, or a negligence case, but, as with Airbnb/Homeaway, it was really a speech case. Laws like Section 230 that help protect speech are ignored at our peril, because ignoring them imperils all the important expression they exist to protect.
The reason it was a speech case is that, as in the Airbnb/Homeaway case where someone was using the platform to say, "I have a home to rent," here someone had used the Armslist platform to say, "I have a gun to sell." Because these platforms only facilitate these narrow topics of expression it's easy to lose sight of what's getting expressed and instead focus on the consequences of the expression. But that's the problem with these cases: someone is trying to hold an Internet platform liable for the consequences of what someone said, and that's exactly what Section 230 forbids.
Tempting though it may be to try to find exceptions to that critical statutory protection, it is important to hold the line because Section 230 only works when it can always work. It wouldn't accomplish anything if platforms were only protected from certain forms of liability but still had to monitor all their users' content anyway. Congress recognized that such monitoring would be an impossible task and crippling to platforms' ability to remain available to facilitate users' speech. A major reason Section 230 exists is to protect speech from the corrosive effects these monitoring burdens would have on it. It is also why Section 230 does not let state and local jurisdictions impose their own monitoring burdens through the threat of liability, as the Wisconsin appeals court decision would do.
Apparently having some extra free time on his schedule, Craig Brittain has sued Twitter, pro se of course. It's a fun read, and extra amusing as it comes just days after Chuck Johnson's lawsuit against Twitter on sorta similar grounds was tentatively tossed out of court. At least Johnson had an actual lawyer file his suit. Brittain's lawsuit, of course, cites the Packingham decision that a bunch of people have been misrepresenting to claim that it says social media can be considered a public forum. Brittain combines his misrepresentation of that opinion with a misrepresentation of the recent decision that President Trump cannot block followers, in order to claim that Twitter can't kick off any political candidate.
This lawsuit implicates Twitter's responsibility as a public forum as recently ruled in Knight First Amendment Institute v. Trump et. al... where the honorable Naomi Reice Buchwald, Judge for the Southern District of New York, ruled that President Donald J. Trump must unblock all Twitter users, regardless of the content of their messaging, and also ruled that President Trump's Twitter space is an interactive public forum. The ruling also implicates that Twitter itself is a public forum space under the US Constitution, and thus all First Amendment Protections (must) apply to its use.
Yeah, that's not what that ruling said at all, but, I guess you get points for trying?
In regards to Knight First Amendment Center v. Trump, Defendant must reasonably provide access to that public forum space by unsuspending all users who are followers of President Donald J. Trump or any other public official or candidate, as well as any/all public candidates and officials, whether they are supporters, critics, or neutral to the points of view of the President of the United States or any other candidate or elected official.
Likewise, being as President Donald J. Trump is one of many politicians whose tweets create such a public space, Twitter must extend that same public forum to followers and critics of all US politicians and subsequently all journalistic outlets, in order to protect two-way freedom of speech established by the First Amendment.
Two-way freedom of speech? That's a new one. I'm sure the court will just accept this totally made-up, nonsensical concept, especially right after you totally misrepresent the findings of the Knight Center ruling (in which Twitter wasn't even a party and in which Twitter was not required to do anything). The lawsuit also contains many paragraphs of meaningless nonsense about how Twitter is not a neutral platform, which... has no impact on anything (even if some people -- including some actual Senators -- want to pretend otherwise).
Also, this (capitalization in the original):
The loss of the Accounts is a Crippling Blow to Plaintiffs and Others, and presents a Chilling Effect to the First Amendment and other Constitutional Rights, where a Crippling Blow shall be defined as 'an unconscionable and substantial loss with no defined legal remedy or recourse', and a Chilling Effect shall be defined as 'an action which suppresses similar/related rights including but not limited to the First Amendment rights to access and utilize a public forum for speech as well as the desire of other users to speak out against similar actions, for fear of action(s) such as censorship, suspension/ban, shadowban or downranking being taken against them as well'.
That's all one sentence. Try to say it in a single breath. It's fun. Anyway, this is also not how law works, and clearly there's little need to go step by step over how wrong this is... but I'll just note that when you define your own made up tort as one "with no defined legal remedy or recourse" you've basically just admitted that your entire lawsuit is bullshit.
Not to be missed is Brittain's discussion of who he is in the "Parties" section, in which he claims that "he has committed himself to reinventing and rehabilitating his life and image" and that Twitter was a necessary component to this. He leaves out the many people who he attacked (disclaimer: including me) with his account(s) over the years. Similarly, he tries to paint himself as "lifelong champion of free and even dangerous speech as a natural right" while (yet again) ignoring his repeated attempts to abuse the law to try to silence reports of his own history running a revenge porn site, setting up a fake lawyer to demand payments to get pictures off of that site, and the eventual FTC settlement concerning that whole effort. But really, it's the next part that's the most laugh inducing:
His accumulated total followers (over 400,000) have made him the most popular anarchist/libertarian thinker in world history, where anarchism is defined as 'self-government by peaceful and voluntary interaction and exchange', governed by the Non-Aggression Principle, defined as 'to not harm anyone or their property'.
Got that? He is the most popular anarchist/libertarian thinker in world history. Because he had 400,000 followers (and that's leaving aside news reports that claimed that almost half of Brittain's followers were fake). And make sure you don't miss out on the fact that Brittain is important because some wrestlers followed him on Twitter. That's in there too. It goes on like this for a while. There's also an impressively long section in which Brittain namechecks a bunch of other accounts that Twitter suspended for no clear reason, followed by even more examples where a bunch of people freak out and claim that they've been shadowbanned (even though it's unclear if they actually were). Incredibly, there are then 17 pages (which Brittain lists as a single paragraph in his filing) that repost an EFF brief in the Knight Center case that doesn't actually say what Brittain then pretends it says. This is not an appendix or an exhibit. It's just stuck there right in the middle of Brittain's complaint. This is followed by a lengthy treatise on the fact that President Obama used Twitter, which has no bearing on... well... anything.
At this point, you're on page 60 of the filing and you finally (finally!) get to the first actual cause of action which is, incredibly, "Violation of the First Amendment of the US Constitution." Which, as we've already discussed (and other courts have already found) is nonsense. Twitter is not bound by the First Amendment. That only restricts government entities. There are a bunch of other claims as well, some more nutty than others -- but all of them pretty nutty. The antitrust claim is a personal favorite. The "proof" of monopoly power in that one? The claim that Twitter controls 25% of the US social networking market. Which, uh, is not the definition of a monopoly, but Brittain's suit claims: "Therefore, it can logically be concluded that Defendant is in possession of monopoly power." This statement is not explained any further.
Also: Brittain claims that Twitter is violating CDA 230. CDA 230, of course, being the intermediary liability protection statute that literally explains why this case is nonsense and will get tossed out. It's the part of the law that says Twitter can moderate its platform however it likes. But Brittain tries to twist that... by claiming that because Twitter itself uses Twitter, it is now an information content provider, rather than a service provider, and therefore liable for third party content:
Defendant's protections under 47 U.S. Code §230 stem from its classification as an interactive computer service. However, the presence of @Policy and unequal treatment for its users, as well as the promotion of content it agrees with ("Moments" "Front Page") and the "downranking" of content it disagrees with (to include suspensions and shadowbanning) indicate(s) that Twitter is actually an information content provider. Thereby, Twitter should be declared liable for content which appears on its platform, until at which point it ceases to act as an information content provider, and acts solely as an interactive computer service.
Nice theory. Too bad it's been rejected by basically every court since 230 became law. Courts have (rightly) found that internet services can be both an interactive computer service and an information content provider -- such that they are liable solely for the content they produce, but not for the content third parties produce on their platform. But, Brittain apparently is unaware of the reams of caselaw on this... which I guess is not that surprising.
We'd be remiss if we didn't also mention Brittain's proposed remedies. It starts off asking for a whole long list of nonsensical injunctions and declarations, then lawsuit costs and attorney's fees (he's filed this without attorneys, of course) and then "such other and further relief as this Court deems just and proper," which is normally where these kinds of things would end. But then he seems to remember that he wants money, so after all that he adds in a demand for $1 billion. Well, at least I think that's what he's demanding. He calls it an injunction, which is not what you call a monetary award, and then has some sort of weird formula in which an injunction is summary judgment and it has to do with Twitter's valuation, because [reasons].
For an injunction in the form of an additional summary judgment for the Plaintiff, against the Defendant, in accordance with Defendant's valuation of over $25 Billion US Dollars, of no less than $1,000,000,000.00 US Dollars.
An injunction in the form of summary judgment in accordance with a valuation for a billion dollars? This is a word salad of legal nonsense.
Anyway, if the past is any indication, we eagerly await this "lifelong champion of free and even dangerous speech as a natural right" to now seek to have this article deleted from Google. But, we also eagerly await the "LOLwut?" response from the poor judge assigned this case.
Back in January we wrote about infamous internet troll Chuck Johnson's absolutely ridiculous lawsuit against Twitter for kicking him off the service. As we noted at the time, the lawsuit appeared to be nearly a carbon copy of Dennis Prager's silly lawsuit against YouTube. And, if you recall, a court tossed that lawsuit earlier this year. And now it's clear that a court is about to toss Johnson's lawsuit as well on anti-SLAPP grounds.
On Tuesday, the court released a tentative ruling laying out the many, many reasons why Johnson has no case at all, both under CDA 230 and the First Amendment.
Plaintiff further argues that Defendant is not entitled to the protection of the CDA because Defendant seeks to be treated both as a neutral content provider pursuant to the CDA, but at the same time asks for First Amendment protection for its editorial decision to terminate Plaintiff’s accounts. But this is not the standard for immunity under the CDA. (See 47 U.S.C. §230.) Plaintiff cites to 47 U.S.C. §230(c)(2), which requires a showing of good faith in order to be protected from civil liability by the CDA. Defendant, however, relies on subdivision (c)(1), which provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The heading of subdivision (c) is “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” (Italics added.) Plaintiff fails to establish that Defendant is not entitled to protection under the CDA, i.e., Plaintiff fails to show that his claims are not barred by the CDA.
Plaintiff also fails to show that his claims can survive Defendant’s challenge based on Defendant’s First Amendment right. Defendant is a private sector company. Although it does invite the public to use its service, Defendant also limits this invitation by requiring users to agree to and abide by its User Rules, in an exercise of Defendant’s First Amendment right. The rules clearly state that users may not post threatening tweets, and also that Defendant may unilaterally, for any reason, terminate a user’s account. The rules reflect Defendant’s exercise of free speech. (See Hurley, supra, 515 U.S. at p. 574.) Plaintiff fails to show that his claims are not barred by Defendant’s First Amendment right to exercise independent editorial control over the content of its platform. Defendant’s choice to close Plaintiff’s account on the ground that Plaintiff’s tweet was threatening and harassing is an editorial decision regarding how to present content, i.e., an act in furtherance of Defendant’s free speech right. Defendant’s choice not to allow certain speech is a right protected by the First Amendment.
The court also laughs off the attempt by Johnson and his lawyers to get around all this by arguing that a well known Supreme Court case concerning shopping malls (Robins v. Pruneyard Shopping Center) somehow means that social media sites can't remove users. We've seen lots of people make this argument for why websites must post the speech of anyone who wants to use those websites, but no court in the land has ever agreed, and this California court certainly wasn't going to be the first.
Plaintiff’s reliance on Robins v. Pruneyard Shopping Center (1979) 23 Cal.3d 899 is misplaced and fails to defeat Defendant’s CDA and First Amendment protections. In Robins, the California Supreme Court held that the soliciting at a shopping center of signatures for a petition to the government is an activity protected by the California Constitution. The Court specifically noted that “[b]y no means do we imply that those who wish to disseminate ideas have free rein.” The Court reasoned: “A handful of additional orderly persons soliciting signatures and distributing handbills in connection therewith, under reasonable regulations adopted by defendant to assure that these activities do not interfere with normal business operations . . . would not markedly dilute defendant's property rights.” (Id. at pp. 910-911.) The case is distinguishable from the instant action, where Plaintiff’s tweet could reasonably be, and in fact was, interpreted as threatening and harassing, unlike activity that “would not markedly dilute defendant’s property rights.” (See Sprankling Decl. at Ex. D.) Moreover, Defendant’s rules were adopted to ensure that Defendant is able to maintain control over its site and to protect the experience and safety of its users.
Somewhat hilariously, Johnson's lawyer in the case, Robert Barnes, took to Twitter after the tentative ruling to not just announce a plan to appeal, but... incredibly... to claim victory.
Apparently, Chuck Johnson hired Baghdad Bob as his lawyer.
Both of the key points over which Barnes declares "victory" appear to involve a somewhat twisted interpretation of what the court is saying. On the first point, of the court declaring Twitter to be a "public forum," that is true, but specifically in the context of California's anti-SLAPP law. I mean, the ruling says that explicitly:
In the instant case, the parties appear to agree that (1) Twitter is a public forum for purposes of the anti-SLAPP statute...
The fact that it is a public forum for the purposes of California's anti-SLAPP statute has no bearing at all on whether Twitter is a "public forum" in the First Amendment sense of a government-created space in which restrictions on speech are limited. They both use the words "public forum," but they mean totally different things.
The second point, about Twitter's control over its platform being a "matter of public interest," is also specific to California's anti-SLAPP law, which requires the speech in question to be about a matter of public interest. That doesn't help Johnson's case at all, unless you're twisting this specific point concerning anti-SLAPP laws into believing it refers to the government having an interest in regulating how Twitter runs its website. But that would be a totally nonsensical interpretation. Though it appears to be the one that Johnson's lawyer wants to go with. The fact that Twitter agreed to both of the points that Barnes is now celebrating (as is necessary under California's anti-SLAPP law) should show you why neither of these points is even remotely damaging to Twitter. And, no, this is not Barnes playing 9th-dimensional chess to get Twitter to admit to something that harms it elsewhere. This is just nonsense.
Either way, assuming Twitter holds on and wins the anti-SLAPP motion, it will mean that Johnson will be on the hook for Twitter's legal fees. One hopes that his lawyer informed him not only of this, but also of the fact that those fees would include the additional costs of an ongoing appeal that he seems unlikely to win.
The Copia Institute was not the only party to file an amicus brief in support of Airbnb and Homeaway's Ninth Circuit appeal of a district court decision denying them Section 230 protection. For instance, a number of Internet platforms, including those like Glassdoor, which hosts specialized user expression, and those like eBay, which hosts transactional user expression, filed one pointing out how a ruling denying Section 230 protection to Airbnb and Homeaway would effectively deny it to far more platforms hosting far more kinds of user speech than just those behind the instant appeal.
And then there was this brief, submitted on behalf of former Congressman Chris Cox, who, with then-Representative Ron Wyden, had been instrumental in getting Section 230 on the books in the first place. With this brief the Court does not need to guess whether Congress intended for Section 230 to apply to platforms like Airbnb and Homeaway; the statute's author confirms that it did, and why.
In giving insight into the statutory history of Section 230, the brief addresses the two main issues raised by the Airbnb appeal – issues that continue to come up over and over again in Section 230-related litigation in state and federal courts all over the country: does Section 230 apply to platforms intermediating transactional user expression, and does Section 230's pre-emption language preclude efforts by state and local authorities to hold these platforms liable for intermediating the consummation of that transactional speech? Cox's brief describes how Congress intended both of these questions to be answered in the affirmative, and thus may be relevant to those other cases as well. With that in mind, we are archiving – and summarizing – the brief here.
To illustrate why Section 230 should apply in these situations, first the brief explains the historical context that prompted the statute in the first place:
In 1995, on a flight from California to Washington, DC during a regular session of Congress, Representative Cox read a Wall Street Journal article about a New York Superior Court case that troubled him deeply. The case involved a bulletin board post on the Prodigy web service by an unknown user. The post said disparaging things about an investment bank. The bank filed suit for libel but couldn’t locate the individual who wrote the post. So instead, the bank sought damages from Prodigy, the site that hosted the bulletin board.
[page 3]
The Stratton Oakmont v. Prodigy decision alarmed Cox for several reasons. One, it represented a worrying change in judicial attitudes towards third party liability:
Up until then, the courts had not permitted such claims for third party liability. In 1991, a federal district court in New York held that CompuServe was not liable in circumstances like the Prodigy case. The court reasoned that CompuServe “ha[d] no opportunity to review [the] contents” of the publication at issue before it was uploaded “into CompuServe’s computer banks,” and therefore was not subject to publisher liability for the third party content.
[page 3-4]
It had also resulted in a damage award of $200 million against Prodigy. [page 4]. Damage awards like these can wipe technologies off the map. If platforms had to fear the crippling effect that even one such award, arising from just one user, could have on their developing online services, it would dissuade them from being platforms at all. As the brief observes:
The accretion of burdens would be especially harmful to smaller websites. Future startups, facing massive exposure to potential liability if they do not monitor user content and take responsibility for third parties’ legal compliance, would encounter significant obstacles to capital formation. Not unreasonably, some might abjure any business model reliant on third-party content. [page 26]
Then there was a third, related concern: according to the logic of Stratton Oakmont, which had distinguished itself from the earlier Cubby v. CompuServe case, unlike CompuServe, Prodigy had "sought to impose general rules of civility on its message boards and in its forums." [page 4].
The perverse incentive this case established was clear: Internet platforms should avoid even modest efforts to police their sites. [page 4]
The essential math was stark: Congress was worried about what was happening on the Internet. It wanted platforms to be an ally in policing it. But without protection for platforms, they wouldn't be. They couldn't be. So Cox joined with then-Representative Wyden to craft a bill that would trump the Stratton Oakmont holding. The result was the Internet Freedom and Family Empowerment Act, H.R. 1978, 104 Cong. (1995), which, by a 420-4 vote reflecting significant bipartisan support, became an amendment to the Communications Decency Act – Congress's attempt to address the less desirable material on the Internet – which then came into force as part of the Telecommunications Act of 1996. [page 5-6]. The Supreme Court later gutted the indecency provisions of the CDA in Reno v. ACLU, but the parts of the CDA at Section 230 have stood the test of time. [page 6 note 2].
The statutory language provided necessary relief to platforms in two important ways. First, it included a "Good Samaritan" provision, meaning that "[i]f an Internet platform does review some of the content and restricts it because it is obscene or otherwise objectionable, then the platform does not thereby assume a duty to monitor all content." [page 6]. Because keeping platforms from having to monitor was the critical purpose of the statute:
All of the unique benefits the Internet provides are dependent upon platforms being able to facilitate communication among vast numbers of people without being required to review those communications individually. [page 12]
The concerns were practical. As other members of Congress noted at the time, "There is no way that any of those entities, like Prodigy, can take the responsibility [for all of the] information that is going to be coming in to them from all manner of sources.” [page 14]
While the volume of users [back when Section 230 was passed] was only in the millions, not the billions as today, it was evident to almost every user of the Web even then that no group of human beings would ever be able to keep pace with the growth of user-generated content on the Web. For the Internet to function to its potential, Internet platforms could not be expected to monitor content created by website users. [page 2]
Thus Section 230 established a new rule expressly designed to spare platforms from having to attempt this impossible task in order to survive:
The rule established in the bill [...] was crystal clear: the law will recognize that it would be unreasonable to require Internet platforms to monitor content created by website users. Correlatively, the law will impose full responsibility on the website users to comply with all laws, both civil and criminal, in connection with their user-generated content. [But i]t will not shift that responsibility to Internet platforms, because doing so would directly interfere with the essential functioning of the Internet. [page 5]
That concern for the essential functioning of the Internet also explains why Section 230 was not drawn narrowly. If Congress had only been interested in protecting platforms from liability for potentially defamatory speech (as was at issue in the Stratton Oakmont case) it could have written a law that only accomplished that end. But Section 230's language was purposefully more expansive. If it were not more expansive, while platforms would not have to monitor all the content they intermediated for defamation, they would still have to monitor it for everything else, and thus nothing would have been accomplished with this law:
The inevitable consequence of attaching platform liability to user-generated content is to force intermediaries to monitor everything posted on their sites. Congress understood that liability-driven monitoring would slow traffic on the Internet, discourage the development of Internet platforms based on third party content, and chill third-party speech as intermediaries attempt to avoid liability. Congress enacted Section 230 because the requirement to monitor and review user-generated content would degrade the vibrant online forum for speech and for e-commerce that Congress wished to embrace. [page 15]
Which returns to why Section 230 was intended to apply to transactional platforms. Congress didn't want to be selective about which types of platforms could benefit from liability protection. It wanted them all to:
[T]he very purpose of Section 230 was to obliterate any legal distinction between the CompuServe model (which lacked the e-commerce features of Prodigy and the then-emergent AOL) and more dynamically interactive platforms. … Congress intended to “promote the continued development of the Internet and other interactive computer services” and “preserve the vibrant and competitive free market” that the Internet had unleashed. Forcing web sites to a Compuserve or Craigslist model would be the antithesis of the congressional purpose to “encourage open, robust, and creative use of the internet” and the continued “development of e-commerce.” Instead, it will slow commerce on the Internet, increase costs for websites and consumers, and restrict the development of platform marketplaces. This is just what Congress hoped to avoid through Section 230. [page 23-24]
And it wanted them all to be protected everywhere because Congress also recognized that they needed to be protected everywhere in order to be protected at all:
A website […] is immediately and uninterruptedly exposed to billions of Internet users in every U.S. jurisdiction and around the planet. This makes Internet commerce uniquely vulnerable to regulatory burdens in thousands of jurisdictions. So too does the fact that the Internet is utterly indifferent to state borders. These characteristics of the Internet, Congress recognized, would subject this quintessentially interstate commerce to a confusing and burdensome patchwork of regulations by thousands of state, county, and municipal jurisdictions, unless federal policy remedied the situation. [page 27]
Congress anticipated that states and local authorities would be tempted to impose liability on platforms, and in doing so interfere with the operation of the Internet by forcing platforms to monitor after all and thus cripple their operation:
Other state, county, and local governments would no doubt find that fining websites for their users’ infractions is more convenient than fining each individual who violates local laws. Given the unlimited geographic range of the Internet, unbounded by state or local jurisdiction, the aggregate burden on an individual web platform would be multiplied exponentially. While one monitoring requirement in one city may seem a tractable compliance burden, myriad similar-but-not-identical regulations could easily damage or shut down Internet platforms. [page 25]
So, "[t]o ensure the quintessentially interstate commerce of the Internet would be governed by a uniform national policy" of sparing platforms the need to monitor, Congress deliberately foreclosed the ability of state and local authorities to interfere with that policy with Section 230's pre-emption provision. [page 10]. Without this provision, the statute would be useless:
Were every state and municipality free to adopt its own policy concerning when an Internet platform must assume duties in connection with content created by third party users, not only would compliance become oppressive, but the federal policy itself could quickly be undone. [page 13]
This pre-emption did not make the Internet a lawless place, however. Laws governing offline analogs to the services starting to flourish on the web would continue to apply; Section 230 simply prevented platforms from being held derivatively liable for user generated content that violated them. [page 9-10].
Notably, none of what Section 230 proposed was a controversial proposition:
When the bill was debated, no member from either the Republican or Democratic side could be found to speak against it. The debate time was therefore shared between Democratic and Republican supporters of the bill, a highly unusual procedure for significant legislation. [page 11]
It was popular because it advanced Congress's overall policy to foster the most beneficial content online, and the least detrimental.
Section 230 by its terms applies to legal responsibility of any type, whether under civil or criminal state statutes and municipal ordinances. But the fact that the legislation was included in the CDA, concerned with offenses including criminal pornography, is a measure of how serious Congress was about immunizing Internet platforms from state and local laws. Internet platforms were to be spared responsibility for monitoring third-party content even in these egregious cases.
A bipartisan supermajority of Congress did not support this policy because they wished to give online commerce an advantage over offline businesses. Rather, it is the inherent nature of Internet commerce that caused Congress to choose purposefully to make third parties and not Internet platforms responsible for compliance with laws generally applicable to those third parties. Platform liability for user-generated content would rob the technology of its vast interstate and indeed global capability, which Congress decided to “embrace” and “welcome” not only because of its commercial potential but also “the opportunity for education and political discourse that it offers for all of us.” [page 11-12]
As the brief explains elsewhere, Congress's legislative instincts appear to have been borne out, and the Internet today is replete with valuable services and expression. [page 7-8]. Obviously not everything the Internet offers is necessarily beneficial, but the challenges the Internet's success poses don't negate the policy balance Congress struck. Section 230 has enabled those successes, and if we want its commercial and educational benefits to continue to accrue, we need to make sure that the statute's critical protection remains available to all who depend on it to realize that potential.
SESTA has done enormous damage to the critical protection Section 230 affords platforms – and by extension all the Internet speech and online services they facilitate. But it's not the only threat: courts can also often mess things up for platforms by failing to recognize situations where Section 230 should apply and instead allowing platforms to be held liable for how their users have used their services.
Which leads to the situation Airbnb, Homeaway, and other such platforms find themselves in. Jurisdictions unhappy with some of the effects short-term rentals have had on their communities have taken to passing regulations designed to curb the practice. Whether or not it is good policy to do so is beyond the scope of this post. If some local jurisdictions want to impose liability on their residents for renting out their homes – and not all of them do – it's between them and their voters.
The problem arises when the regulations they come up with don't just target people renting their homes, but also target the online platforms that facilitate these transactions. These ordinances effectively create liability for platforms arising from content generated by others, which is a regulatory practice that Section 230 prohibits.
So Airbnb and Homeaway have started pushing back on these ordinances, first in San Francisco and now in Santa Monica. Unfortunately both efforts to enjoin them have resulted in federal district court decisions saying that Section 230 does not shield them from their reach, meaning that these local jurisdictions are fully able to hold these platforms liable if people use them to rent homes they aren't supposed to. The decision about the Santa Monica ordinance is now before the Ninth Circuit, and last week I wrote a brief for the Copia Institute explaining why it should find that Section 230 indeed prevents these ordinances from imposing liability on these platforms. It was important to say so, not just to support Airbnb and Homeaway, but because if Section 230 can't apply to them, then it won't be able to apply to a lot of other platforms that depend on it.
The crux of the problem appears to stem from courts not seeing how what is at stake in these cases is actually speech, perhaps because the kind of speech sites like Airbnb and Homeaway intermediate is so specific. But even if the only expression these platforms intermediate is, "I have a home to rent," it's still speech, speech created by someone other than the platform, and Section 230 therefore still applies. There is no language in Section 230 that would require a platform to intermediate lots of different kinds of expression in order to be entitled to the statute's protection. Many platforms are extremely specialized in the type of expression they intermediate, often because that's what makes them useful and effective as services, and all are equally entitled to the statute's protection.
The fact that the specific speech being intermediated is transactional in nature seems to be what's confusing the courts, especially given that these sites often make money by taking a cut of the transactions that are successful. The court addressing the Santa Monica ordinance recognized that a site like Craigslist, which also hosts "I have a home to rent" speech (among other types of speech), would not be affected by the ordinance because it doesn't make money when "I have a home to rent" speech results in a rental. But there is no reason that these platforms should be treated any differently. Section 230 applies regardless of how a platform makes its money. There's no requirement in the statutory language that a platform profit only in certain ways – in fact, if anything the statute encourages platforms to be innovative so that the public can continue to benefit from their services. And for good reason: think about platforms like eBay, which also profit when "I have a thing to sell" speech finds an audience who wants to buy it. If Section 230 protection could be withheld from all platforms that make money from consummated transactions, it would be more than just Airbnb and Homeaway who would be in trouble.
The only relevant question to ask in considering whether Section 230 applies is who created the content that is potentially wrongful. In the case of Airbnb and Homeaway it is their users. After all, there's nothing inherently wrongful about saying, "I have a home to rent." Whether it is wrongful depends entirely on whether the user is allowed to rent it per local law. Liability should therefore remain entirely with the user, who is the one who imbued the speech with its wrongfulness. Particularly because it is often not practical, or even possible, for platforms to police all the content passing through them. Even if they had the resources to examine the volume of user-generated content that passes through their systems, they may not have the ability to know which of it, if any, was wrongful. Thus if platforms could be forced by any particular jurisdiction to try to police it anyway, in order to stave off potentially expensive liability, it would invariably chill their ability to provide their services – including in other jurisdictions.
Which is also why Section 230 includes a pre-emption provision, so that no particular jurisdiction can decide for any other what Internet speech and services people elsewhere can benefit from. Without that provision, the jurisdiction with the most restrictive laws would get to impose its policy choices on every other jurisdiction the service, now shaped by those policies, could reach – which, in the case of an Internet service, is every single one of the thousands and thousands of state and local jurisdictions nationwide.
It's no secret at all (though they tried to hide it) that Hollywood and various MPAA front-groups were heavily involved behind the scenes in getting FOSTA/SESTA passed and signed into law. It all goes back to Project Goliath, the plan put together by the MPAA a few years back to use any means necessary to try to attack the fundamental principles of an open internet. While there have been all sorts of attempts, SESTA (i.e., misrepresenting the problem of sex trafficking as being an internet problem, and then pushing legislation that won't stop sex trafficking, but will harm internet companies) was the first to make it through.
But it's unlikely to be the last. Immediately on the heels of everyone now hating on Facebook, various MPAA front groups led by CreativeFuture and the Content Creators Coalition -- both of whom will consistently parrot complete nonsense about how the internet is evil (amusingly, sometimes using the very platforms they seek to destroy) -- have now sent a letter to lawmakers demanding more regulation of the internet and, in particular, more chipping away at intermediary liability protections that enable the free and open internet (the letter was first reported by TorrentFreak).
Most of the letter continues to play up the exaggerated moral panic around Facebook's actions. As we've noted many times, there are reasons to complain about Facebook, but so many of the complaints lead to bad solutions, and that's absolutely true of this particular letter. Specifically, this letter presents three demands:
Last week’s hearing was an important first step in ensuring that Facebook, Google, Twitter, and other internet platforms must (1) take meaningful action to protect their users’ data, (2) take appropriate responsibility for the integrity of the news and information on their platforms, and (3) prevent the distribution of unlawful and harmful content through their channels.
On number one: yes, companies should do a better job protecting data, but the real issue is that companies shouldn't be holding onto so much data in the first place. Rather, individual internet users should have a lot more control and power over how their own data is used, which is very different from what these Hollywood groups are demanding. Besides, given Hollywood's history of being hacked and leaking all sorts of data, it certainly seems like a glass houses sort of situation, doesn't it?
As for number two: "take appropriate responsibility for the integrity of the news and information on their platforms." Really? This is Hollywood and content creators directly calling for censorship, which is truly fucked up if you think about it. After all, for much of Hollywood's history, politicians have complained about the kind of content that it puts out, and demanded censorship in response. Is Hollywood now really calling for other industries to go through the same sort of nonsense? Should we apply the same rules to the MPAA studios? When they put out movies that are a historical farce, such as the very, very wrong propaganda flick Zero Dark Thirty, should Hollywood be required to "take appropriate responsibility" for spewing pro-torture propaganda? Because if they're insisting that internet platforms have to take responsibility for what users post, it's only reasonable to say that Hollywood studios should take responsibility when they release movies that are similar nonsense.
And, finally, number three: preventing the distribution of unlawful and "harmful" content. Again, one has to wonder what the fuck happened to the legacy entertainment industry that it would now be advocating for some sort of legal ban on "harmful content." Remember, this is the same industry that has regularly been accused of producing "harmful" TV shows, movies and music. And now they're on record speaking out against harmful content? How quickly do you think that's going to boomerang back on Hollywood concerning its own content?
It's almost as if Hollywood is so focused on its hatred of the internet that the geniuses they brought in to run these front groups have no clue how their own arguments will end up shooting content creators right in the foot. I mean, if we're going to stop "harmful" content, doesn't that just give more fodder to religious groups attacking the legacy entertainment industry over "blasphemy," sex and drugs? Won't groups advocating against loosening morals use that to demand that Hollywood stop producing films that support these kinds of activities? Or what about violence, which Hollywood has glorified for decades?
Now, some of us who actually support free speech recognize that Hollywood should be able to produce those kinds of movies and TV shows, and musicians should be able to record whatever music they want. But we also think that internet platforms should be free to decide what content they allow on their platforms as well. It's a shame that Hollywood seems to think free speech is only important in special circumstances when it applies to professionally produced content. Because that's exactly what this letter is suggesting.
The letter also includes this nonsense:
The real problem is not Facebook, or Mark Zuckerberg, regardless of how sincerely he seeks to own the “mistakes” that led to the hearing last week. The problem is endemic in a system that applies a different set of rules to the internet and fails to impose ordinary norms of accountability on businesses that are built around monetizing other people’s personal information and content.
This is... wrong. There isn't a "different set of rules." CDA 230 and DMCA 512 are both rules designed to properly apply liability to the party who actually breaks the law. Both of them say that just because someone uses a platform for illegal behavior it doesn't make the platform liable (the individual is still very much liable). That's not a different set of rules. And to argue that internet companies are not "accountable" is similarly ridiculous. We have a decently long history of the internet at this point, and we see, over and over again, that when companies get too powerful, they become complacent. And when they do dumb things, competitors spring up, and the older companies fade away.
Hollywood, of course, isn't quite used to that kind of creative destruction. The major studios of the MPAA are 20th Century Fox (founded: 1935), Paramount (founded: 1912), Universal Studios (founded: 1912), Warner Bros. (founded: 1923), Disney Studios (founded: 1923) and Sony Pictures (which traces its lineage back to Columbia Pictures in 1924 or CBC Film Sales Corp in 1918). In other words, these are not entities used to creative upstarts taking their place. They work on the belief that the big guys are always the big guys.
And, really, at a time when many of Hollywood's biggest names are being brought down in "me too" moments, when it's clear that they had institutional support for their abuse going back decades, is it really appropriate for Hollywood, of all places, to be arguing that the tech industry needs to take more responsibility? This seems like little more than a hypocritical attempt by the usual MPAA front groups to kick Facebook while it's down and use the anger over Facebook's mistakes to try to chip away at the internet they've always disliked.