from the that's-not-how-any-of-this-works dept
We've seen a bunch of lawsuits of late filed by very angry people who have been kicked off of, or somehow limited by, various social media platforms. There's Dennis Prager's lawsuit against YouTube, as well as Chuck Johnson's lawsuit against Twitter. Neither has any likelihood of success. These platforms have every right to kick off whomever they want, and Section 230 of the CDA pretty much guarantees an easy win.
Now we have yet another one of these: Jared Taylor, a self-described "race realist" and "white advocate" (what most of us would call an out-and-out racist), has sued Twitter for kicking him and his organization off its platform. Taylor is represented by a few lawyers, including Marc Randazza, whom I know and respect but with whom I don't always agree -- and this is one of those cases. I think Randazza took a bad case and is making some fairly ridiculous arguments that will fail badly. Randazza declined to comment on my questions about the case, but his co-counsel -- law professor Adam Candeub and Noah Peters -- were both kind enough to discuss their theory of the case at some length and to debate my concerns about why the lawsuit will so obviously fail. We'll get to their responses soon, but first let's look at the lawsuit itself.
To the credit of these lawyers, they make a valiant effort to distinguish this case from the Prager and Johnson cases, which appear to be just completely ridiculous. The Taylor case makes the most thorough argument I've seen for why Twitter can't kick someone off its platform. It's still so blatantly wrong and will almost certainly get laughed out of court, but the legal arguments are marginally better than those found in the other similar cases we've seen.
Like the other two cases we've mentioned, this one tries to twist the Supreme Court's Packingham ruling to say more than it really says. If you don't recall, that's the ruling from last summer holding that laws banning people from the internet entirely violate their rights. All of these cases try to stretch the Supreme Court's holding that the government can't ban someone from the internet into a claim that a private platform can't kick you off its service. Here's Taylor's version, which is used to set up the two key arguments in the case (which we'll get to shortly):
Twitter is the platform in which important political debates take place in the modern world. The U.S. Supreme Court has described social media sites such as Twitter as the “modern public square.” Packingham v. North Carolina (2017) 582 U.S. [137 S. Ct. 1730, 1737]. It is used by politicians, public intellectuals, and ordinary citizens the world over, expressing every conceivable viewpoint known to man. Unique among social media sites, Twitter allows ordinary citizens to interact directly with famous and prominent individuals in a wide variety of different fields. It has become an important communications channel for governments and heads of state. As the U.S. Supreme Court noted in Packingham, “[O]n Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose. In short, social media users employ these websites to engage in a wide array of protected First Amendment activity on topics as diverse as human thought.” 137 S. Ct. at pp. 1735-36 (internal citations and quotations omitted). The Court in Packingham went on to state, in regard to social media sites like Twitter: “These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to ‘become a town crier with a voice that resonates farther than it could from any soapbox.’” Id. at p. 1737 (citation omitted) (quoting Reno v. American Civil Liberties Union (1997) 521 U. S. 844, 870 [117 S.Ct. 2329]).
The key claims here are that Twitter's actions violate California law -- specifically, both the California Constitution and the Unruh Civil Rights Act, which has become the latest "go to" of aggrieved people whining about being kicked off various internet platforms. The lawsuit argues that Taylor didn't violate Twitter's terms of service, and -- even though it flat out admits that those terms allow the company to remove users for any reason at all -- it says that doing so in a discriminatory manner violates Taylor's civil rights under the Unruh Act, a law that protects against discrimination on the basis of "sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, or sexual orientation."
So how does kicking Taylor off Twitter run afoul of that?
Twitter has enforced its policy on “Violent Extremist Groups” in a way that discriminates against Plaintiffs on the basis of their viewpoint. It has not applied its policies fairly or consistently, targeting Mr. Taylor and American Renaissance, who do not promote violence, while allowing accounts affiliated with left-wing groups that promote violence to remain on Twitter.
Read that again. The argument is, in effect, that because Twitter has failed to ban similar "left-wing groups," this is discrimination. But that runs directly afoul of CDA 230, which is explicit that the decision to moderate (or not moderate) some content does not create liability for how you handle other content. The statute says that no provider may be held liable for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." In other words, what Twitter decides to remove is its decision alone.
Randazza is well aware of CDA 230 (though I'm unclear if the two other lawyers are), but the complaint doesn't bother to address why CDA 230 will almost certainly get this case dumped. In response to my question, the lawyers said they saw no need to address CDA 230 in the complaint because (1) they don't believe it applies to this situation and (2) they'll respond to those arguments if (when!) Twitter raises them in response to the complaint.
The other key argument in the case is that the ban violates the California Constitution by denying Taylor his right to "freely speak, write and publish." But nothing in the California Constitution says that any private platform has to host that speech. The filing bends over backwards to argue that Twitter should be declared a digital public square / public utility, but that seems unlikely to fly in court:
Twitter is a public forum that exists to “[g]ive everyone the power to create and share ideas instantly, without barriers.” (Exh. B). The U.S. Supreme Court has described social media sites such as Twitter as the “modern public square.” Packingham, supra, 137 S. Ct. at p. 1737. Twitter is the paradigmatic example of a privately-owned space that meets all of the requirements for a Pruneyard claim under the California Constitution: It serves as a place for large groups of citizens to congregate; it seeks to induce as many people as possible to actively use its platform to post their views and discuss issues, as it “believe[s] in free expression and believe[s] every voice has the power to impact the world"... Twitter's entire business purpose is to allow the public to freely share and disseminate their views without any sort of viewpoint censorship; and no reasonable person would think Twitter was promoting or endorsing Plaintiff's speech by not censoring it--no more than a reasonable person would think Twitter was promoting or endorsing President Trump's speech or Kim Jong Un's speech by allowing it to exist on their platform. Thus, Plaintiff's speech imposes no cost on Twitter's business and no burdens on its property rights. Serving as a place where "everyone [has] the power to create and share ideas instantly, without barriers" and "every voice has the power to impact the world" is Twitter's very reason for existence. By adding to the variety of views available to the public, Plaintiffs are acting on Twitter's "belief in free speech" and fulfilling Twitter's stated mission of "sharing ideas instantly."
That's all well and good... but completely meaningless with regard to whether or not Twitter can kick someone off its platform. The complaint goes on at great length to try to turn Twitter into something it is not:
Twitter is given over to public discussion and debate to a far greater extent than the shopping center in Pruneyard or the "streets, sidewalks and parks" that "[f]rom time immemorial... have been held in trust for the use of the public and have been used for purposes of assembly, communicating thoughts and discussing public questions." ... Unlike shopping centers, streets, sidewalks and parks, which are mostly used for functional, non-expressive purposes such as purchasing consumer goods, transportation, and private recreation, Twitter's primary purpose is to enable members of the public to engage in speech, self-expression and the communication of ideas.... In analysis that cuts to the heart of the Pruneyard public forum inquiry, the Packingham Court stated: "While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace--the 'vast democratic forums of the Internet' in general, and social media in particular." ....
Because Twitter is a protected public forum under California law, Twitter may not selectively ban speakers from participating in its public forum based on disagreement with the speaker's viewpoint, just as the government may not selectively ban speech that expresses a viewpoint it disagrees with.
This all sounds good, but it is basically wrong. Twitter, as a private platform, has repeatedly been found to have its own First Amendment right to control what is displayed on its own platform. And, again, for all the high-minded language, nothing in the complaint explains how a private platform deciding it doesn't want to be associated with an individual user over his odious opinions is even in the same ballpark as blocking someone from the entire internet. The complaint skims over all of this, but I imagine that Twitter's response briefs will hammer home the point repeatedly.
There are a few other claims in the lawsuit that we won't bother digging into at this point, since there's a very high likelihood of them all being tossed out under CDA 230. It would be nice if that happens relatively quickly, before lots of other similar lawsuits are filed and lots of time and money is wasted on this nonsense. In the meantime, Taylor and anyone else kicked off of these platforms is free to go to other platforms that would be happy to host his sort of nonsense (and there are plenty of them). But there's nothing in the law that says Twitter must host him. And while I have no idea if Taylor knows this, Randazza almost certainly does.
As for Randazza's co-counsel, they were kind enough to engage in a fairly lengthy discussion of their theories of CDA 230, which I would charitably describe as "naive." They offer a few interpretations of CDA 230 that might be somewhat plausible if you ignore hundreds and hundreds of cases on the statute. Taylor's lawyers claim that the law's immunity only applies when content moderation efforts "are connected to protecting children from essentially sexual or violent content." There are literally no cases that agree with that assessment. Candeub, in fact, argued that CDA 230 is a very narrow statute, under which any effort to curate creates liability, and immunity applies only in that narrow case of protecting children. But that's not how courts have interpreted it at all, starting with Zeran, which clearly established that CDA 230 gives platforms broad immunity, especially for moderating or curating content:
The scant legislative history reflects that the "disincentive" Congress specifically had in mind was liability of the sort described in Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (Sup.Ct.N.Y. May 24, 1995). There, Prodigy, an interactive computer service provider, was held to have published the defamatory statements of a third party in part because Prodigy had voluntarily engaged in some content screening and editing and therefore knew or should have known of the statements. Congress, concerned that such rulings would induce interactive computer services to refrain from editing or blocking content, chose to grant immunity to interactive computer service providers from suits arising from efforts by those providers to screen or block content. Thus, Congress' clear objective in passing § 230 of the CDA was to encourage the development of technologies, procedures and techniques by which objectionable material could be blocked or deleted either by the interactive computer service provider itself or by the families and schools receiving information via the Internet. If this objective is frustrated by the imposition of distributor liability on Internet providers, then preemption is warranted. Closely examined, distributor liability has just this effect.
Internet providers subjected to distributor liability are less likely to undertake any editing or blocking efforts because such efforts can provide the basis for liability. For example, distributors of information may be held to have "reason to know" of the defamatory nature of statements made by a third party where that party "notoriously persists" in posting scandalous items.... An Internet provider's content editing policy might well generate a record of subscribers who "notoriously persist" in posting objectionable material. Such a record might well provide the basis for liability if objectionable content from a subscriber known to have posted such content in the past should slip through the editing process. Similarly, an Internet provider maintaining a hot-line or other procedure by which subscribers might report objectionable content in the provider's interactive computer system would expose itself to actual knowledge of the defamatory nature of certain postings and, thereby, expose itself to liability should the posting remain or reappear. Of course, in either example, an Internet provider can easily escape liability on this basis by refraining from blocking or reviewing any online content. This would eliminate any basis for inferring the provider's "reason to know" that a particular subscriber frequently publishes objectionable material. Similarly, by eliminating the hot-line or indeed any means for subscribers to report objectionable material, an Internet provider effectively eliminates any actual knowledge of the defamatory nature of information provided by third parties. Clearly, then, distributor liability discourages Internet providers from engaging in efforts to review online content and delete objectionable material, precisely the effort Congress sought to promote in enacting the CDA. Indeed, the most effective means by which an Internet provider could avoid the inference of a "reason to know" of objectionable material on its service would be to distance itself from any control over or knowledge of online content provided by third parties. This effect frustrates the purpose of the CDA and, thus, compels preemption of state law claims for distributor liability against interactive computer service providers.
Taylor's lawyers have a... very different interpretation of all of this. First, they argued that the mere act of curating content on a website is an act of content creation and thus not covered by CDA 230. When I pointed out that the text of basically every CDA 230 case says exactly the opposite, Candeub pointed me to three specific cases that he claims support his position. All three are lower-level rulings with no precedential power, as compared to the litany of appeals court rulings going the other way -- and all three of them are fairly questionable. But I'll focus on the first one Candeub pointed to, Song Fi v. Google, one of the rare cases where a court has, in fact, ruled that CDA 230 didn't apply to YouTube's decision to take down a video. YouTube believed the video was getting faked views and pulled it, citing a terms of service violation. The court -- very surprisingly -- found that CDA 230 didn't apply because the video did not fit the category of "otherwise objectionable" material under CDA 230. As Professor Eric Goldman pointed out at the time, if the case were appealed, it would almost certainly go the other way.
But, more importantly, the case was still a loser for the plaintiffs, because the court found that since YouTube's terms of service gave it the right to remove content for any reason, there was no breach of contract. It's odd that Candeub points us to the Song Fi ruling, since the Taylor complaint includes a breach of contract claim while repeatedly acknowledging that Twitter's terms of service say it can remove anyone for any reason. So, while this is one (lower court, non-precedential) ruling that kinda (if you squint) says what Candeub wants it to say on 230, it would still be fatal to his larger case were it applied (and, again, basically every other ruling has gone the other way, including many in the 9th Circuit that are binding on this court).
For example, in Zango v. Kaspersky, the 9th Circuit ruled that CDA 230(c)(2) applies to companies filtering content, and further noted that if people don't like the filtering choices, they're free to go elsewhere:
Zango also suggests that § 230 was not meant to immunize business torts of the sort it presses. However, we have interpreted § 230 immunity to cover business torts. See Perfect 10, Inc. v. CCBill, LLC, 488 F.3d 1102, 1108, 1118-19 (9th Cir.2007) (holding that CDA § 230 provided immunity from state unfair competition and false advertising actions). In any event, what § 230(c)(2)(B) does mean to do is to immunize any action taken to enable or make available to others the technical means to restrict access to objectionable material. If a Kaspersky user (who has bought and installed Kaspersky's software to block malware) is unhappy with the Kaspersky software's performance, he can uninstall Kaspersky and buy blocking software from another company that is less restrictive or more compatible with the user's needs. Recourse to competition is consistent with the statute's express policy of relying on the market for the development of interactive computer services.
Candeub's co-counsel, Peters, offered a different analysis of the 230 question, claiming that since they're not looking to hold Twitter liable as a publisher, CDA 230 doesn't apply. But that's responding to the wrong part of CDA 230. Publisher liability is the issue under CDA 230(c)(1). The problem for this lawsuit is CDA 230(c)(2), which the Zeran court (and many, many other courts) established gives websites full immunity for the choices they make in moderating content.
Either way, I ran Candeub and Peters' reasoning by Professor Goldman, who is considered one of the top experts on CDA 230, and he responded that their argument "gets the analysis precisely backwards." Given just how much caselaw is already on the books about this, it would be quite a surprise if Candeub, Peters and Randazza managed to magically change what many consider to be settled law. Just note the description of CDA 230's settled status in a recent ruling from the 1st Circuit:
There has been near-universal agreement that section 230 should not be construed grudgingly. See, e.g., Doe v. MySpace, Inc., 528 F.3d 413, 418 (5th Cir. 2008); Universal Commc'n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 419 (1st Cir. 2007); Almeida v. Amazon.com, Inc., 456 F.3d 1316, 1321-22 (11th Cir. 2006); Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1123 (9th Cir. 2003). This preference for broad construction recognizes that websites that display third-party content may have an infinite number of users generating an enormous amount of potentially harmful content, and holding website operators liable for that content "would have an obvious chilling effect" in light of the difficulty of screening posts for potential issues. Zeran, 129 F.3d at 331. The obverse of this proposition is equally salient: Congress sought to encourage websites to make efforts to screen content without fear of liability. See 47 U.S.C. § 230(b)(3)-(4); Zeran, 129 F.3d at 331; see also Lycos, 478 F.3d at 418-19. Such a hands-off approach is fully consistent with Congress's avowed desire to permit the continued development of the internet with minimal regulatory interference.
I don't envy anyone trying to convince this court that all those other courts are wrong -- especially when their client is an avowed racist "race realist" whom Twitter had every reason to want off its platform.
Filed Under: cda 230, civil rights, content moderation, discrimination, filters, intermediary liability, jared taylor, moderation, section 230, unruh act
Companies: twitter