Famous Racist Sues Twitter Claiming It Violates His Civil Rights As A Racist To Be Kicked Off The Platform
from the that's-not-how-any-of-this-works dept
We've seen a bunch of lawsuits of late filed by very angry people who have been kicked off of, or somehow limited by, various social media platforms. There's Dennis Prager's lawsuit against YouTube, as well as Chuck Johnson's lawsuit against Twitter. Neither of these has any likelihood of success. These platforms have every right to kick off whomever they want, and Section 230 of the CDA pretty much guarantees an easy win.
Now we have yet another one: Jared Taylor, a self-described "race realist" and "white advocate" (what most of us would call an out-and-out racist), has sued Twitter for kicking him and his organization off its platform. Taylor is represented by a few lawyers, including Marc Randazza, whom I know and respect, but with whom I don't always agree -- and this is one of those cases. I think Randazza took a bad case and is making some fairly ridiculous arguments that will fail badly. Randazza declined to comment on my questions about this case, but his co-counsel -- law professor Adam Candeub and Noah Peters -- were both kind enough to discuss their theory of the case at some length, and to debate my concerns about why the lawsuit will so obviously fail. We'll get to their responses soon, but first let's look at the lawsuit itself.
To the credit of these lawyers, they make a valiant effort to distinguish this case from the Prager and Johnson cases, which appear to be just completely ridiculous. The Taylor case makes the most thorough argument I've seen for why Twitter supposedly can't kick someone off its platform. It's still so blatantly wrong that it will almost certainly get laughed out of court, but the legal arguments are marginally better than those found in the other similar cases we've seen.
Like the other two cases we've mentioned, this one tries to make the Supreme Court's Packingham ruling say more than it really does. If you don't recall, that's the ruling from last summer holding that laws banning people from the internet entirely violate their rights. All of these cases try to twist the Supreme Court's holding that the government can't ban someone from the internet into a claim that a private platform can't kick you off its service. Here's Taylor's version, which is used to set up the two key arguments in the case (which we'll get to shortly):
Twitter is the platform in which important political debates take place in the modern world. The U.S. Supreme Court has described social media sites such as Twitter as the “modern public square.” Packingham v. North Carolina (2017) 582 U.S. [137 S. Ct. 1730, 1737]. It is used by politicians, public intellectuals, and ordinary citizens the world over, expressing every conceivable viewpoint known to man. Unique among social media sites, Twitter allows ordinary citizens to interact directly with famous and prominent individuals in a wide variety of different fields. It has become an important communications channel for governments and heads of state. As the U.S. Supreme Court noted in Packingham, “[O]n Twitter, users can petition their elected representatives and otherwise engage with them in a direct manner. Indeed, Governors in all 50 States and almost every Member of Congress have set up accounts for this purpose. In short, social media users employ these websites to engage in a wide array of protected First Amendment activity on topics as diverse as human thought.” 137 S. Ct. at pp. 1735-36 (internal citations and quotations omitted). The Court in Packingham went on to state, in regard to social media sites like Twitter: “These websites can provide perhaps the most powerful mechanisms available to a private citizen to make his or her voice heard. They allow a person with an Internet connection to ‘become a town crier with a voice that resonates farther than it could from any soapbox.”’ Id. at p. 1737 (citation omitted) (quoting Reno v. American Civil Liberties Union (1997) 521 U. S. 844, 870 [117 S.Ct. 2329]).
The key claim here is that Twitter's actions violate California law -- specifically both the California Constitution and the Unruh Civil Rights Act, which has become the latest go-to for aggrieved people whining about being kicked off various internet platforms. The lawsuit argues that Taylor didn't violate Twitter's terms of service, and even though it flat out admits that those terms allow the company to remove users for any reason at all, it says that doing so in a discriminatory manner violates Taylor's civil rights under the Unruh Act, a law that protects against discrimination on the basis of "sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, or sexual orientation."
So how does kicking Taylor off Twitter run afoul of that?
Twitter has enforced its policy on “Violent Extremist Groups” in a way that discriminates against Plaintiffs on the basis of their viewpoint. It has not applied its policies fairly or consistently, targeting Mr. Taylor and American Renaissance, who do not promote violence, while allowing accounts affiliated with left-wing groups that promote violence to remain on Twitter.
Read that again. The argument is, in effect, that because Twitter has failed to ban similar "left-wing groups," this is discrimination. But that runs directly afoul of CDA 230, which is explicit that the decision to moderate (or not!) some content does not create liability for how a platform handles other content. The statute says that no provider may be held liable for "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected." In other words, what Twitter decides to remove is its decision alone.
Randazza is well aware of CDA 230 (though I'm less sure about the two other lawyers), but the complaint doesn't bother to address why CDA 230 will almost certainly get this case dumped. In response to my question, the lawyers said they saw no reason to address CDA 230 since (1) they don't believe it applies to this situation and (2) they'll respond to those arguments if (when!) Twitter raises them in response to the complaint.
The other key argument in the case is that Twitter violates the California Constitution by denying Taylor his right to "freely speak, write and publish." But nothing in the California Constitution says that any private platform has to host that speech. The filing bends over backwards to get Twitter declared a digital public square / public utility, but that seems unlikely to fly in court.
Twitter is a public forum that exists to “[g]ive everyone the power to create and share ideas instantly, without barriers.” (Exh. B). The U.S. Supreme Court has described social media sites such as Twitter as the “modern public square.” Packingham, supra, 137 S. Ct. at p. 1737. Twitter is the paradigmatic example of a privately-owned space that meets all of the requirements for a Pruneyard claim under the California Constitution: It serves as a place for large groups of citizens to congregate; it seeks to induce as many people as possible to actively use its platform to post their views and discuss issues, as it “believe[s] in free expression and believe[s] every voice has the power to impact the world"... Twitter's entire business purpose is to allow the public to freely share and disseminate their views without any sort of viewpoint censorship; and no reasonable person would think Twitter was promoting or endorsing Plaintiff's speech by not censoring it--no more than a reasonable person would think Twitter was promoting or endorsing President Trump's speech or Kim Jong Un's speech by allowing it to exist on their platform. Thus, Plaintiff's speech imposes no cost on Twitter's business and no burdens on its property rights. Serving as a place where "everyone [has] the power to create and share ideas instantly, without barriers" and "every voice has the power to impact the world" is Twitter's very reason for existence. By adding to the variety of views available to the public, Plaintiffs are acting on Twitter's "belief in free speech" and fulfilling Twitter's stated mission of "sharing ideas instantly."
That's all well and good... but completely meaningless with regard to whether or not Twitter can kick someone off its platform. The complaint goes on at great length trying to turn Twitter into something it is not:
Twitter is given over to public discussion and debate to a far greater extent than the shopping center in Pruneyard or the "streets, sidewalks and parks" that "[f]rom time immemorial... have been held in trust for the use of the public and have been used for purposes of assembly, communicating thoughts and discussing public questions." ... Unlike shopping centers, streets, sidewalks and parks, which are mostly used for functional, non-expressive purposes such as purchasing consumer goods, transportation, and private recreation, Twitter's primary purpose is to enable members of the public to engage in speech, self-expression and the communication of ideas.... In analysis that cuts to the heart of the Pruneyard public forum inquiry, the Packingham Court stated: "While in the past there may have been difficulty in identifying the most important places (in a spatial sense) for the exchange of views, today the answer is clear. It is cyberspace--the 'vast democratic forums of the Internet' in general, and social media in particular." ....
Because Twitter is a protected public forum under California law, Twitter may not selectively ban speakers from participating in its public forum based on disagreement with the speaker's viewpoint, just as the government may not selectively ban speech that expresses a viewpoint it disagrees with.
This all sounds good, but it's basically wrong. Twitter, as a private platform, has repeatedly been found to have its own First Amendment right to control what is displayed on its platform. And, again, for all the high-minded language, nothing in the complaint explains how a private platform deciding it doesn't want to be associated with an individual user over his odious opinions is even in the same ballpark as blocking someone from the entire internet. The complaint skims over all of this, but I imagine that Twitter's response briefs will hammer the point home repeatedly.
There are a few other claims in the lawsuit that we won't bother digging into at this point, since there's a very high likelihood of them all being tossed out under CDA 230. It would be nice if that happens relatively quickly, before lots of other similar lawsuits are filed and lots of time and money are wasted on this nonsense. In the meantime, Taylor and anyone else kicked off of these platforms are free to go to other platforms that would be happy to host this sort of nonsense (and there are plenty of them). But there's nothing in the law that says Twitter must keep hosting him. And while I have no idea if Taylor knows this, Randazza almost certainly does.
As for Randazza's co-counsel, they were kind enough to engage in a fairly lengthy discussion of their theories of CDA 230, which I would charitably describe as "naive." They offer a few interpretations of CDA 230 that might be kind of plausible if you ignore the hundreds and hundreds of cases interpreting the law, starting with Zeran, which quite clearly established that internet platforms get broad immunity under CDA 230. Taylor's lawyers claim that immunity only applies when content moderation efforts "are connected to protecting children from essentially sexual or violent content." There are literally no cases that agree with that assessment. Candeub, in fact, argued that CDA 230 is a very narrow statute, under which any effort to curate creates liability, and immunity applies only in that narrow case of protecting children. But that's not how courts have interpreted it at all. Start with Zeran, which made clear that CDA 230 gives platforms broad immunity, especially when they moderate or curate content:
The scant legislative history reflects that the "disincentive" Congress specifically had in mind was liability of the sort described in Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (Sup.Ct.N.Y. May 24, 1995). There, Prodigy, an interactive computer service provider, was held to have published the defamatory statements of a third party in part because Prodigy had voluntarily engaged in some content screening and editing and therefore knew or should have known of the statements. Congress, concerned that such rulings would induce interactive computer services to refrain from editing or blocking content, chose to grant immunity to interactive computer service providers from suits arising from efforts by those providers to screen or block content. Thus, Congress' clear objective in passing § 230 of the CDA was to encourage the development of technologies, procedures and techniques by which objectionable material could be blocked or deleted either by the interactive computer service provider itself or by the families and schools receiving information via the Internet. If this objective is frustrated by the imposition of distributor liability on Internet providers, then preemption is warranted. Closely examined, distributor liability has just this effect.
Internet providers subjected to distributor liability are less likely to undertake any editing or blocking efforts because such efforts can provide the basis for liability. For example, distributors of information may be held to have "reason to know" of the defamatory nature of statements made by a third party where that party "notoriously persists" in posting scandalous items.... An Internet provider's content editing policy might well generate a record of subscribers who "notoriously persist" in posting objectionable material. Such a record might well provide the basis for liability if objectionable content from a subscriber known to have posted such content in the past should slip through the editing process. Similarly, an Internet provider maintaining a hot-line or other procedure by which subscribers might report objectionable content in the provider's interactive computer system would expose itself to actual knowledge of the defamatory nature of certain postings and, thereby, expose itself to liability should the posting remain or reappear. Of course, in either example, an Internet provider can easily escape liability on this basis by refraining from blocking or reviewing any online content. This would eliminate any basis for inferring the provider's "reason to know" that a particular subscriber frequently publishes objectionable material. Similarly, by eliminating the hot-line or indeed any means for subscribers to report objectionable material, an Internet provider effectively eliminates any actual knowledge of the defamatory nature of information provided by third parties. Clearly, then, distributor liability discourages Internet providers from engaging in efforts to review online content and delete objectionable material, precisely the effort Congress sought to promote in enacting the CDA. Indeed, the most effective means by which an Internet provider could avoid the inference of a "reason to know" of objectionable material on its service would be to distance itself from any control over or knowledge of online content provided by third parties. This effect frustrates the purpose of the CDA and, thus, compels preemption of state law claims for distributor liability against interactive computer service providers.
Taylor's lawyers have a... very different interpretation of all of this. First, they argued that the mere act of curating content on a website is an act of content creation and thus not covered by CDA 230. When I pointed out that basically every CDA 230 case says exactly the opposite, Candeub pointed me to three specific cases that he claims support his position. All three are lower-level rulings with no precedential power, as compared to the litany of appeals court rulings going the other way -- and all three are fairly questionable. But I'll focus on the first one Candeub pointed to, Song Fi v. Google, which is one of the rare cases where a court has, in fact, ruled that CDA 230 didn't apply to YouTube's decision to take down a video. YouTube believed the video was getting faked views and pulled it, citing a terms of service violation. The court -- very surprisingly -- found that CDA 230 didn't apply because the video did not fit under the category of "otherwise objectionable" material under CDA 230. As Professor Eric Goldman pointed out at the time, if the case were appealed, it would almost certainly go the other way.
But, more importantly, the case was still a loser for the plaintiffs, because the court found that since YouTube's terms of service reserved the right to remove content for any reason, there was no breach of contract. It's odd that Candeub points us to the Song Fi ruling, since the Taylor complaint includes a breach of contract claim while repeatedly acknowledging that Twitter's terms of service likewise say it can remove anyone for any reason. So, while this is one (lower court, non-precedential) ruling that kinda (if you squint) says what Candeub wants it to say on 230, it would still be fatal to his larger case were it applied (and, again, basically every other ruling has gone the other way, including many in the 9th Circuit that are binding on this court).
For example, in Zango v. Kaspersky, the 9th Circuit ruled that CDA 230(c)(2) applies to companies filtering content, and further notes that if people don't like the filtering choices, they're free to go elsewhere:
Zango also suggests that § 230 was not meant to immunize business torts of the sort it presses. However, we have interpreted § 230 immunity to cover business torts. See Perfect 10, Inc. v. CCBill, LLC, 488 F.3d 1102, 1108, 1118-19 (9th Cir.2007) (holding that CDA § 230 provided immunity from state unfair competition and false advertising actions). In any event, what § 230(c)(2)(B) does mean to do is to immunize any action taken to enable or make available to others the technical means to restrict access to objectionable material. If a Kaspersky user (who has bought and installed Kaspersky's software to block malware) is unhappy with the Kaspersky software's performance, he can uninstall Kaspersky and buy blocking software from another company that is less restrictive or more compatible with the user's needs. Recourse to competition is consistent with the statute's express policy of relying on the market for the development of interactive computer services
Candeub's co-counsel, Peters, offered a different analysis of the 230 question, claiming that since they're not looking to hold Twitter liable as a publisher, CDA 230 doesn't apply. But that's responding to the wrong part of CDA 230. That's the issue under CDA 230(c)(1). The problem for this lawsuit is CDA 230(c)(2), which the Zeran court (and many, many, many other courts) established gives websites full immunity for the choices they make in moderating content.
Either way, I ran Candeub and Peters' reasoning by Professor Goldman, who is considered one of the top experts on CDA 230, and he responded that it "gets the analysis precisely backwards." Given just how much caselaw is already on the books about this, it would be quite a surprise if Candeub, Peters and Randazza managed to magically change what many consider to be settled law. Just note how a recent ruling in the 1st Circuit described CDA 230's settled law:
There has been near-universal agreement that section 230 should not be construed grudgingly. See, e.g., Doe v. MySpace, Inc., 528 F.3d 413, 418 (5th Cir. 2008); Universal Commc'n Sys., Inc. v. Lycos, Inc., 478 F.3d 413, 419 (1st Cir. 2007); Almeida v. Amazon.com, Inc., 456 F.3d 1316, 1321-22 (11th Cir. 2006); Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1123 (9th Cir. 2003). This preference for broad construction recognizes that websites that display third-party content may have an infinite number of users generating an enormous amount of potentially harmful content, and holding website operators liable for that content "would have an obvious chilling effect" in light of the difficulty of screening posts for potential issues. Zeran, 129 F.3d at 331. The obverse of this proposition is equally salient: Congress sought to encourage websites to make efforts to screen content without fear of liability. See 47 U.S.C. § 230(b)(3)-(4); Zeran, 129 F.3d at 331; see also Lycos, 478 F.3d at 418-19. Such a hands-off approach is fully consistent with Congress's avowed desire to permit the continued development of the internet with minimal regulatory interference.
I don't envy anyone trying to convince this court that all those other courts are wrong -- especially when their client is an avowed "race realist" (read: racist) whom Twitter had every reason to want off its platform.
Filed Under: cda 230, civil rights, content moderation, discrimination, filters, intermediary liability, jared taylor, moderation, section 230, unruh act
Companies: twitter
Reader Comments
9th Circuit notoriously "liberal"; don't rely on it. -- Then all hinges on "good faith" and "objectionable". But in NO event did Congress authorize corporations to become Censors and determine what ideas are acceptable.
But my fallback position is that Supreme Court isn't reliable, either, once upheld sending actual persons back as escaped "property", and certainly in current fascist milieu, corporations have great influence.
But tain't right, and all your Corporatism don't make it right. It's holding that mere statute is the over-riding law, where we all know that common law and common sense and humanism MUST be considered, TOO -- ELSE, all that the Nazis did was okay because "legalized" by statute.
You really can't make a case for your notions, Masnick, except that people with whom you disagree are having access to major sites removed, and you, as Corporatist-Uber-Alles, think that's good. This time.
Re: Re: Re: 9th Circuit notoriously "liberal"; don't rely on it. -- Then all hinges on "good faith" and "objectionable". But in NO event did Congress authorize corporations to become Censors and determine what ideas are acceptable.
Nothing we wrote here is inconsistent with our devotion to free speech.
Re: 9th Circuit notoriously
Care to explain your position, but using English, and in a way that is relevant to the article?
Re: 9th Circuit notoriously "liberal"; don't rely on it. -- Then all hinges on "good faith" and "objectionable". But in NO event did Congress authorize corporations to become Censors and determine what ideas are acceptable.
You managed to hit Godwin in the first comment. Color me amazed.
Unicorn network is calling, go meet your imaginary friends to fight your imaginary enemies!
Re:
For all your insulting bluster and SovCit lingo, you have still failed to prove how a privately-owned platform can be forced to host third-party speech that goes against the rules of said platform. What law, common or otherwise, can legally force Twitter to host the speech of a White supremacist?
Remember when you said you’d stop saying common law?
The Origins of Man
Please, nobody mention Lucy in front of Jared Taylor; it would make him sad to find out we are all from Africa, way, way back (Lucy is dated to about 3.5 million years ago). It may be that he believes the Bible and that the world is only 6,000 or so years old, but the Bible has made some other mistakes, and this might be yet another. He may believe what he likes, but foisting his beliefs upon the rest of us... well, it's our choice whether to listen or not. And it is Twitter's choice as to whether they want to host his soapbox; they are a private company and not subject to the same speech rules that governments are.
Hmm, who else has this problem of 'requiring' everyone else to listen to their beliefs, or believe them no less?
Re: The Origins of Man
Because I was not a Christian.
Then they came for the Conspiracy theorists, and I did not speak out—
Because I was not a Conspiracy theorist.
Then they came for the Race Realists, and I did not speak out—
Because I was not a Race Realists.
Then they came for me—and there was no one left to speak for me.
#StopFeminazizm
Re: Re: The Origins of Man
Fun fact: The "they" in that poem are the "race realists".
Re: Re: Re: The Origins of Man
I'm no man-hater but usually, when the #Feminazism card is played, it's about us wanting to be treated fairly, as equals.
Re: Re: Re: Re: The Origins of Man
Yeah, it takes some real mental gymnastics to rail against "Feminazis" while simultaneously defending actual nazis.
I'm not a lawyer, but...
So if at least 1 person doesn't need Twitter to be politically connected, then Twitter isn't a "requirement" for a political discussion.
2) If we assume the guy has a novel approach to his case, let's apply his argument to the real world.
If I went into a NAACP meeting and started ranting about "white power", would they violate my civil rights for kicking me out? If not, then how is that any different than Twitter kicking people off its service?
So what about the civil rights of the people being offended and who don't want the person ranting?
Re: I'm not a lawyer, but...
Because I was not a Christian.
Then they came for the Conspiracy theorists, and I did not speak out—
Because I was not a Conspiracy theorist.
Then they came for the Race Realists, and I did not speak out—
Because I was not a Race Realist.
Then they came for me—and there was no one left to speak for me.
#StopFeminazizm
Re:
Wake me up when you've decided what "Middle America" means. You may well find (to your surprise) that it doesn't involve marching about with tiki torches chanting (not) Nazi (we pinkie promise!) slogans.
While San Francisco has historically been on the left/liberal side of the political aisle it doesn't mean that the companies based there all have such views. Were that true there wouldn't have been such kerfuffles over diversity, etc.
The problem is that conservatism has been hijacked by the far right, who consider anyone who disagrees with them to be left of Lenin. I miss actual conservatism, which was more about common sense and good governance than soundbite-based policy-making.
Is he claiming it is a mental disability or a religion?
Re:
Probably sexual orientation, since he only wants to have sex with those of his own race.
I read it again...
Sorry, where does it say that a person's viewpoint is protected?
Why the heck do people have so much difficulty understanding that the 1st applies to the goddamn government, not private entities?
Re:
When certain "classes" and "races" get protections that other "classes" and "races" do not, it is only natural for people to confuse things like this.
There is no greater minority than the individual. Democrats seek to create a Caste System by another name in a faux pursuit of equality.
It's the same lie that says feminists are for equality between the sexes, when "egalitarian" is the word for equality of the sexes. Feminism is simply female supremacy, just like the KKK is for white supremacy, just like the Black Panthers are for black supremacy, and just like La Raza is for brown supremacy.
They all seek it, but lie about their motivations in a way that reveals it's not really that big of a secret.
Re: Re:
Gee, imagine that, a country where specific people who fit into a specific mold have more rights than other people. I imagine such a country would have laws that allow for the enslavement of people based on skin color, laws that say an enslaved person only counts as three-fifths of a person for a variety of purposes, and laws that segregate non-enslaved people based on skin color such that the favored race receive more social benefits and legal rights than the unfavored race.
But that sort of thing could never happen here, right?
Re: Re:
That’s a fine piece of right wing nut job garbage worthy of Wing Nut Daily right there.
Illustrative rather than restrictive
The complaint, in ¶ 67 on p.20, cites the California Supreme Court decision in Marina Point v. Wolfson. That case, interpreting California's Unruh Civil Rights Act, holds that the list quoted above is “illustrative rather than restrictive.”
(Alternate emphasis in source.)
I'm merely noting this narrow point, without making any broader statement, or reaching any larger conclusion.
Re: Illustrative rather than restrictive
With the sole exception of “marital status”, the bases of discrimination listed in the law all have to do with inherent traits—i.e., things out of the control of a given person. (You do not choose your skin color, your biological sex, your ethnic background, and so on.) Religious beliefs/membership in a given religious sect, despite also being a choice, would likely also be recognized as a basis of discrimination under this law. But I fail to see how a political belief would fit under this law such that the law would protect someone from discrimination for, say, expressing the belief that White people are inherently superior to all other peoples. I also fail to see how this law could—or should—be used to force Twitter into hosting speech that expresses such a political belief, given that whole “First Amendment” thing we have.
Re: Re: Illustrative rather than restrictive
The excerpt from Marina Point cites as authority In Re Cox (Cal.1970). From Cox—
(Footnotes omitted.)
The characteristics of membership in the John Birch Society, or the ACLU, are characteristics of political belief.
Almost fifty-odd years ago… long hair was too.
Re: Re: Re: Illustrative rather than restrictive
Association with a given group is still a choice, and a political belief in and of itself does not denote an association with any given group. I can both share an anti-capitalist viewpoint with the Democratic Socialists of America and not be a member of the DSA, after all. That said, I see your point there.
Still doesn’t actually answer the more pertinent question, though: Should this law be used to force Twitter into hosting speech which goes against the site’s TOS/its admins do not want to host?
Bollocks! (and I'm not even British)
Twitter's primary purpose is to make money. Same goes for all businesses (that succeed). Of course, a business may do noble work to achieve their goal of making money (think hospitals), but that's a decision to be made by the company in question, not the government. There are plenty of companies that are very successful for doing very ignoble work (think Blackwater).
The First Amendment of the good ol' 'tution says, "Congress shall make no law ... abridging the freedom of speech". Twitter ain't congress, so they can do whatever they want to abridge any Twitter user's speech.
Re: Re: Re:
Lawyers are hired hands - they are hired to make the best possible argument for their client.
If a "bad assertion of law" is nonetheless the best available argument then, professionally, they are expected to make it.
Even bad people with bad cases are supposed to be able to have competent advocacy. That is how the law works.
California Constitution
Is that really so different from the Pruneyard center deciding it doesn't want to be associated with a bunch of solicitors? And can the federal government, via the CDA, override the California constitution (if courts feel someone's rights are violated here)?
Re: California Constitution
Article VI
(Emphasis added.)
Is Twitter allowed to kick someone off by popular community vote?
Is Twitter allowed to kick someone off because of who they associate with?
Is Twitter allowed to kick someone off at the request of advertisers?
Is Twitter allowed to kick someone off due to language used (a single instance or multiple)?
Is Twitter allowed to kick someone off as part of a personal vendetta?
Is Twitter allowed to kick someone off because of how they smell?
Is Twitter allowed to kick someone off because of how they look?
Each of those questions is a little more separated from reality than the last, but they all come down to one main question: is Twitter allowed to kick someone off its platform for any reason it sees fit and be in the legal right? If yes, this is an open-and-shut suit. If no, well, then the law does dictate who has protected Twitter access and this lawsuit has merit.
Re:
At some point, they need to keep some users or they wouldn't have a business. If some user is causing other users to leave, don't they have the right to keep those other users?
Re: provocative questions...
At what point in the process of becoming a monopoly does or should a platform like twitter (or facebook, or google, or amazon) really need to start treating everyone "equally", for whatever values of rules you like?
That is, you can't toss someone off "the internet" arbitrarily without a very strong legal reason because the internet itself has a near monopoly on a necessity -- information and communication. Surprised this hasn't come up in RIAA versus ISP Grande communications.
Twitter, Google, Facebook, and Amazon have all become huge, very public platforms... and therefore should take on some of the responsibilities of non-discriminatory public accommodations. Which responsibilities, at what threshold, and why?
The law seems to have come down in other areas as deciding which types of discrimination are unreasonable (skin color, etc) and which are reasonable (had to be escorted off the property last time). Where *should* it come down here?
Re: Re: provocative questions...
The issue there lies in that whole “public platform” designation. Say you somehow convince lawmakers and the Supreme Court that Twitter, Facebook, etc. have a legal obligation to host any and all speech, even speech the platforms do not want to host, because they are “public platforms”. What, then, do you do with every open-to-public-registration Mastodon instance running right now? Such Masto instances are technically “public platforms”, even if the entire “fediverse” does not come anywhere close to matching the userbase size of Twitter, Facebook, etc. Plenty of Masto instances have Codes of Conduct that forbid the use of bigoted language (e.g., racial slurs, anti-queer slurs). If a “public platform” must take on the obligations of acting like a non-discriminatory public accommodation, what must those Mastodon instances do about their Codes of Conduct and the resulting cultural norms arising from those rules? And if you say “well this only applies to large social media networks”, at what arbitrary point does a network become large enough to qualify for those obligations?
Re: Re: Re: provocative questions...
The Government, maybe at all levels, could open a Town Square talking-point forum on the Internet. No... two Town Square talking-point forums. One would be fully anonymous (insert your choice of prerogative here), and in the second you would absolutely give over who you are and where you're coming from (location information) and subject yourself to the rules of the forum. Those rules would require some level of decorum, civility, and staying on topic. The former would be a wide-open, let-it-all-hang-out venue, and the latter would be for serious discussion.
The result would be that the former, open forum would be full of disparagement of anyone not agreeing with one's viewpoint. The latter might or might not actually hold to those levels of decorum, civility, and on-topic discussion, as the Government would not be able, Constitutionally, to diminish in any way the comments that were not decorous, civil, or on topic, since how "decorous" or "civil" or "on topic" gets interpreted could itself be challenged.
Which leaves us with privately maintained forums that might or might not agree with your standing. Thereby giving them the right to censor you.
Which in turn leaves us with the old standard. Go to some public square and set up your soapbox. Whether anyone wants to listen is up to them, not you.
Re: Re: provocative questions...
Here's one specific, identifiable point in the historical process…
Act of July 2, 1864, being entitled in full, “AN ACT to amend an act entitled ‘An act to aid in the construction of a railroad and telegraph line from the Missouri river to the Pacific ocean, and to secure to the government the use of the same for postal, military, and other purposes,” approved July first, eighteen hundred and sixty-two.”
Perhaps not the exact answer you were looking for. But nevertheless, a point in time: July 2, 1864.
Re: Re: Re: provocative questions...
This was in reference to specific telegraph lines built with a specific federal grant. I don't know if Twitter has received much federal funding or not, but I feel pretty sure Congress didn't pass a law to act as a kickstarter for it.
It's also for "news and messages of like character" -- not all messages. I'm not certain what sort of information this would leave unqualified as "news", but presumably something that is mere conversation, or based on opinion rather than reporting a fact, would not be protected.
This is much more closely related to oversight of utilities (thus the net neutrality debate) than oversight of the internet (thus the CDA 230 debate), in my opinion.
Re: Re: Re: Re: provocative questions...
I would hazard a guess that the section of the Act was mostly directed to limit monopoly powers from hamstringing competition in markets.
Re: Re: Re: Re: provocative questions...
I'm not a mad socialist myself but I hate to see terms misused. How can you debate the merits or demerits of any political position when you can't be honest about what it really is?
You appear to be conflating "content" with "person who produces content."
What, not who. There's a difference between moderation and prior restraint, and ISTM that Twitter is on the wrong side of that difference. Just as a matter of principle, if prior restraint is something that we don't even trust our democratically-elected government, which is accountable to the people, to do except under the most exigent circumstances, why in the world should we entrust an unaccountable private entity with that power?
Re:
“Any action” is an extremely broad category.
Suppose that Twitter were to decide (in good faith!) that shooting this plaintiff dead would “restrict access to or availability of material that the provider … considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable”.
Would shooting this plaintiff dead comfortably fit under “any action”? It'd certainly be effective, but any action at all? That can't be right.
Re: Re:
Reductio ad absurdum—Twitter’s ability to moderate the platform does not give it the right to commit murder, and you damn well know it. Pull a better argument out of your ass or put your head back up it.
Re: Re:
Would shooting this plaintiff dead comfortably fit under “any action”? It'd certainly be effective, but any action at all? That can't be right.
Caselaw on 230 is pretty clear that if the action itself is illegal (see: Roommates.com ruling) then you are not immune from liability for those actions. Shooting someone is illegal. So Twitter would be responsible for the shooting. But it would still not be responsible for content moderation.
Re: Re: Re:
That's interesting. Do you mean "illegal under federal law"? Otherwise, what stops a court from claiming it's illegal to kick people off, under California precedent (Pruneyard)?
Re: Re: Re:
The text of 47 USC §230(e)(3), of course, provides:
And I notice that you (perhaps wisely) didn't respond to the other poster's query—
I think it's pretty clear from the hypothetical that no reasonable judge (not even a textualist like Gorsuch!) is going to fully ascribe a bare, literal meaning to the words “any action” in 47 USC § 230(c)(2)(A). “Any” is just unthinkably broad.
That phrase “any action” must be qualified and restricted.
When Riggs v MySpace (9th Cir.2011) cited Roommates for the proposition—
—well, the court just hadn't really thought through all the activities that can be boiled down to excluding material online. The court was speaking carelessly.
Tentatively, after thinking about it a few days, to resolve the textual problem with the least violence to the statute, my legal preference would be to say that the “action” must be directly —not indirectly— tied to restricting access to the material. That is, “action” immediately directed against the person of the poster would fall outside the statutory immunity.
https://xkcd.com/1357/
Re: Re:
Just as a matter of principle, if prior restraint is something that we don't even trust our democratically-elected government, which is accountable to the people, to do except under the most exigent circumstances, why in the world should we entrust an unaccountable private entity with that power?
I disagree. Why should Twitter be forced to host content it doesn't want up there? That is the argument.
Prior restraint refers to the government, not to private entities, however popular they are. https://en.wikipedia.org/wiki/Prior_restraint
Notice that there's nothing in that article about social media. Please can you provide a citation of some kind if you still disagree?
Re: Re: Re: Re:
Twitter would be saying “we don’t do that here”, which means they don’t care at all whether you do it on Tumblr, Facebook, or anywhere else you can post your stuff. The government would be saying “you can’t do that anywhere”, which means they care like goddamn whether you do it on Twitter, Tumblr, Facebook, and anywhere else. If you fail to see the difference there, that is your issue.
Re: Re: Re: Re: Re:
Technically I think they're both saying "you can't do that in a place which we control"; it's just that Twitter only controls its own platform, whereas the scope of what the government controls is much broader.
Does the difference you see still hold up when the issue is viewed through that lens?
Re: Re: Re: Re: Re: Re: Re:
If Twitter can't control its platform it can't moderate it. If it can't moderate its platform it can't remove illegal content without checking with the government first.
Re: Re: Re: Re:
Because it's an important distinction if it's a government doing it. For example, we call it arrest if a police officer does it and kidnapping if a private citizen does it.
If I own a bar...
Re: If I own a bar...
Maybe. I doubt your bar would stay open long, though.
Re: Re: If I own a bar...
/s
Are there similar left-wing groups? (There are some, to be sure, but again they are more of a minority than the white racist groups, who are also apparently right-wing by their own claims here.) Pretty sure the actual counterpart in a sensible argument is: do they kick off other hate extremists, terrorists, and their sympathizers? They sure as hell do if they are Muslim or associated with some other "unamerican" polity.
Re:
Right-wing groups, however, have been closely associated with violence, though I cannot back that up with any factual assertion.
The problem is: left wing... probably non-violent... right wing, maybe violent. What is a droid walking down the street to believe? Whoever is loudest?
No. They should believe that whatever action is proposed, by whomever is speaking, if applied to 'their own faction' is also perceived as correct. If both consequences are the same, then they need to resolve the quandary. Observers should be wary of brain farts during this process.
This hardly seems the case...most times.
Re: Re:
Well, yes. There are some quite prominent ones. Antifa is the most notable. While not all Black Lives Matter groups are violent, a loud minority certainly are (like the Ferguson riots, and the four teens that beat up the autistic man). There's also the recent college riots against Milo Yiannopolos (I'm pretty sure I failed to spell that correctly) which were allegedly funded and supported by George Soros.
Now, the threat from right-wing violence does seem a lot more prevalent to me, but that may just be because I live in the part of the country where Roy Moore can almost win a third election after being removed from office twice, and where the battle flag of the Confederacy is more often seen than the golden arches.
Re: Re: Re:
Actually, labeling is in itself a bad thing. Who really knows what lurks in the hearts of people? If some behavior is bad, then the law has cures for that (though the law and their application needs some serious contemplation), but if some thought is bad, then there needs to be some serious discussion. Discuss until illegal actions are precluded. The thought is not necessarily wrong (though at some point might be indicative of underlying psychotic issues), action taken thereof is.
Because some who claim to be members of a 'left wing' faction do some bad things does not make the 'left wing' bad, or violent. When the 'left wing' becomes violent, its whole raison d'être goes nil.
Re: Re: Re: Re:
It's only accurate to say that the acts of violence and discrimination need to be addressed. And only if a group explicitly condones such violence and discrimination (such as the KKK, ISIS, or Antifa) should any actions be taken against the group as a whole.
Re: Re: Re: Re: Re:
I'm no keener on t'other side and as you rightly point out, it's the desire to be violent that's the problem, not the politics per se.
Re: Re: Re: Re: Re: Re:
Just as a nit, "[desperate for] an opportunity to bash a Nazi" does not imply "only in it for the violence"; someone who's only in it for the violence would be equally glad of an opportunity to bash anyone, Nazi or otherwise.
Agreed otherwise, though.
Re: Re: Re: Re: Re: Re: Re:
The trouble is, any bad actor can use the exact same arguments in the exact same way, which is why I'm opposed to violence towards anyone even if they are Nazis. Mock 'em, it's more fun.
Re: Re: Re: Re:
...huh?
Do you know where the basic concept of a "left wing" originated? It came from the events surrounding the French Revolution and its aftermath--one of the bloodiest, ugliest, most violent periods of barbarity, inhumanity, and senseless waste in human history.
(Preemptive note: please note that I said exactly what I said here, and did not say anything that I didn't actually say. For example, I did not say anything implying that the history of the right is praiseworthy simply because the history of the left is ugly.)
Re: Re: Re: Re: Re:
I'll never understand that, which is why I often mock the left for bashing its class boogeyman. Class is not the problem, privilege is. They're different things because class is fluid. Try telling a leftie that. He'll agree for five minutes then snap back to default like an elastic band. It's creepy when they do that. It's creepy when anybody does that — I see that on the right where healthcare provision is concerned, too.
Wedding Cakes
Re: Wedding Cakes
It's not. It's very much the same.
However, the only time that bakery could possibly get in trouble is when they do it for a protected reason -- i.e., in the act claimed by the lawsuit, "sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, or sexual orientation." That does not include political views, choice of vocabulary, or behavior towards other customers.
So you would have a case if they refuse to make you a Star of David cake but made a cross cake for someone else. You would have a case if they refuse to make you a cake because you wore a Star of David but made a cake for someone wearing a cross.
But if they refuse to make you a swastika cake or a Hillary cake, or they refuse to make you any cake because you made racist remarks towards another customer, tough luck, bud. They can do that.
Re: Re: Wedding Cakes
Umm, you do know that the swastika is a religious symbol, right, bud?
Re: Re: Re: Wedding Cakes
If anyone is successful in getting the Nazi version recognised as a religious symbol we'll end up with racist gits as a protected class. This is why I tend to be wary of the idea of protected classes in the first place; discrimination, however well-intended, has a nasty habit of coming back to bite us on the bum.
All freedom of speech means is that I can't call the cops to stop you from screaming your idiocy in the streets. If you come into my house or my place of business and start spouting that same nonsense, I'm well within my rights to tell you to take a hike. How hard is this to understand?
Re:
But I can disconnect your water, gas, electricity or internet connection. Or refuse to sell you food. Anything that doesn't involve calling the cops, right?
Re:
Sure you can. Screaming is disturbing the peace; standing in the street is blocking traffic.
The cops cannot, however, arrest someone for speaking, at a reasonable volume and at an appropriate time of day, on a street corner.
[ link to this | view in chronology ]
"sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, or sexual orientation."
Complaint:
"discriminates against Plaintiffs on the basis of their viewpoint."
IANAL, but no matter how many times I read the list in the Unruh Act, I can't find any mention of 'viewpoint'.
So unless the complaint is also trying to extend the reach of the Unruh act to include viewpoints, this argument would seem like a complete non-starter.
1. There is a virtually zero chance of success even if I try my best so there will be no actual harm.
2. One more idiot racist is now poorer and I am now richer. I can take on more pro bono cases to crush other racists/bad ideas.
What is a racist?
Re: What is a racist?
You obviously have internet access. If you want to know what a word means, look it up.