Clarence Thomas Doesn't Like Section 230, Adding To His Anti-Free Speech Legacy
from the really-now? dept
I'm not quite sure what has gotten into Supreme Court Justice Clarence Thomas lately, but he's been on quite a roll in terms of deciding he wants to toss out all sorts of well-settled precedents (including, in at least one case, his very own precedent). What's alarming, though, is that he seems particularly focused on hacking away at free speech and the 1st Amendment. Back in 2016, when people were discussing whether or not Donald Trump could "open up" libel laws, lawyer Ken White noted that there was no real appetite among judges to attack free speech.
However, it certainly looks like Thomas has that appetite, and is trying to inspire others.
It started a year and a half ago, when he (basically out of nowhere) suggested that NYT v. Sullivan was no longer good law. That's the case that set up the well-established and well-recognized standards for defamation of a public figure. It's a key 1st Amendment case, because it sets the bar quite high in an effort to protect free speech about public figures -- saying that a statement about a public figure can only be defamation if the speaker knows it is false, or makes it with "reckless disregard" for whether or not it is false. While this makes it difficult for a public figure to win a defamation lawsuit, that's the point. If you believe in the 1st Amendment, then that standard needs to be quite high.
Today, Thomas decided to also suggest that he believes Section 230's 1st Amendment-protecting elements have been interpreted too broadly, and that he'd like to overturn nearly 25 years of "settled" law about how broadly 230 should be applied. He did this as part of the Court's rejection of the petition in the Malwarebytes case. We'll have more on this case later, but as we've written in the past, it involves a troubling interpretation that says if moderation is used in a way deemed anti-competitive, 230 does not protect that moderation.
Thomas agrees with the decision to reject hearing that case, but then uses the occasion to signal his desire to basically undermine the original Section 230 ruling, Zeran v. AOL, which set the bar by holding that Section 230 provides a very broad immunity. That ruling came out of the 4th Circuit, but basically every other appeals court that has ruled on 230 has adopted the Zeran standard. There is no circuit split, and the Supreme Court has never directly examined the issue. Thomas suggests it should.
To be clear, while there are dozens (or perhaps more than that) of kooky and crazy interpretations out there of Section 230, Thomas's critique of the interpretation is much more measured. That doesn't mean that it's correct. Indeed, I think it's wrong on multiple counts. But it's not wrong in the completely nonsensical ways that so much 230 analysis is these days. First, he discusses what 230 is and how it came about, including a discussion about historical distributor liability (much of which we discussed in our recent Greenhouse post about online liability before 230).
In short, pre-230, there was publisher liability and distributor liability -- which were two separate concepts. Under distributor liability, you could be held liable if you had knowledge of illegal products that you were distributing. The Zeran ruling more or less said that the concept of distributor liability is gone on the internet. It ruled that Section 230 created a broad immunity for internet distributors. For what it's worth, the authors of Section 230, Chris Cox and Ron Wyden, have long said that this was the correct interpretation of the law they wrote.
The key argument that Thomas makes is that Section 230 was not designed to completely eliminate the concept of "distributor liability." He argues that a strict reading of 230 would retain a separate form of distributor liability, and that Zeran went too far:
Courts have discarded the longstanding distinction between “publisher” liability and “distributor” liability. Although the text of §230(c)(1) grants immunity only from “publisher” or “speaker” liability, the first appellate court to consider the statute held that it eliminates distributor liability too—that is, §230 confers immunity even when a company distributes content that it knows is illegal. Zeran v. America Online, Inc., 129 F. 3d 327, 331–334 (CA4 1997). In reaching this conclusion, the court stressed that permitting distributor liability “would defeat the two primary purposes of the statute,” namely, “immuniz[ing] service providers” and encouraging “self-regulation.” Id., at 331, 334. And subsequent decisions, citing Zeran, have adopted this holding as a categorical rule across all contexts....
To be sure, recognizing some overlap between publishers and distributors is not unheard of. Sources sometimes use language that arguably blurs the distinction between publishers and distributors. One source respectively refers to them as “primary publishers” and “secondary publishers or disseminators,” explaining that distributors can be “charged with publication.”
But he disagrees with this interpretation and gives three reasons to question the prevailing understanding of 230:
First, Congress expressly imposed distributor liability in the very same Act that included §230. Section 502 of the Communications Decency Act makes it a crime to “knowingly . . . display” obscene material to children, even if a third party created that content. 110 Stat. 133–134 (codified at 47 U. S. C. §223(d)). This section is enforceable by civil remedy. 47 U. S. C. §207. It is odd to hold, as courts have, that Congress implicitly eliminated distributor liability in the very Act in which Congress explicitly imposed it.
This is... an ahistorical reading of the situation. As Chris Cox has explained in great detail, 230 was never meant to be understood in connection with the rest of the Communications Decency Act (all of which the Supreme Court tossed out as unconstitutional in Reno v. ACLU). 230 was meant as an alternative approach to the clearly unconstitutional approach that Senator Exon wanted with the CDA, which was a plan to try to ban all pornographic and offensive material on the internet. Cox and Wyden realized that approach would not work and would be problematic, and presented 230 as an alternative. Through some maneuvering during the conference to align the House and Senate bills, the two approaches got mashed together.
But -- and this is kind of important -- the Exon approach of creating a ridiculous strict form of distributor liability was thrown out as unconstitutional, leaving just the Cox/Wyden approach which says there is no distributor liability. Thomas ignores this aspect of the history, and acts as if the intention all along was that 230 was meant to somehow work together with the rest of the CDA. That was not the intention.
Second, Congress enacted §230 just one year after Stratton Oakmont used the terms “publisher” and “distributor,” instead of “primary publisher” and “secondary publisher.” If, as courts suggest, Stratton Oakmont was the legal backdrop on which Congress legislated, e.g., FTC v. Accusearch Inc., 570 F. 3d 1187, 1195 (CA10 2009), one might expect Congress to use the same terms Stratton Oakmont used.
Again, this is a very weird statement. It suggests that 230 wasn't designed to overturn Stratton Oakmont. But here we don't even need to ask the authors of 230 to explain why that's wrong, because the Congressional Report regarding the law said explicitly that it was written to overturn Stratton Oakmont. For Thomas to suggest this was not the case is just... odd? From the Congressional Report:
This section provides "Good Samaritan" protections from civil liability for providers or users of an interactive computer service for actions to restrict or to enable restriction of access to objectionable online material. One of the specific purposes of this section is to overrule Stratton Oakmont v. Prodigy and any other similar decisions which have treated such providers and users as publishers or speakers of content that is not their own because they have restricted access to objectionable material. The conferees believe that such decisions create serious obstacles to the important federal policy of empowering parents to determine the content of communications their children receive through interactive computer services.
I recognize that Justices like Thomas like to ignore the Congressional record in reading laws, but it does seem weird for him to suggest that it wasn't meant to overturn Stratton Oakmont when the record literally says the exact opposite.
Third, had Congress wanted to eliminate both publisher and distributor liability, it could have simply created a categorical immunity in §230(c)(1): No provider “shall be held liable” for information provided by a third party. After all, it used that exact categorical language in the very next subsection, which governs removal of content. §230(c)(2). Where Congress uses a particular phrase in one subsection and a different phrase in another, we ordinarily presume that the difference is meaningful.
This is the one point that is actually the strongest argument from Thomas, though it's still not good. Most people recognize that the drafting differences between (c)(1) and (c)(2) of Section 230 are awkward in that they are not parallel. But Thomas tries to read way too much into that awkwardness, and ignores (again) what the Congressional record says, what the authors of the statute say, and how the various courts have interpreted the law.
Separately, this seems to ignore the intent of (c)(2), which is more targeted at tool makers/filter creators than the companies that host content. As law professor Derek Bambauer highlights in a useful thread, (c)(1) was to protect websites and (c)(2) was designed to protect tool makers.
8) 230(c)(2) essentially allows 230(c)(1) providers / users to outsource decisionmaking. It’s thus in an entirely different context – one that disappears if you apply a myopic text-only approach to statutory interpretation.
— Derek Bambauer (@dbambauer) October 13, 2020
For what it's worth, as Neil Chilson notes, while Thomas is busy trying to strictly interpret the law based on what he sees in the text, he inadvertently slips his own language into the statute. Specifically, Thomas claims that 230 was only meant to apply to companies that "unknowingly" leave up illegal third-party content. But "unknowingly" is nowhere in the statute. So it's a bit odd for him to insist that he can only interpret strictly based on what's in the law, when he's also out there adding his own words to the statute.
Thomas then suggests that the courts could take a much broader reading of (f)(3) of the law (which is a key element of the NTIA's petition to the FCC), which says that a platform can still be liable for content that it "in whole or in part" helps in the "creation or development." He highlights two cases where many believe that language was stretched (Batzel and Dirty World). In both of those cases, someone deliberately chose to pass along potentially defamatory content (one to a mailing list and the other to a blog) with minimal commentary. In both cases, 230 was deemed to protect it.
But from the beginning, courts have held that §230(c)(1) protects the “exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.” E.g., Zeran, 129 F. 3d, at 330 (emphasis added); cf. id., at 332 (stating also that §230(c)(1) protects the decision to “edit”). Only later did courts wrestle with the language in §230(f)(3) suggesting providers are liable for content they help develop “in part.” To harmonize that text with the interpretation that §230(c)(1) protects “traditional editorial functions,” courts relied on policy arguments to narrowly construe §230(f)(3) to cover only substantial or material edits and additions. E.g., Batzel v. Smith, 333 F. 3d 1018, 1031, and n. 18 (CA9 2003) (“[A] central purpose of the Act was to protect from liability service providers and users who take some affirmative steps to edit the material posted”).
Under this interpretation, a company can solicit thousands of potentially defamatory statements, “selec[t] and edi[t] . . . for publication” several of those statements, add commentary, and then feature the final product prominently over other submissions—all while enjoying immunity. Jones v. Dirty World Entertainment Recordings LLC, 755 F. 3d 398, 403, 410, 416 (CA6 2014) (interpreting “development” narrowly to “preserv[e] the broad immunity th[at §230] provides for website operators’ exercise of traditional publisher functions”). To say that editing a statement and adding commentary in this context does not “creat[e] or develo[p]” the final product, even in part, is dubious.
There are, in fact, even some supporters of Section 230 who argue that Batzel and Dirty World were perhaps decided incorrectly, though I disagree. The key issue is that in neither Batzel nor Dirty World did the defendants create or develop the allegedly defamatory content. They did add commentary -- and would and should be liable if that commentary itself were defamatory. But in passing along the content, they took no part in its creation or development. Curation is different than creation or development.
And, indeed, this is why the Roommates case (which Thomas also mentions in passing) turns out to be important. While I initially disagreed with the ruling, in retrospect, I think it was exactly right. In Roommates, the company did not have 230 protections specifically for the content it developed (in that case, a pull-down menu about race that, it was argued, could violate fair housing laws). The pull-down itself was created by the company, and thus it was liable for it. That standard makes a lot of sense, even if Thomas brushes it off. With regard to the Batzel and Dirty World cases, and the interpretation regarding "editing," the story would be different if the editing was what introduced the allegedly defamatory content. But if that content came from the 3rd party, 230 protects the service provider or user.
From there, Thomas argues that the broad interpretation of (c)(1) has more or less obliterated (c)(2):
The decisions that broadly interpret §230(c)(1) to protect traditional publisher functions also eviscerated the narrower liability shield Congress included in the statute. Section 230(c)(2)(A) encourages companies to create content guidelines and protects those companies that “in good faith . . . restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Taken together, both provisions in §230(c) most naturally read to protect companies when they unknowingly decline to exercise editorial functions to edit or remove third-party content, §230(c)(1), and when they decide to exercise those editorial functions in good faith, §230(c)(2)(A).
Notice, again, that Thomas inserts an "unknowingly" where it does not exist.
But by construing §230(c)(1) to protect any decision to edit or remove content, Barnes v. Yahoo!, Inc., 570 F. 3d 1096, 1105 (CA9 2009), courts have curtailed the limits Congress placed on decisions to remove content, see e-ventures Worldwide, LLC v. Google, Inc., 2017 WL 2210029, *3 (MD Fla., Feb. 8, 2017) (rejecting the interpretation that §230(c)(1) protects removal decisions because it would “swallo[w] the more specific immunity in (c)(2)”). With no limits on an Internet company’s discretion to take down material, §230 now apparently protects companies who racially discriminate in removing content. Sikhs for Justice, Inc. v. Facebook, Inc., 697 Fed. Appx. 526 (CA9 2017), aff’g 144 F. Supp. 3d 1088, 1094 (ND Cal. 2015) (concluding that “‘any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune’” under §230(c)(1)).
Except that courts have regularly declined to read the law quite as broadly as Thomas suggests they have here.
Courts also have extended §230 to protect companies from a broad array of traditional product-defect claims. In one case, for example, several victims of human trafficking alleged that an Internet company that allowed users to post classified ads for “Escorts” deliberately structured its website to facilitate illegal human trafficking. Among other things, the company “tailored its posting requirements to make sex trafficking easier,” accepted anonymous payments, failed to verify e-mails, and stripped metadata from photographs to make crimes harder to track. Jane Doe No. 1 v. Backpage.com, LLC, 817 F. 3d 12, 16–21 (CA1 2016). Bound by precedent creating a “capacious conception of what it means to treat a website operator as the publisher or speaker,” the court held that §230 protected these website design decisions and thus barred these claims. Id., at 19; see also M. A. v. Village Voice Media Holdings, LLC, 809 F. Supp. 2d 1041, 1048 (ED Mo. 2011).
This somewhat salacious interpretation of what happened is also not entirely accurate. Under a Roommates standard, if the actions of Backpage were actually directly a part of the development of illegal content, then it could have been found liable. The issue here is that it was a lot more complicated, and the moderation decisions had non-nefarious, reasonable interpretations that Thomas has chosen to ignore. And, of course, it also leaves out that Backpage was eventually taken down and its execs are facing (somewhat complicated) criminal charges.
Consider also a recent decision granting full immunity to a company for recommending content by terrorists. Force v. Facebook, Inc., 934 F. 3d 53, 65 (CA2 2019), cert. denied, 590 U. S. —— (2020). The court first pressed the policy argument that, to pursue “Congress’s objectives, . . . the text of Section 230(c)(1) should be construed broadly in favor of immunity.” 934 F. 3d, at 64. It then granted immunity, reasoning that recommending content “is an essential result of publishing.” Id., at 66. Unconvinced, the dissent noted that, even if all publisher conduct is protected by §230(c)(1), it “strains the English language to say that in targeting and recommending these writings to users . . . Facebook is acting as ‘the publisher of . . . information provided by another information content provider.’” Id., at 76–77 (Katzmann, C. J., concurring in part and dissenting in part) (quoting §230(c)(1)).
Remember, the Force case involves the family member of a person killed by terrorists who blamed Facebook, despite there being no connection at all between Facebook and the terrorists who killed the family member. It was just "this person was killed by terrorists" and "some terrorists have at times used Facebook." Section 230 was designed for exactly these kinds of cases.
More troubling, of course, is that Thomas ignores the interplay of the 1st Amendment and Section 230 in cases like this. Facebook's recommendation algorithm is still protected under the 1st Amendment. Section 230 acts as a procedural shield to help get bad 1st Amendment cases dismissed early.
Other examples abound. One court granted immunity on a design-defect claim concerning a dating application that allegedly lacked basic safety features to prevent harassment and impersonation. Herrick v. Grindr LLC, 765 Fed. Appx. 586, 591 (CA2 2019), cert. denied, 589 U. S. —— (2019). Another granted immunity on a claim that a social media company defectively designed its product by creating a feature that encouraged reckless driving. Lemmon v. Snap, Inc., 440 F. Supp. 3d 1103, 1107, 1113 (CD Cal. 2020).
Again, Thomas is selectively choosing these examples. He ignores the Doe v. Internet Brands case that went the other way, suggesting that courts have not read the law quite as broadly as he insists they have. He also ignores the facts of the two cases he describes, which are not at all how he's presented them (we've explained how both the Herrick case and the Lemmon case have been mis-portrayed, and their facts show why the 230 rulings were proper).
A common thread through all these cases is that the plaintiffs were not necessarily trying to hold the defendants liable “as the publisher or speaker” of third-party content. §230(c)(1). Nor did their claims seek to hold defendants liable for removing content in good faith. §230(c)(2). Their claims rested instead on alleged product design flaws—that is, the defendant’s own misconduct. Cf. Accusearch, 570 F. 3d, at 1204 (Tymkovich, J., concurring) (stating that §230 should not apply when the plaintiff sues over a defendant’s “conduct rather than for the content of the information”). Yet courts, filtering their decisions through the policy argument that “Section 230(c)(1) should be construed broadly,” Force, 934 F. 3d, at 64, give defendants immunity.
This is an odd statement for Thomas to be making, as the supposedly conservative wing of the Supreme Court, of which Thomas is a key member, has been the one that has pulled the court further and further into arguing that conduct can be expression and thus protected under the 1st Amendment. Yet here, he's now suddenly complaining that the courts have been ruling similarly with regards to 230. Indeed, this is so odd a position that I'd guess he might find more support from the supposedly "left wing" of the Court for this particular argument.
None of this means that Section 230 will be picked apart by the Supreme Court. Thomas is just one Justice, and it's unclear if any of the others agree with him. And, to date, the Court has shown little interest in exploring these issues. You need more than one Justice on board to do very much. Still, this is a concerning argument, and it again suggests that Thomas is, perhaps, one of the least free-expression-supportive Justices on the Supreme Court.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: cda 230, clarence thomas, free speech, intermediary liability, section 230, supreme court
Reader Comments
If you have to lie to make your argument...
It would seem that even US supreme court justices aren't immune from the trend of attacking 230 with lies and misrepresentations, what a surprise.
If 230 really was so terrible you'd think that there would be at least one honest argument against it by this point. The fact that, to date, all the arguments have been based upon lies, misrepresentations, or misreadings makes for either a stunning coincidence or a strong indicator that even those attacking the law realize they can't do so honestly.
Re: If you have to lie to make your argument...
"It would seem that even US supreme court justices aren't immune from the trend of attacking 230 with lies and misrepresentations, what a surprise. "
Well, I'm honestly not too surprised to see that particular judge make that particular call. For a black judge so invested in equal rights cases, he's certainly gone the extra mile for the cause of making owning people a practical option.
I guess that's what happens when you combine a Randist with a fundamentalist archconservative.
Old Man Yells at Cloud
I wonder what kompromat they have on him...
Re:
Most likely he came to those conclusions on his own. Clarence Thomas is a big fan of government power über alles and quite eager to invoke in loco parentis whenever there's a force of authority he can support over the individual.
@"This is... an ahistorical reading of the situation." -- SO?
You're stuck on a few prior decisions that aren't necessarily wrong but don't apply to today's actualities.
We now KNOW how corporations will abuse "immunity" to gain money and power, to defund and discriminate against viewpoints. YOU must present a case for continuing that abuse -- and not require me to show exact harm, the evidence for which is locked up by the very miscreants behind corporate barriers manned by armies of lawyers.
Thomas has the correct view in light of all American history: FREE SPEECH as in NOT controlled by anyone EXCEPT well-known Common Law / court decisions -- which are ALWAYS to be in accord with Common Law, because that's the standard which American courts are to use!
No one can be set up as censors. Most of all not corporations which as condition of existence have agreed to SERVE The Public according to the whole commercial law.
Re: @"This is... an ahistorical reading of the situation." -- SO
Bruh.
Don't mind Woody, 'batshit crazy and ranting' is their default state, just flag and ignore so they can go back to frothing at the mouth while yelling at inanimate objects while the adults have real conversations.
How can a corporation control and enforce a copyright when you believe corporations have no legal rights, and how do you feel about corporations using copyright to censor speech?
Re: @"This is... an ahistorical reading of the situation."
Fun fact: CDA 230's protections codify the very common law court decisions Woody lies about supporting.
Re: Re: @"This is... an ahistorical reading of the situation."
Ugh, I hate it when my phone logs me out on its own.
Maz supports GOOGLE even though it has defunded him
because of the "dangerous and derogatory" comments by fanboys here. Masnick is so rabid a corporatist that he will even accept being arbitrarily and unaccountably harmed.
And the wall of text above leaves out the VERY broad interpretation that Masnick wants of "otherwise objectionable" to empower corporate censorship. That's exactly addressed in the below PDF.
If anyone here besides me is actually interested in the emergent view of S230 supporting Public Forums with The Public to benefit, NOT corporate walled-gardens, READ ALL OF:
https://assets.documentcloud.org/documents/6405555/Enigma-Malwarebytes.pdf
Re: Maz supports GOOGLE even though it has defunded him
By the way, as I state but will no doubt be cause for quibbling: YES, my focus is different, not on the defamation bits, but on what Maz leaves out though always wants.
And what, pray tell, is that? Be specific.
47 U.S.C. § 230 protects more than “corporate walled gardens”. You’d know that If you weren’t invested in your own intentional ignorance.
Re: Maz supports GOOGLE even though it has defunded him
Seriously, can you let us know what happened to stop you raving like a lunatic for a few months? Was it a change in medication? Solitary confinement? Being transferred to another facility? What can possibly have stopped your obsessive fictions from darkening the door here, then make you resume as if sanity had not entered?
Sorry, but saying that "Clarence Thomas doesn't like something" is a bit like saying "Kermit the Frog doesn't like something." Justice Thomas is just expressing the view of whatever high-paying hand happens to be up his backside.
Kermit the Frog doesn’t have the power to help decide the outcome of a court case involving Section 230.
Please go into more detail on MalwareBytes case...
This seems a bit weird, in that I've never heard of Enigma, but HitMan Pro's website has never had problems. This is one of the things I would definitely like details on. As far as Section 230 and anti-competitive practices go, imagine if Internet Explorer had blocked Mozilla or Chrome back in the day; Microsoft got sued just for including IE in the OS package, so common law might already be set to say that's leading to monopolistic behavior. Obviously, IANAL, but as a company I wouldn't want to take the risk unless it was community driven to block said site. Even if this is just US related, I could see issues in the EU for said practices.
Re: Please go into more detail on MalwareBytes case...
So... just because a registered company, rather than an obscure criminal organization, produces content that could be invasive, malware, or a potentially-unwanted-program, another company with products designed to filter such code should not be able to filter it? Or is it just because the company with the code subject to filtering by another is operating in the same space?
What details are you looking for? There are links to a ruling and another article with plenty of links. Is there something specific not covered?
P.S. Malwarebytes did indeed have flagging for Enigma precisely because their customers/users were flagging it.
As if section 230 protected free speech... Dude, you're as delusional as you are arrogant.
230 does protect free speech, though. It extends the protections of speech and association offered by the First Amendment to interactive web services and their users. The ability of service admins to moderate speech they find offensive enough to warrant deletion can provide more space to people whose voices would be silenced by a deluge of such speech. Or would you prefer to see racial slurs, anti-queer propaganda, spam, and other such bullshit overrun Twitter?
Re:
You do realize, when you reduce it down to "racial slurs" you're just straw manning it like a mofo?
Of course you do.
People are getting suspended or warned on Facebook for posting images as innocuous as onions. I once received a suspension from Facebook because they claimed I posted nudity, when in fact I posted an animated gif of The Dude from The Big Lebowski.
Facebook provides no real appeal system, and the rules are as arbitrarily enforced as they are on Twitter. On top of that, Facebook denies users whom they've permanently banned access to images, comments, and posts. As well, the suspended or banned user is denied login rights to any 3rd party site that uses Facebook Login... which is wrong on so many levels. Who is Facebook to say who is and isn't allowed to access a 3rd party site?
Any service that opaquely gathers info on people, while it arbitrarily enforces rules to ban actual dissent with no chance to appeal, is as far from "free speech" as prostitutes are from virginity.
No, I’m not. Racial slurs are offensive; they’re also protected by the First Amendment. By pointing out that 230 allows for the moderation of racial slurs, I’m also pointing out that removing 230 would remove the ability of services to moderate that (legally protected) speech.
So what? Neither you nor those people have a legally guaranteed right to use Facebook — even if Facebook fucks up its moderation.
Facebook is, if you used Facebook as a middleman for accessing that site. As I said: You don’t have a right to use Facebook. That extends to any service Facebook provides, which includes login credentials for other sites. You don’t have to like it; you need only learn to live with it.
And if Facebook admins were bound by the First Amendment to host all legally protected speech or compelled by law to give everyone a space on Facebook regardless of their behavior, you might have a point about which I could care. They’re not; you don’t.
Re:
So what?
Racial slurs?
So what. Sticks and stones.
Besides, both Facebook and Twitter allow users to block people whose content they find objectionable, which makes banning a rather moot point. At that point it's not meant to protect; it's pure retribution by the thought police.
Section 230 protects no one, because the lack of enforcement allows social media companies to push approved narratives by acting as an editor to curate the content, which by definition makes them a publisher.
In the end, you and your cohorts are hypocrites. If you were true to your left-wing rhetoric, your ilk would have serious issues with the feudal nature of social media. It uses surreptitiously self-administered dopamine hits to keep the dumb-masses filling social media pages with content... for free...
You'd think defending the masses from capitalists getting rich off the labor of those masses would resonate with the left, but power is way more important, so exploit away.
And nooses…
And I’m sure you’ll be bitching about Gab and Parler doing that to left-leaning speech/ideas any minute now~.
Section 230 makes no mention of the publisher/platform dynamic you (mistakenly) seem to believe invalidates 230. Facebook, Twitter, Gab, Parler, a Mastodon instance, 4chan…hell, basically any service that hosts third-party content can “push approved narratives” all they like. They’re under no legal obligation to be “neutral”; neither the First Amendment nor Section 230 require “neutrality” as a condition for access to their protections.
I have several issues with social media, starting with its intentionally addictive design. To assume I don’t is your problem, not mine.
Re:
Offensiveness is not a property of words or phrases, but of the person reading them. Centralized moderation is neither necessary nor sufficient to prevent people being offended, and it's disingenuous to imply the internet would turn into a cesspool without it. It may hold us over for now, till we get better technology that allows individuals to decide what to see. But it's also likely that certain policies will be seen as harmful censorship in retrospect. You imply anti-queer propaganda is bad, but 60 years ago, the USA had mostly agreed (via the Hays Code) that queerness was perverse and never to be mentioned on film.
But it is necessary to prevent the people using that speech from harassing the targets of that speech off the service. Please explain why you think Facebook, Twitter, etc. shouldn’t have the right to do that.
Counterpoint: 8chan.
Additional counterpoint: I was a regular on 4chan’s /b/ many, many, many years ago. I speak from experience when I say a lack of moderation will turn any web service into a cesspool.
And if Facebook could censor people outside of Facebook (feel free to replace “Facebook” with any other web service), you might have a point.
Why, it’s almost as if the Hays Code — which was created to prevent government censorship, and by design acted as a censorship bureau — censored any attempts to have overt queerness portrayed on film in any way. Imagine that~.
Facebook can’t do that because Facebook can’t stop TERFs from going to some other website and fawning over J.K. Rowling’s latest anti-trans missive. If you have evidence to the contrary, feel free to point it out. Otherwise:
Moderation is a platform/service owner or operator saying “we don’t do that here”. Personal discretion is an individual telling themselves “I won’t do that here”. Editorial discretion is an editor saying “we won’t print that here”, either to themselves or to a writer. Censorship is someone saying “you won’t do that anywhere” alongside threats or actions meant to suppress speech.
Re: Re:
"Centralized moderation is neither necessary nor sufficient to prevent people being offended, and it's disingenuous to imply the internet would turn into a cesspool without it."
I went through high school and college while the internet was tottering around with little to no centralized moderation by the various platforms. Finding anything worthwhile was always an exercise of wading through a cesspool looking for golden nuggets while learning to ignore the soft debris floating on the top of it.
Bluntly put, I'm quite happy not to have every other link clicked toss up a purple monkey popup along with some nazi imagery and/or pics of suspiciously young girls in various states of undress.
The internet back then was more or less what you expect to find on the darknet today - and I'm not keen on going back to those days.
Re: Re: I agree
After reading what you keep saying I would ban you too.
Re:
[Projects facts not in evidence]
Re:
As if section 230 protected free speech... Dude, you're as delusional as you are arrogant.
It does. Quite impressively. Without it, most sites would not host any user generated content at all. The important free speech supporting aspect of 230 is that the combination of c1 and c2 have made it so that sites are actually willing to host user content, and to create specialized communities. That is what is so free speech affirming about it.
This has been explained multiple times.
It seems like every time someone refers to "illegal content", they actually mean "maybe potential defamation, or at least someone is going to make highly spurious claims that a thing is defamation because they don't like it". That's some highly sophisticated (/s) framing right there.
Figures
The wrong justice crossed the river.
Any website or forum that hosts comments or user-uploaded content is protected by Section 230. That includes Facebook, Reddit, Tumblr, YouTube, newspapers, and local community nonprofits. Basically, free speech on the Internet would be almost nonexistent without Section 230.
He is twisting various legal opinions and ignoring others in order to say section 230 needs to be limited.
The only people who want Section 230 gone are conservatives or stupid politicians who do not understand it and complain about moderators who block users who post fake news, extreme racist content, or spam.
Section 230 is the reason why websites like YouTube, Reddit, and Tumblr exist in the USA yet are used by billions of users worldwide. In many countries they would be limited or simply too expensive to run due to the cost of random legal actions taken by people offended by the comments or user content on those websites.
I've come to expect it
Clarence Thomas is an authoritarian, misogynist and pervert.
Thomas
That's because conservative "textualism" is a smokescreen for coming to the conclusions that they want.
https://attorneyatlawmagazine.com/truth-about-textualist-judging
Hmmmmmm..... looks like the judge is gonna retire soon and someone, 'cough' Donald Buck 'cough', has offered to buy him that chalet in the Alps he's always dreamed of, in return for a little, errr... legal assistance. :)
Gotta love that crony capitalism.
Re:
"Hmmmmmm..... looks like the judge is gonna retire soon and someone, 'cough' Donald Buck 'cough', has offered to buy him that chalet in the Alps he's always dreamed of..."
Clarence Thomas is a US Supreme Court justice. It's a lifetime appointment from which you normally "retire" when they carry you out feet first.
It's unlikely that he'll accept bribes. He's more of an ultra-authoritarian of the kind who invariably interprets the law to favor authority over the individual.
Free speech includes wrong speech or "hate speech" .....
So I'm not gonna cry if rascals who claimed 230 protection, then denied free speech to others, are burned.
When Twitter, Reddit and Youtube go out of business - we'll just use other platforms.
Re: Free speech includes wrong speech or "hate speech" .....
"rascals who claimed 230 protection - then denied free speech to others are burned"
Wait, a web platform denied speech everywhere in the world outside their platform? That sounds like a problem. Do you have an example?
"When Twitter, Reddit and Youtube go out of business - we'll just use other platforms."
What's stopping you from doing that now?