It's One Thing For Trolls And Grandstanding Politicians To Get CDA 230 Wrong, But The Press Shouldn't Help Them
from the stop-this-nonsense dept
There's an unfortunate belief among some internet trolls and grandstanding politicians that Section 230 of the Communications Decency Act requires platforms to be "neutral," and that any attempt to moderate content, or any form of bias in a platform's moderation focus, somehow removes 230 protections. Unfortunately, many in the press are buying into this flat-out incorrect analysis of CDA 230. We first saw it last year in Wired's giant cover story about Facebook's battles, which twice suggested that too much moderation might cost Facebook its CDA 230 protections:
But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.
This is not just wrong, it's literally backwards from reality. As we've pointed out, anyone who actually reads the law should know that it was written to encourage moderation. Section (b)(4) directly says that one of the policy goals of the law is "to remove disincentives for the development and utilization of blocking and filtering technologies." And (more importantly), section (c)(2) makes it clear that Section 230's intent was to encourage moderation by taking away liability for any moderation decisions:
No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected...
In short: if a site decides to remove content that it believes is "objectionable" (including content it finds to be harassing), there is no liability for the platform even if the content blocked is "constitutionally protected."
Indeed, this was the core point of CDA 230 and the key reason why Rep. Chris Cox wrote the law in the first place. As detailed in Jeff Kosseff's new book on the history of Section 230, Cox was spurred into action after reading about the awful ruling in Stratton Oakmont v. Prodigy, in which a judge decided that because Prodigy did some moderation of its forums, it was liable for any content that was left up. This was the opposite of the finding in an earlier lawsuit, Cubby v. CompuServe, which held CompuServe not liable precisely because it didn't do any moderation.
However, part of Prodigy's pitch was that it would be the more "family friendly" internet service, compared to the anything-goes nature of CompuServe. The ruling in the Stratton Oakmont case would have made that effectively impossible -- and thus Section 230 was created explicitly to encourage platforms to experiment with different models of moderation, so that different platforms could choose to treat content differently.
Unfortunately, it seems that this myth that CDA 230 requires "neutrality" is leaking out beyond just the trolls and grandstanding politicians -- and into the more mainstream media as well. Last weekend, the Washington Post ran a column by Megan McArdle about Facebook's recent decision to ban a bunch of high-profile users. Whether or not you agree with Facebook's decision, hopefully everyone can agree that this description gets Section 230 exactly backwards:
The platforms are immune from such suits under Section 230 of the Communications Decency Act of 1996. The law treats them as a neutral pass-through — something like a community bulletin board — and doesn’t hold them responsible for what users post there. That is eminently practical given the sheer volume of material the platforms have to deal with. But it creates a certain tension when a company such as Facebook argues that it has every right to kick off people who say things it considers abhorrent.
Facebook is acting more and more like a media company, with a media company’s editorial oversight (not to mention an increasing share of the industry’s ad revenue). If Facebook is going to behave like a media provider, picking and choosing what viewpoints to represent, then it’s hard to argue that the company should still have immunity from the legal constraints that old-media organizations live with.
It's not hard to argue that at all. Once again, the entire point of Section 230 was to encourage moderation, not to insist on neutrality. The Washington Post, of all newspapers, should know better than to misrepresent Section 230.
But it wasn't the only one. Just days later, Vox posted one of its "explainer" pieces, also about Facebook's recent bans. The Vox piece, at least, quotes Section 230 -- but only section (c)(1) (the part that gets more attention), ignoring (c)(2), the part that makes clear the law encourages moderation. Instead, Vox's Jane Coaston falsely suggests that Section 230 draws a distinction between "media" companies and "platform" companies. It does not.
But if Facebook is a publisher, then it can exercise editorial control over its content — and for Facebook, its content is your posts, photos, and videos. That would give Facebook carte blanche to monitor, edit, and even delete content (and users) it considered offensive or unwelcome according to its terms of service — which, to be clear, the company already does — but would make it vulnerable to same types of lawsuits as media companies are more generally.
If the New York Times or the Washington Post published a violent screed aimed at me or published blatantly false information about me, I could hypothetically sue the New York Times for doing so (and some people have).
So instead, Facebook has tried to thread an almost impossible needle: performing the same content moderation tasks as a media company might, while arguing that it isn’t a media company at all.
This "publisher" v. "platform" concept is a totally artificial distinction that has no basis in the law. News publishers are also protected by Section 230 of the CDA. All CDA 230 does is protect a website from being held liable for user content or moderation choices. It does not cover content created by the company itself. In short, the distinction is not "platform" or "publisher" it's "content creator" or "content intermediary." Contrary to Coaston's claims, Section 230 equally protects the NY Times and the Washington Post if it chooses to host and/or moderate user comments. It does not protect content produced by those companies itself, but similarly, Section 230 does not protect content produced by Facebook itself.
There are enough legitimate issues to be concerned about regarding the internet and big platforms these days; having the media repeatedly misrepresent Section 230 of the CDA and suggest -- falsely -- that it's a special gift to internet platforms doesn't help matters at all. CDA 230 protects platforms that host user speech -- including from any moderation choices they make. It does not require them to be neutral, and it does not require them to define themselves as a "platform" instead of a "publisher." News organizations should know better and should stop repeating this myth.
Filed Under: cda 230, jane coaston, liability, megan mcardle, platforms, reporting, section 230
Reader Comments
To be fair, upon reading the quotation, I don't believe they're misrepresenting or misinterpreting 230 at all. I believe they understand it correctly, but that they're disagreeing with it, saying that it's a bad principle that ought to be changed.
Re:
While I would agree, I am certain I can find a different article from WaPo claiming Facebook needs to do more to address hate speech or terrorist content or fake news or whatever hot-button content we are blaming social media for this week, which somewhat undermines that idea.
Re: Re:
How much would you bet that they're by the same person? It wouldn't surprise me at all, given the cognitive dissonance that abounds where freedom of speech and moderation demands collide.
Re:
Having watched the sites in question descend over the years into political games like everyone else these days, that is not unexpected (except Vox; they were always shady).
Predictable and sad. But not really surprising.
Re:
Well, there are really two parts to the quote:
This part gets things right, mostly, except for including the "neutral" part. Then there's this part:
This is where the author loses track. It only creates tension if you're trying to conflate what Facebook does with what WaPo does. WaPo would be liable for anything illegal in this article because WaPo posted it. However, if the content of the article were posted in a user comment by someone not part of WaPo, WaPo wouldn't be liable for the content, and it also wouldn't be liable for choosing to remove it for violating its standards. That's what CDA 230 protection does, and that's what I think the author doesn't fully understand.
If WaPo stopped posting its own original content and just said "Hello public, post your news stories here. Liberals only though. We'll remove it if it's conservative," they'd still get CDA 230 protection, as they should. So what's the author's problem? Is it that Facebook doesn't announce its bias? Is it that they're getting "an increasing share of the industry's ad revenue?"
Re: Re:
Most of the problem comes from a lack of clarity in the writing.
Ideas that propose to modify (or repeal) Section 230, as McArdle argues for, are conflated with the trolls' claim that it currently requires political neutrality.
Her point is that as Facebook adopts a more specific editorial and political line on what is allowed and what is banned, it looks more and more like the organizations (and operations) that are not protected by Section 230, like media organizations.
And that could mean the law could/should be modified to remove their protection.
The last paragraph from that article, which immediately follows the part quoted in this article, is:
Those grey areas are exactly the problem.
If a user/columnist were paid just a fraction of the ad revenue they bring in, and had no employment contract with the news organization but merely adhered to its ToS, should Section 230 still apply?
If you keep stretching it, making users more like hired reporters by keeping a list of "top" users promoted by the news organization, would it still apply?
Re: Re: Re:
Except that the media organizations are protected by section 230 when it comes to user comments, just like Facebook.
Yes. I see this as no different from YouTubers getting paid based on views. The platform isn't involved in the content.
That depends on what you mean by "promoted." As far as I'm aware, nobody thinks YouTube "promotes" PewDiePie, although his videos do show up on my YouTube home page from time to time. So if he said something defamatory, I wouldn't hold YouTube responsible. Now, if YouTube somehow contributed to his video or took specific action to increase his viewership that wasn't available to others (I'm not talking about an arbitrary "most views" list, but more like a banner at the top of the page that said "A message from YouTube: Watch PewDiePie. He's awesome."), then I think that could cross the line from passive platform to liable contributor.
...Section 230...
BTW, I almost misspelled Section 230 by accident. I almost typed it as Sextion 230. I apologize.
Can anyone explain to me what this "Section 230" is all about without making my brain feel like it's on acid (even though I never took acid and never want to, because it sounds weird)? I need to hear it from a neutral point of view to be able to understand this.
Re: ...Section 230...
It's a law that basically just says if someone says something online that could be deemed illegal, you are not allowed to punish the site they posted their speech on or anyone else who wasn't directly involved in actually making the statement. You have to punish the person that broke the law.
Re: Re: ...Section 230...
It's a law that basically just says if someone says something online that could be deemed illegal, you are not allowed to punish the site they posted their speech on or anyone else who wasn't directly involved in actually making the statement. You have to punish the person that broke the law.
So this basically prevents the site from being caught in the crossfire? Interesting.
Unfortunately it doesn’t protect me from a full-force soccer ball bullseye to the nuts during a soccer game which occurred during high school because the other person couldn’t kick it anywhere else.
Trust me, Cdaragorn, my high school years are a novel in itself.
Re: Re: Re: ...Section 230...
So .. internet forums are like soccer.
Re: Re: Re: Re: ...Section 230...
So...internet forums are like soccer
No that’s not what I meant. That whole section about my soccerball-to-the-nuts experience was just a silly story that spawned from reading Cdaragorn’s explanation of Section 230. Because I was joking when I had said previously:
Unfortunately it doesn’t protect me from a full-force soccer ball bullseye to the nuts during a soccer game
Yeah, I suck at trying to find something funny to say.
Re: ...Section 230...
It's a law that people with political ambitions, opportunists, and several other foul scoundrels throughout all realms love when it's helping them, hate when it's not, call for its change when it's the latter, and stay silent as a night's field when it's the former. It was made back during an unusual period of common sense (or they just weren't paying attention because of other stuff; take your pick), when the internet was new and magical.
That's as neutral a gist as I'm capable of giving, political cynic that I am.
Re: ...Section 230...
When people talk about section 230, they’re usually talking about a law written at Title 47 US Code, Section 230 (or 47 USC 230 for short). You can read it here, if you’d like.
What happened is, traditionally there was a rule that if you wrote something, or published something, then you were legally responsible for it. If you sold or otherwise distributed copies of it, then you might be responsible for it, depending on certain issues that aren’t important here. So if you wrote a book, you’d want to be careful not to defame someone in it such that you could be sued for damage to the reputation of someone that might sue you. Publishers would not publish a book unless they’d gone through it to ensure that such statements couldn’t be used to sue them (for example, if the statement is true, then it’s not defamatory, even if it causes reputational harm, so a fact-checker would verify it). Mere bookstores, for example, couldn’t be expected to carefully review everything they sold, so they got a bit more leeway, but if they were on notice, they might get in trouble for continuing to carry certain books.
Everyone could more or less live with that, but by the early 1990s, online services were becoming popular. One of them was CompuServe, and there was a court case about defamation that one of the users of the service posted publicly. No one cared much about suing the user, but CompuServe was owned by H&R Block at the time (as I recall) and they had money, so they were sued on the basis that they were a publisher and were responsible for the user’s post. The Court decided that CompuServe was more like a public xerox machine — anyone could use it, and it was automatic, and no one supervised it, so only the individual user should be responsible, not the service.
Unfortunately a few years later, there was another similar lawsuit involving Prodigy (another online service), and there, the court found that Prodigy was responsible because it had moderators who could remove posts and who monitored boards. Of course, the moderators were just ensuring that people kept things civil and didn’t curse, since Prodigy was meant to be a family-friendly place, but the court there felt that if they were in for a penny they were in for a pound, and it was their fault they were not going over everything users posted with a fine-toothed comb.
So this meant that in the mid-90s, just as the Internet was taking off, everyone who provided a place for users to post their own comments or other materials knew that they had better not moderate anything, or else they’d be responsible for moderating everything for every possible offense, which was totally impractical.
At that time, spurred on by the usual sort of hysteria, Congress decided it needed to do something about all the pornography showing up online, and they began work on a part of a major telecommunications law reform which included a part requiring online services to remove or limit access to indecent material.
Then someone told them that the reason that no online company was ever going to remove indecent material was because of these court cases that would expose the services to total liability if they removed so much as a single dirty picture. Exasperated, Congress added section 230 as a part of their reforms.
What the law does is, it says that no one is responsible for material posted online that originates from some other source. And that this is true no matter how much or how little effort is undertaken to remove any other material online.
While some of the other provisions were found unconstitutional (since it violated free speech to compel people to limit access to the internet to adults, basically), this part has not been challenged and has turned out to be fundamentally important to the development and continued viability of the internet. Without it, sites would either cease moderating (letting spam, hate speech, porn, etc. proliferate) or would cease allowing users to post anything.
Recently, some people have objected to this protection because they find it politically or commercially expedient to do so — hateful politicians don’t like being identified as such, or having their speech or their supporters’ speech removed by sites that find it unacceptable, and media outlets that operate offline and remain bound by the traditional rule would find it easier to compete if there were no services like Facebook or YouTube.
They’re making a big mistake though, because this law really is a key foundation for the internet, and without it, things will not go well.
Re: Re: ...Section 230...
They’re making a big mistake though, because this law really is a key foundation for the internet, and without it, things will not go well.
As it stands now, people are defenseless against being defamed by search engines that amplify defamatory content which would otherwise remain in the corners of the internet for a small audience, rather than become part of a dossier used by just about everyone. Moreover, a defamed person will have to sue not the original publisher, or the search engines, but anyone who finds the lies in the search engine and then repeats them, as many inevitably will.
Australia held Google liable for search results in a 2012 case. There is no 230 immunity in Australia for the obvious harm inflicted. Americans ignore cases like this because it shows that search engines and other intermediaries spread defamation in both countries, but are liable in only one.
Section 230 is what politicians can repeal if they want to encourage platforms to remain neutral. Section 230 itself does not require neutrality, but the law could be made a bargain that requires it.
Which they will be…because a platform cannot be biased against specific content if it has no content to be biased against.
Re: Re: Re: ...Section 230...
So basically John here has admitted his aim is to censor people by suing the phone company.
See what I did? You never sued the phone company, did you, John? Lol
Re: Re: Re: ...Section 230...
Well, first of all, the search engines aren't defaming them. They're reporting facts. The search engine says "This other website says X."
If a reporter says in an article, "I asked <person> for comment and he said <something defamatory>" the reporter isn't guilty of defamation. They're accurately reporting what was said.
But it would still be out there.
A defamed person doesn't have to sue anyone. They choose to, and in most cases that I'm aware of, that only happens after less drastic methods of resolving the issue fail.
We aren't really discussing Australian law, but since you brought it up (again), I looked at that case (I believe you're still referring to the Duffy case) and I didn't see where it mentioned she sued Ripoff Report, where the defamatory content was actually posted. I only read that she was unsuccessful in getting it removed from there. I wonder if her choice of target has more to do with the size of Google's wallet than with their relative liability?
Also, Google did remove the search results from google.com.au. That just wasn't satisfactory to the Court, which apparently wants Australian law to apply globally.
It wouldn't result in neutrality. It would result in either a complete lack of any kind of moderation which would cause any UGC site to be overwhelmed with spam and porn, or a shutdown of the UGC site altogether.
Re: Re: Re: ...Section 230...
"Australia held Google liable for search results in a 2012 case. There is no 230 immunity in Australia for the obvious harm inflicted."
Australia is a blatant shit-show in general when it comes to online legislation. I'm not sure we want to hold up a country as a good example when said country has laws on the books which render every program written in the country unusable or outright illegal in the rest of the world.
And that's even before we go into that Australian case specifically and notice that the complaints are factually wrong. Google does not "serve" information. At most it tells you where to find addresses corresponding to keywords entered in user queries.
The Australian case makes the argument that a library index can be illegal. Though I understand why you'd be ecstatic to grasp at any straw offering itself in favor of your general hate-boner against neutral platforms, Baghdad Bob.
Re: Re: Re: Re: ...Section 230...
The library would have to be put on notice that one of the books is defamatory, and that same liability exists in the US.
Australia is a blatant shit-show in general when it comes to online legislation. I'm not sure we want to uphold a country as a good example when said country has laws on the tablets which render every program written in the country unusable or outright illegal in the rest of the world.
The UK, India, Canada, and nearly every country in the world have similar laws. The US is out of step with Section 230.
Re: Re: Re: ...Section 230...
As it stands now, people are defenseless against being defamed by search engines that amplify defamatory content
Search engines don't defame; people do. They don't amplify lies either; people do, by sharing the content. Try not using a search engine to find what you want on the internet. Good luck with that.
...which otherwise would remain in the corners of the internet for a small audience, rather than become part of a dossier used by just about everyone. Moreover, a defamed person will have to sue not the original publisher, or the search engines, but anyone who finds the lies in the search engine and then repeats them, as many will inevitably do.
What? Don't talk rubbish, AC. Nobody has to sue anyone over anything, not even defamation. When some lying troll actually tried to get me fired from my job and published multiple defamatory posts about me, I couldn't go after the troll, but I could refute the lies, which I did. I was also able to persuade some of the platforms on which the lies were hosted to remove them on the grounds that they were troll posts, which could impact the credibility of those sites as information sources. Result: only one remains, on a site that has little credibility as an information source due to its willingness to host troll posts. Honestly, I'm not even bovvered about that. Nobody takes them seriously.
Australia held Google liable for search results in a 2012 case. There is no 230 immunity in Australia for the obvious harm inflicted.
The obvious harm was unfortunately self-inflicted due to a flame war that got out of hand. Had the person involved acted more circumspectly she would never have ended up in that situation. We've discussed this and are now on friendly terms, so anyone who takes a pop at her will be told to knock it off.
Americans ignore cases like this because it shows that search engines and other intermediares inflict harm by spreading defamation in both countries, but are liable only in one.
Search engines spread nothing; people spread all kinds of things by repeating them. Try living without search engines if you think they're so evil. Let us know how you get on.
Section 230 is what politicans can repeal if they want to encourage platforms to remain netural. Section 230 itself does not require neturality, but the law itself can be a bargain that requires it.
Or the law can cause platforms to shut down altogether rather than bear the burden of enforcing your idea of neutrality. This is why we mock you, AC. This is all kinds of stupid and wrong.
Re: ...Section 230...
I use acid regularly
pH 6.3 if I recall correctly
Re: Re: ...Section 230...
Baldi says: Now it’s time for everybody’s favorite subject. MATH
6.3 x 2 = 12.6
pH 12.6 is the acidic level of bleach.
Conclusion: You drink a half a cup of bleach a day?
...
(I’ll see myself out)
Re: Re: Re: ...Section 230...
For the sake of the world, never pour liquid from one beaker into another. Ever.
Re: Re: Re: Re: ...Section 230...
That’s a fact. You never know what monstrosity will be created.
Re: ...Section 230...
No need to paraphrase really:
https://en.wikipedia.org/wiki/Section_230_of_the_Communications_Decency_Act
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
Basically - if something bad is hosted on a site, you have to go after the person who actually posted it, not the platform it was posted on. You have to go after the person who committed the act, not the nearest innocent but cash-rich target.
I object to "Encourage"
It seems to me it doesn't so much encourage moderation as avoid discouraging it. That's not exactly the same thing.
Re: I object to "Encourage"
From the plain language of the statute itself, yes. But have a look at the record of the congressional discussion when that language was drafted - there is much talk about the intent being largely about "encouraging" them to do so:
https://www.congress.gov/congressional-record/1995/8/4/house-section/article/h8460-1
Re: Re: I object to "Encourage"
Non sequitur.
The discussions have zero bearing on what was actually written into law.
"Spirit of the Law" won't get you far when faced with Letter of the Law in a courtroom.
Re: Re: Re: I object to "Encourage"
Comments in the legislative record are often cited to support particular interpretations of a law where there are several possibilities.
Do you ever get tired of being wrong all the time?
Re: I object to "Encourage"
Could you explain further? As has been noted many times, including in this article, Sec 230 was written in response to a legal ruling that made content hosts liable for the content users post, based on the greater level of moderation Prodigy performed compared to CompuServe. This leads to the conclusion that the way to avoid content liability is to not moderate.
230 has two sides: the first provides that you can only hold the user accountable for the user's actions, not the service hosting them. The second part encourages moderation by clearly stating that moderation does not trigger liability.
Please explain how that encouragement serves to discourage moderation in a way that is not also discouraged by the legal precedent in Prodigy.
Facebook, Twitter, etc.: this is where people get their news, no matter how BAD that is. It's the town square of the Internet. These Leftist companies are booting off people on the right. Really, anyone that doesn't agree with them is a Radical right winger. Pretty much anyone and everyone. You ask them to point out where the hate and the violence is, and there is none. It's mostly ALL on the left, which they ignore. What you end up with is an all-leftist view of the world, which is what they want. Everything else is Hate Speech, etc. It's beyond laughable.
These Leftists have such a hard time trying to find any hate on the right, they keep trying to create it on their own, generally failing at it. Jussie Smollett is just one of many. If the right is just all this hate, they should easily be able to point it out. Instead, they keep faking right hate. They do such a poor job generally that they get caught. It's just not realistic what they do.
Here's a list of 30 hate crimes.
https://twitter.com/MrAndyNgo/status/1097020092791934976
Re:
such a hard time trying to find any hate on the right
Hahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahaha haha....
Re:
"These Leftist have such a hard time trying to find any hate on the right..."
Oh, right, because conservatives calmly and quite reasonably describing certain subsets of the population as not being quite human or deserving of the same protection as everyone else isn't "hate".
"These Leftists companies are booting off people on the right. Really anyone that doesn't agree with them is a Radical right winger."
The same way anyone not agreeing with you is a..."Leftist". A word you used so often in your little diatribe you'd think the guy who wrote your little commentary might have noticed a bit of blatant hypocrisy by now.
"You point out where is the hate, the violence, there is none."
Ah. The Ferguson, Milwaukee and Charlotte riots didn't swamp the internet with frequent online rallies to kill more black people or try to portray them as subhuman?
Baghdad Bob, your ability to rewrite factual history is, as usual, unsurpassed.
Re: Re:
One presumes that cheering on shooting up a church because it had black people in it as a good thing =/= hate in your book, AC. Good grief!
The politicians are using neutrality as a condition of keeping section 230, not saying that 230 requires neutrality.
Re:
So the politicians are attempting extortion?
This means, in essence, that they want to force a platform into hosting content/users it does not want to host. A “content-neutral” platform would be forced to host both pro- and anti-LGBT propaganda even if the platform’s owners did not want their platform associated with the anti-LGBT bullshit. Nothing about that outcome is good…or constitutional, for that matter, given the First Amendment’s protections for association.
Thought Control Legislation
CDA 230 so far appears to be used almost exclusively to punish folks with politics different than the platform owners.
Re: Thought Control Legislation
Considering they are the platform owners, they have every right to remove folks whose politics differ from their own.
That claim is simply false for many reasons, though. Section 230 is used far more often as a way for platforms to let their users speak more openly than for anything else. It's just that banning a user makes for a much higher-profile case than all the speech going on as a direct result of having this protection.
Re: Thought Control Legislation
...what?
Wikipedia's article on CDA 230 lists several cases that have nothing at all to do with political views.
The idea that a platform owner "punishes" users with differing politics by deleting their content, and is then immune from liability because of CDA 230, does not at all appear to be a major use of the law.
It is technically “moderation”, which is encouraged by the grant of immunity from legal liability for moderating content found in CDA 230.
Re:
Ok I'll grant that, but I'm still not seeing a vast amount of "I don't like what these people have to say so I'm going to delete it and because of CDA 230, there's nothing they can do about it! MWA HA HA"
Has that happened? I'm sure it has. Is it what section 230 is "almost exclusively" used for? I don't think so.
Re:
Of course, it's not at all clear what liability they would face for politically-motivated moderation in the absence of section 230 anyway.
Re: Re:
If they were state actors it would be a constitutional violation.
[ link to this | view in chronology ]
But they are not state actors, nor will they be at any point in the near future. The First Amendment does not apply to their moderation choices.
Re: Re: Re:
And if they were extraterrestrials it would be a science fiction story. So what?
Re: Thought Con Troll
CDA 230 so far appears to be used almost exclusively to punish folks with politics different than the platform owners.
That is flat out incorrect. Sounds like a false claim being repeated by the alt-right. (A group of "Snow Flakes" that suffer mightily at every imagined slight, I understand.)
Without CDA 230 TD couldn't even delete spam or scam posts. They would have to leave the board unmoderated or hold every post for inspection first.
Or, not have posts.
Re: Re: Thought Con Troll
Section 230 has nothing to do with moderation one way or the other, except to say that they can't be held directly liable if they don't moderate.
How idiots have been fooled into thinking section 230 is the reason their Nazi friends are being told they're not welcome on private property is beyond me, but then people were fooled into thinking that removing net neutrality would make things more neutral, so...
Re: Re: Re: Thought Con Troll
"How idiots have been fooled into thinking section 230 is the reason their Nazi friends are being told they're not welcome on private property is beyond me..."
Because at the end of the day, irrespective of country or continent, the biggest card the bigoted minority can consistently pull is that of crying loudly over how the majority simply doesn't want to listen to them.
We're talking about a minority of people so generally offensive no one wants anything to do with them being offered the convenient excuse that the reason they are thrown out of any forum is because of a conspiracy and has nothing to do with them being anthropomorphic manifestations of malice and incompetence.
Re: Re: Re: Re: Thought Con Troll
Wrong, you do not hear them complaining that GAB does not attract a large audience, but rather complaining that the social media sites with the audience are banning them, which in their minds is censorship. They do not accept that their viewpoints are not that popular, but rather blame 'censorship' for their views not spreading into wider society.
Re: Re: Re: Thought Con Troll
How idiots have been fooled into thinking section 230 is the reason their Nazi friends are being told they're not welcome on private property is beyond me,
I'm not sure if it's 'fooled into thinking' so much as denial, a refusal to accept that maybe they/their buddies are being kicked because they're terrible people since that would just be silly, and since that's wrong clearly they're being given the boot because people are just unfairly mean to them for absolutely no reason.
Re: Re: Re: Re: Thought Con Troll
I understand why they're in denial, I just don't get why section 230 seems to specifically get blamed recently. It has nothing to do with the situation.
I'm probably just confused because I'm again looking at it from the point of view of what the law actually says and does rather than whether distortion is being pushed in certain echo chambers.
Re: Re: Re: Thought Con Troll
Do you think misogyny and misandry are defined and enforced equally?
Re: Thought Control Legislation
It's ok for a brick and mortar to not bake,
but those websites gotta display all mah silliness!
Wipppeeeeeee
Re: Thought Control Legislation
Why is it that so many people who oppose section 230 have no idea what it says?
Re: Re: Thought Control Legislation
Populism. Why think for yourself when you can just do whatever is most popular among your thought leaders?
Well that's handy
Hosting articles that get basic facts that wrong nicely shows how dedicated the sites/news agencies aren't to fact-checking and making sure they're not just repeating long-debunked lies and falsehoods.
As well if they're going to repeat the lie that 230 requires 'neutrality' then they'd better hope they don't have comments sections on their sites, because I could easily see that coming back to bite them hard.
Doesn't matter what the LAW says
It only matters what the politicians, judges, lobbyists, PR wanks, the media, and corporations TELL YOU THE LAW SAYS...
If it only matters what a police officer 'thinks' the law says (aka qualified immunity), and that's the only standard they have to be held to, then does it really matter what the law says, or even if the law exists?
Honest Judge, I thought it was illegal to eat an ice cream cone in public, so I blew his head off (along with the ice cream cone). Sorry judge, it won't happen again (in this novel fashion), next time I'll be hopping on one leg and holding my nose while shooting the 'suspect' to ensure that nobody has ever violated the law in THIS SPECIFIC WAY before. Now give me my 2 month paid vacation and I'll be on my way, see you again in about 3 months for the next case.
Re: Doesn't matter what the LAW says
No, only SCOTUS gets to "interpret" what the law says.
Every other level is open to Appeals Court.
Judges do NOT like being reversed on Appeal.
Appellate Judges do NOT like judges below them needing reversal.
Yes, there are always going to be a few that have their own unique interpretation of a Law.
And you may not agree with an Appellate decision.
But after going through several iterations where they all find against you, you really can be quite sure that the letter of the law is against you.
Re: Re: Doesn't matter what the LAW says
The SCOTUS has never ruled on distributor liability questions.
Here's a tip for the usual Techdirt trolls, blue boy, Hamilton and John Herrick Smith:
If you honestly, sincerely believe that fake comments are a priority scourge that must be dealt with by the swift removal of Section 230... sure.
Just be aware that when that happens, the FCC and your hero Ajit Pai can and will be held responsible for the fake comments submitted to their platform. Once Pai has been dealt with for criminally allowing the fake comments net neutrality can be un-repealed.
I'll be not holding my breath in anticipation for the fuck all you're going to do.
If a reporter says in an article, "I asked <person> for comment and he said <something defamatory>" the reporter isn't guilty of defamation. They're accurately reporting what was said.
Actually they would be guilty of defamation in that scenario, though being put on notice would also increase damages. Gossip is not a defense against defamation except online.
*which otherwise would remain in the corners of the internet for a small audience, rather than become part of a dossier used by just about everyone.
But it would still be out there.
Yet it wouldn't be found by anyone beyond the original audience.
a defamed person will have to sue not the original publisher, or the search engines, but anyone who finds the lies in the search engine and then repeats them, as many will inevitably do.
A defamed person doesn't have to sue anyone. They choose to, and in most cases that I'm aware of, that only happens after less drastic methods of resolving the issue fail.
I'm talking about situations where the original publisher couldn't be sued, but the defamation lingers. People who find it later on through search engines, then repeat it (because they were looking for "dirt" on someone they didn't like that they were predisposed to believe), and they will be the ones sued as NEW publishers of the lies they found online. It is their own fault for believing and spreading what they find in Google, but they were also helped in making this mistake.
Re:
Hey, John Smith!
Thomas Goolnik.
Re:
Actually they would be guilty of defamation in that scenario, though being put on notice would also increase damages. Gossip is not a defense against defamation except online.
Congratulations, you've just told every mainstream media tabloid they now have no right to exist. Way to shut off one of your heroes' main income streams, Jhon.
Yet it wouldn't be found by anyone beyond the original audience.
And Techdirt is regularly cited as having no reach or readership. You yourself claim that Mike Masnick is never taken seriously by mainstream media. Yet apparently several Anonymous Cowards making fun of another unnamed Anonymous Coward is so damning you have to order a SWAT team to sweep the globe for your enemies?
I'm talking about situations where the original publisher couldn't be sued, but the defamation lingers.
Why is this "doctor gets defamed by Russian bots" scenario such an obsession for you? You honestly think people looking for a suitable local doctor are going to believe throwaway Russian bots named "Ivankovich Sascha Heavyweaponsguy"? Is everyone in your universe inbred with an intelligence quotient to match?
Actually that might explain a LOT...
It is their own fault for believing and spreading what they find in Google, but they were also helped in making this mistake.
Sue any road builders lately, Jhon? What about knife manufacturers? Power plants?
Herrick lost to Grindr, get over it.
Re: Re:
Why is this "doctor gets defamed by Russian bots" scenario such an obsession for you? You honestly think people looking for a suitable local doctor are going to believe throwaway Russian bots named "Ivankovich Sascha Heavyweaponsguy"?
People here have even commented that they trust online reputations, as do many, particularly the average internet user, who won't know if it's a bot or someone being paid, or someone with an axe to grind, and yes, many people DO believe this. If people already don't like someone they're ripe to be misled this way, and wind up sued for something they shouldn't have done, but for which they were set up.
Even one individual's reputation should be more important than any search engine's. Some have committed suicide over bullying or lies and 230 enabled the mobs to form online.
Other countries don't recognize 230 for a reason.
Re: Re: Re:
People here have even commented that they trust online reputations, as do many
And people say the same thing about politicians. If your point is to say that online reviews and discussions deserve the same scrutiny as face-to-face talking, instant messaging, telephone conversations... where was it said that they don't?
particularly the average internet user, who won't know if it's a bot or someone being paid, or someone with an axe to grind
Again, gossip and tabloids exist. Your solution is nobody says anything at all. Have fun enforcing that.
If people already don't like someone they're ripe to be misled this way, and wind up sued for something they shouldn't have done, but for which they were set up.
Oh, so it's to protect people from expressing opinions that make another person look bad. Because the old "I'm fucking you up for your own good" argument hasn't already been done to death for the sake of hidden agendas, obviously.
Pull the other one.
Even one individual's reputation should be more important than any search engine's
I like how you didn't answer the previous post's questions. Road manufacturers, mobile phone companies, supermarkets are not suddenly liable because someone in another country made you look bad.
Some have committed suicide over bullying or lies
Nobody disputes that. Care to point out where Section 230 was listed as being instrumental to the suicide, or are you going to file a suit against kitchen knife manufacturers for facilitating murder? (To be fair, Tero Pulkinnen's already got the copyright on that moronic angle.)
Other countries don't recognize 230 for a reason
Other countries don't recognize gun ownership. Some don't even recognize gay marriage. Why the sudden concern about what other countries' laws are? Didn't you literally say "Australian law is always comparative with American law" just yesterday? If Australia lacks Section 230, shouldn't the solution be for Australia to adopt Section 230 instead of asking the US to trash theirs?
You're just flat out not very good at this "thinking things through" business are you?
Re: Re: Re:
Even one individual's reputation should be more important than any search engine's.
Wrong, wrong, wrong.
An individual's reputation, as I've stated many times before, is based on their own personal conduct. People telling lies about them only makes other people curious about them and causes them to check them out. When liars told lies about me, people found a gobby Irishwoman, not a mad criminal, when they checked me out.
Some have committed suicide over bullying or lies and 230 enabled the mobs to form online
230 did nothing of the sort. Nasty, evil weevils of the kind who bully other kids to suicide are solely and completely responsible for posting and spreading the nastiness. The unfortunate victim's act is on the victim and the people around him or her. I don't blame suicides for their own actions because I understand the desperation that drives them to it.
Mobs form with or without 230 being in place, that being the nature of mobs and mobbing in general.
Um...
Hello, Mr Masnick. :)
There's something I'm just not sure I understand about TechDirt.
It seems to be the case that every time some politician or journalist says something that's clearly deliberate nonsense, TD bends over backwards to pretend that it must be a mistake or an accident, or that the person concerned has been misled by third-party falsehoods and somehow failed to realise it.
Even when said person has been repeatedly told that they're wrong, in excruciating detail, by numerous independent parties, we're apparently still expected by TechDirt to believe that those politicians and reporters - who almost always continue to ignore what they've been told and repeat their earlier false claims - are somehow just having a bad day / month / year / decade, when it comes to what is really the very easy art of duly-diligent basic fact-checking.
It can't possibly be a case of deliberate lying by those concerned, oh no. That never happens.
Moreover, we're to assume the lies-that-shall-not-be-named can't possibly ever be the product of bribery or corruption or the fact that the entire damn publication is wholly-owned by a giant media conglomerate with a legislative agenda and industrial talking points to sell. Nope, nu-uh, not a chance.
For fuck's sake.
Look, I can appreciate that TD wants to avoid having to deal with harassment and litigation by the rich and powerful. If the Indian email fraudster can bring fear enough to TD's door to keep you awake at night, I can only imagine what the prospect of facing off against Condé Nast, Vox Media or Jeff Bezos' lawyers in court must feel like. As someone who loves TD enough to back you with a little of my hard-earned cash back then, I really don't want to see you walk into more fire than you can handle, but...
... Mr Masnick, where in God's name do you draw the line?
Pelosi, Wired, WaPo, Vox - and I don't know how many others - are plainly pushing industrial propaganda, with the very clear intention of undermining the entire premise of Section 230. Multi-pronged US legislation to that effect will no doubt follow in due course.
We saw very similar things in the recent legislative campaigns here in Europe, where the publishers' lobbyists and talking heads all went out of their way to emphasise their position that sites like YouTube, FB and Google were really publishers and not platforms.
This is very evidently a major multi-industrial Talking Point and - to judge by our example in the EU - one that plays all too well with uninformed or wilfully-blind legislators. It's a good lie, well sold. There's every chance it will win new, hostile US legislation, designed to cause great harm to the public good.
You can see it, I'm sure, they know it, undoubtedly and even a Joe Cunt like me can join the dots, with little enough effort.
So why be all coy about it, hmm? Why not call it out for what it is? Why praise all our enemies with such faint and weak-hearted damnation? Surely even a US judge won't break a sweat dismissing any kind of SWAT case over this in shortest order.
No offense intended, Mr Masnick, but...
... for big, black and bald fuck's sake, Mr Masnick, why continue to be an industrial stenographer, when you could be doing journalism? Aren't you better than this?
What you say is often less important than how you say it
As someone who has called them out on that sort of thing multiple times in the past, and who has likewise been frustrated by their, as you put it 'playing coy' at times, if I had to guess I'd say it's probably a tactical choice.
By avoiding calling people out on blatant lies unless it can be conclusively proven that they are lying, they also keep the subjects of their articles from being able to just dismiss anything they say as hyperbole and overly emotional, which would be counter-productive. If they let their emotions lead them to making flat-out 'they're lying' claims, or worse, try to ascribe motive, then it would be all too easy for the other side to simply say 'that's not my motive, and as you're clearly letting your emotions overrule reason then nothing you said is worth paying attention to,' even if TD was right.
(I personally don't bother to pull my punches nearly as much and will happily call someone out on dishonesty and/or what seems to be obvious motives as I'm merely voicing my opinions on what's being written up about, and as such I don't worry overly much about being held to the same standard or undercutting the article with emotional statements of frustration/anger/disgust, given I try to make clear why a given comment might lean that way.)
By instead presenting things in a calm, concise manner('here's where they're wrong, here's evidence demonstrating they're wrong...') they make it much harder to argue against their points, unless the other side wants to try to pull the emotional/ascribing motives argument themselves, which as noted above drastically undercuts them and leaves them open to being dismissed.
This of course is pure speculation on my part, and I could be completely wrong, if Mike and/or one of the other TD writers were to chime in and explain why they write as they do that would be great.
Re: Um...
Pelosi, Wired, WaPo, Vox - and I don't know how many others - are plainly pushing industrial propaganda, with the very clear intention of undermining the entire premise of Section 230. Multi-pronged US legislation to that effect will no doubt follow in due course.
Because I don't think that's true for any of them.
I think it's true for Ted Cruz -- he's heard this a bunch and continues to repeat the false descriptions. The others did not. I've spoken with the Wired reporters and they explained how they made the mistake. On Twitter, the Vox reporter has admitted she made a mistake and is going to speak to Jonathan Zittrain at Harvard to get up to speed. So I doubt she'll make the same mistake again. McArdle is often thoughtful, and I think she just messed up. Pelosi... I'm not sure about, but seeing as this was the first time she made this mistake, we'll see.
If any of the above continue to repeat these false claims, THEN I'll likely call them out for knowingly pushing propaganda and lies. But most of the evidence on all of these suggests otherwise. I think they've heard the false spin and bought it without investigating. It's sloppy, not deliberate.
I have no problem calling people out when I believe they're deliberately lying. But I wait for more evidence than a single mistake.
Re: Re: Um...
Ignorance rather than malice, makes sense, guess I was overthinking things above.
Re: Re: Um...
Fair enough, so, Mr Masnick. :)
People here have even commented that they trust online reputations, as do many
And people say the same thing about politicians. If your point is to say that online reviews and discussions deserve the same scrutiny as face-to-face talking, instant messaging, telephone conversations... where was it said that they don't?
People act out based on what they read online all the time. Lies harm those lied about, and 230 protects the liars (or those who distribute lies) in certain situations.
particularly the average internet user, who won't know if it's a bot or someone being paid, or someone with an axe to grind
Again, gossip and tabloids exist. Your solution is nobody says anything at all. Have fun enforcing that.
It's called distributor liability, a separate harm from publisher liability. What the publisher does is damaging, as is what the distributor does. Without 230, distributors would be liable for defamation once put on notice. This would not lead to mass censorship, just common sense in not leaving defamatory content online.
If people already don't like someone they're ripe to be misled this way, and wind up sued for something they shouldn't have done, but for which they were set up.
Oh, so it's to protect people from expressing opinions that make another person look bad. Because the old "I'm fucking you up for your own good" argument hasn't already been done to death for the sake of hidden agendas, obviously.
More like take the example of the woman who was called a hooker five years ago and can no longer sue any search engine that archives it. She turns her waiter down for a date. The waiter Googles her off her CC then says "I'd have thought a HOOKER wouldn't be so choosy." The waiter is not protected under Section 230. Now she applies for a job as a teacher. At the interview, the school administrator googles her. "Sorry, we can't hire an EX-HOOKER," she (the administrator) says, in a way that someone overhears it. She is now liable for slander. In a perfect world, no one would believe these lies, but people do. Those who believe lies and "reiterate" them are original publishers who are separately liable, and not protected by Section 230.
Even one individual's reputation should be more important than any search engine's
I like how you didn't answer the previous post's questions. Road manufacturers, mobile phone companies, supermarkets are not suddenly liable because someone in another country made you look bad.
None of them have the power to remove defamatory content from the web. A supermarket would be liable if it sold a publication it had been put on notice about containing libelous content, as would a bookstore.
Some have committed suicide over bullying or lies
Nobody disputes that. Care to point out where Section 230 was listed as being instrumental to the suicide, or are you going to file a suit against kitchen knife manufacturers for facilitating murder? (To be fair, Tero Pulkinnen's already got the copyright on that moronic angle.)
Without 230, the platforms would be liable under numerous theories of negligence etc, wouldn't allow bullying, and the suicides related to cyberbullying wouldn't have occurred.
Other countries don't recognize 230 for a reason
Other countries don't recognize gun ownership. Some don't even recognize gay marriage. Why the sudden concern about what other countries' laws are? Didn't you literally say "Australian law is always comparative with American law" just yesterday? If Australia lacks Section 230, shouldn't the solution be for Australia to adopt Section 230 instead of asking the US to trash theirs?
Depends on the law. The point is that Australia recognizes that a platform can inflict harm, as does the US, only the US immunizes that harm so that those harmed cannot sue.
You're just flat out not very good at this "thinking things through" business are you?
If you define that as agreeing with you...most of the world rejects platform immunity. The US is unique in that regard.
Technically, neither does Google, unless said content is hosted on their servers. A link to defamatory content on the Google search engine is what it is: a factual record of the URL where you can find a certain page on a certain website. Google can be asked to remove that link, but that does not address Bing, Yahoo, DuckDuckGo, and every other search engine on the market — nor does it explain how any search engine, Google included, can be ordered to remove the original defamatory content if it exists on a third-party server/service.
The whole point of referencing “[r]oad manufacturers, mobile phone companies, [and] supermarkets” was to show how they compare to Google: They are not responsible for someone else’s words and deeds, even if they are informed of any bare-minimum third-hand participation in them. The blame should be placed where it truly lies — on the person[s] who spoke or printed the defamatory content — instead of on the deepest wallet.
Re:
Exactly. Google can't remove something from the web, only from its own records. That works for people dumb enough to think that Google is the web, but it doesn't work in reality. Removing that bad Yelp review from Google doesn't help if people look for your business on Yelp directly, nor does it help if someone sends them the link on Facebook, or takes a screenshot and emails it or shows them in person. Only a fool thinks that removing a Google result is a panacea; at best, it moderately reduces the likelihood that people will see what is available 100 other ways. But it comes at a massive cost that these people wish to ignore.
If you're comforted by that lie, so be it, but most would rather not see the useless collateral damage that removing Section 230 would guarantee just to keep you feeling warm and fuzzy with your fiction.
Re: Re:
Not only that, this is the same asshole who whined about sex workers minimizing the role of the honest working male by wooing assholes to fuck them for money. You'd think that outing a hooker would be a good thing to shame sex workers, which is what he used to justify the existence of SESTA and FOSTA.
But because an ex-hooker might possibly be disadvantaged by an asshole (who, for some reason, is not considered reprehensible for trying to pick up ladies on the job), suddenly hooker rights matter to him.
What a scumbag!
Re:
You have this strange obsession with hookers, don't you, Herrick?
Re:
Here's a point for you to mull on: while, classically, "equity does not enjoin a libel", in today's era where most speakers (whether libelous or not) are practically judgement-proof (and thus undeterred by the prospect of damages), perhaps it is possible to change that rule in a fashion that still comports with the First Amendment...
Re:
More like take the example of the woman who was called a hooker five years ago and can no longer sue any search engine that archives it. She turns her waiter down for a date. The waiter Googles her off her CC then says "I'd have thought a HOOKER wouldn't be so choosy." The waiter is not protected under Section 230. Now she applies for a job as a teacher. At the interview, the school administrator googles her. "Sorry, we can't hire an EX-HOOKER," she (the administrator) says, in a way that someone overhears it. She is now liable for slander. In a perfect world, no one would believe these lies, but people do. Those who believe lies and "reiterate" them are original publishers who are separately liable, and not protected by Section 230.
Okay... that doesn't answer the question. The woman in your example is entirely entitled to sue the waiter for being a predatory asshole. Same goes for the school administrator whose examination of character references seems to be inexplicably based on researching ancient shitposting. You even say Section 230 doesn't protect either of those two.
Here's the thing: why does the law need to pre-emptively protect idiots who believe and repeat anything they read? You even say they're "separately liable". Nothing in Section 230 prevents the woman from suing those who wrong her. In fact, you directly contradict your first reply in this post:
230 protects the liars (or those who distribute lies) in certain situations
Look, if you're going to claim that 230 protects liars, bringing up an example that doesn't include a liar or how he's protected by Section 230 doesn't do jack shit. Oh, but that's right, you've already said on multiple occasions you don't care if anyone believes you or not. Thank goodness for that, I can go back to not believing you and do so completely guilt-free.
Without 230, the platforms would be liable under numerous theories of negligence etc, wouldn't allow bullying, and the suicides related to cyberbullying wouldn't have occurred.
Hook, line and sinker. It's funny how none of these theories have ever made it to, or survived, a court case. Lori Drew comes to mind. It's almost like all these "theories" you rely on are nothing more than the kind of self-masturbatory fantasies peddled at "I wrote a book so now you can fund my retirement" self-help motivational seminars.
Oops, I'm sorry, was that a little too close to home for you John boi? Were those fighting words? You going to subpoena my backside next?
Depends on the law
Therefore not always, indicating that you were bullshitting. I'm shivering in my shoes from the revelation, here.
"The Washington Post, of all newspapers, should know better than to misrepresent Section 230."
Ha, ha, ha. The MSM (translation: purveyors of mainstream narrative or CIA talking points) misrepresents shit all the damn time.
Re:
I'm always intrigued as to why people like you parrot the same stupid talking points while pretending you're above them, and as to which sources are both non-mainstream and reliable in your mind.
Re: Re:
Well, given that one of his main criticisms of Masnick is that he'd never be taken seriously by mainstream media, he effectively just complimented Masnick.
Genius move, there.
Re: Re: Re:
I ask the question because usually these people will retort with something along the lines of Fox. Then, when asked why Fox should be trusted, they talk about high rating numbers. Then, sometimes it's fun to try nudging them to see if they notice what's wrong with making both claims simultaneously.
The politicians are using neutrality as a condition of keeping Section 230, not saying that 230 requires neutrality. So what can you say?
Re:
'Have fun with implementing a Fairness Doctrine to the internet' and/or 'Does that apply to all sites, or just ones that keep giving your buddies the boot?'
Re: Re:
Will this Fairness Doctrine be similar to our Rule of Law in that there will be two sets of rules?