Appeals Court Issues Strong CDA 230 Ruling, But It Will Be Misleadingly Quoted By Those Misrepresenting CDA 230
from the mostly-good,-but-a-bit-of-bad dept
Last Friday, the DC Circuit appeals court issued a mostly good and mostly straightforward ruling applying Section 230 of the Communications Decency Act (CDA 230) in a perfectly expected way. However, the case is notable on a few grounds, partly because it clarifies a few key aspects of CDA 230 (which is good), and partly because of some sloppy language that is almost certainly going to be misquoted and misrepresented by those who (incorrectly) keep insisting that CDA 230 requires "neutrality" by the platform in order to retain the protections of the law.
Let's just start by highlighting that there is no "neutrality" rule in CDA 230 -- and (importantly) the opposite is actually true. Not only does the law not require neutrality, it explicitly states that its goal is for there to be more content moderation. The law notes that it is designed:
to remove disincentives for the development and utilization of blocking and filtering technologies that empower parents to restrict their children’s access to objectionable or inappropriate online material
In short, the law was designed to encourage moderation, which, by definition, cannot be "neutral."
Now, onto the case. It involves a bunch of locksmiths who claim that "scam locksmiths" are pretending to be local in areas where they are not, and that the various search engines (Google, Microsoft, and Yahoo are the defendants here) are putting those fake locksmiths in their search results, meaning that the real locksmiths have to spend extra on advertising to get noticed above the scam locksmiths.
You might think that if these legit locksmiths have been wronged by anyone, it's the scam locksmiths, but because everyone wants to blame platforms, they've sued the platforms instead -- claiming antitrust violations, false advertising, and a conspiracy in restraint of trade. The lower court, and now the appeals court, easily found that Section 230 protects the search engines from these claims, as the content from the scam locksmiths comes from the scam locksmiths, not from the platforms.
The attempt to get around CDA 230 focuses mainly on one aspect of how the local search services work: most of the services take the address of the local business and place a "pin" on a map to show where the business is physically located. The locksmiths argue that creating this map and pin involves content that the search engines create, and therefore it is not immune under CDA 230. The appeals court says... that's not how it works, since the information is clearly "derived" directly from the third parties:
The first question we must address is whether the defendants’ translation of information that comes from the scam locksmiths’ webpages -- in particular, exact street addresses -- into map pinpoints takes the defendants beyond the scope of § 230 immunity. In considering this question, it is helpful to begin with the simple scenario in which a search engine receives GPS data from a user’s device and converts that information into a map pinpoint showing the user’s geographic location. The decision to present this third-party data in a particular format -- a map -- does not constitute the “creation” or “development” of information for purposes of § 230(f)(3). The underlying information is entirely provided by the third party, and the choice of presentation does not itself convert the search engine into an information content provider. Indeed, were the display of this kind of information not immunized, nothing would be: every representation by a search engine of another party’s information requires the translation of a digital transmission into textual or pictorial form. Although the plaintiffs resisted this conclusion in their briefs, see Locksmiths’ Reply Br. 3 (declaring that the “location of the inquiring consumer . . . is determined entirely by the search engines”), they acknowledged at oral argument that a search engine has immunity if all it does is translate a user’s geolocation into map form, see Recording of Oral Arg. at 12:07-12:10.
With this concession, it is difficult to draw any principled distinction between that translation and the translation of exact street addresses from scam-locksmith websites into map pinpoints. At oral argument, the plaintiffs could offer no distinction, and we see none. In both instances, data is collected from a third party and re-presented in a different format. At best, the plaintiffs suggested that a line could be drawn between the placement of “good” and “bad” locksmith information onto the defendants’ maps. See id. at 12:43-12:58 (accepting that, “to the extent that the search engine simply depicts the exact information they obtained from the good locksmith and the consumer on a map, that appears to be covered by the [Act]”). But that line is untenable because, as discussed above, Congress has immunized the re-publication of even false information.
That's a nice, clean ruling on what should be an obvious point, and having such clean language could be useful for citations in future cases. It is notable (and useful) that the court clearly states: "Congress has immunized the re-publication of even false information." Other courts have made this clear, but having it in such a compact, highly quotable form is certainly handy.
There are a few other attempts to get around CDA 230 that all fail -- including the argument that, because the "false advertising" claim arises under the Lanham Act (which is often associated with trademark law) and CDA 230 explicitly excludes "intellectual property" law, the claim should escape immunity. But the Lanham Act hook doesn't magically make false advertising claims "intellectual property," nor does it exclude them from CDA 230 protections.
But, as noted up top, there is something in the ruling that could be problematic going forward concerning the still very incorrect argument that CDA 230 requires that platforms be "neutral." The locksmiths' lawyers argued that even if the above scenario (putting a pin on a map) didn't make the search engines "content creators," perhaps they were content creators when they effectively made up the location. In short: when these (and some other) local search engines don't know the actual exact location of a business, they might put in what is effectively a guesstimate, usually placing it in a central location of an expected range. As the court explains:
The plaintiffs describe a situation in which the defendants create a map pinpoint based on a scam locksmith’s website that says the locksmith “provides service in the Washington, D.C. metropolitan area” and “lists a phone number with a ‘202’ area code.” Locksmiths’ Br. 8; see also Locksmiths’ Reply Br. 4-5. According to the plaintiffs, the defendants’ search engines use this information to “arbitrarily” assign a map location within the geographic scope indicated by the third party.
Legally, that does represent a slightly different question -- and (if you squint) you can kinda see how someone could maybe, possibly, argue that if the local search engines take that generalized info and create a pin that appears specific to end users, they have somehow "created" that content. But the court (correctly, in my opinion) says "nope": since that pin is still derived from the information provided by a third party, Section 230 still protects the search engines. This is good and right.
The problem is that the court went a bit overboard with the word "neutral" in describing this, using it in a very different way than most people mean when they say "neutral" (and in a different way than previous court rulings -- including those cited in the case -- have used it):
We conclude that these translations are also protected. First, as the plaintiffs do not dispute, the location of the map pinpoint is derived from scam-locksmith information: its location is constrained by the underlying third-party information. In this sense, the defendants are publishing “information provided by another information content provider.” Cf. Kimzey v. Yelp!, Inc., 836 F.3d 1263, 1270 (9th Cir. 2016) (holding that Yelp’s star rating system, which is based on receiving customer service ratings from third parties and “reduc[ing] this information into a single, aggregate metric” of one to five stars could not be “anything other than user-generated data”). It is true that the location algorithm is not completely constrained, but that is merely a consequence of a website design that portrays all search results pictorially, with the maximum precision possible from third-party content of varying precision. Cf. Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1125 (9th Cir. 2003) (“Without standardized, easily encoded answers, [Matchmaker.com] might not be able to offer these services and certainly not to the same degree.”).
Second, and also key, the defendants’ translation of third-party information into map pinpoints does not convert them into “information content providers” because defendants use a neutral algorithm to make that translation. We have previously held that “a website does not create or develop content when it merely provides a neutral means by which third parties can post information of their own independent choosing online.” Klayman, 753 F.3d at 1358; accord Bennett, 882 F.3d at 1167; see Kimzey, 836 F.3d at 1270 (holding that Yelp’s “star-rating system is best characterized as the kind of neutral tool[] operating on voluntary inputs that . . . [does] not amount to content development or creation” (internal quotation marks omitted) (citing Klayman, 753 F.3d at 1358)). And the Sixth Circuit has held that the “automated editorial act[s]” of search engines are generally immunized under the Act. O’Kroley v. Fastcase, Inc., 831 F.3d 352, 355 (6th Cir. 2016).
Here, the defendants use automated algorithms to convert third-party indicia of location into pictorial form. See supra note 4. Those algorithms are “neutral means” that do not distinguish between legitimate and scam locksmiths in the translation process. The plaintiffs’ amended complaint effectively acknowledges that the defendants’ algorithms operate in this fashion: it alleges that the words and numbers the scam locksmiths use to give the appearance of locality have “tricked Google” into placing the pinpoints in the geographic regions that the scam locksmiths desire. Am. Compl. ¶ 61B. To recognize that Google has been “tricked” is to acknowledge that its algorithm neutrally translates both legitimate and scam information in the same manner. Because the defendants employ a “neutral means” and an “automated editorial act” to convert third-party location and area-code information into map pinpoints, those pinpoints come within the protection of § 230.
See all those "neutral means" lines? What the court means by "neutral" is really automated -- not designed to check the truth or falsity of the information. It does not mean "unbiased," because any algorithm that makes decisions is inherently and absolutely "biased" towards choosing what it determines is the "best" solution -- in this case, it is "biased" towards approximating where to put the pin.
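To make that distinction concrete, here is a minimal, purely hypothetical sketch of what "automated and truth-indifferent" looks like in practice. This is not the defendants' actual code -- the addresses, coordinates, and the place_pin function are all invented for illustration -- but it shows the sense in which such a geocoding step is "neutral": a legitimate listing and a scam listing flow through the exact same code path, and the only thing the algorithm decides is how precisely it can place the pin.

# Illustrative sketch only -- a toy model of a "neutral means," not any search
# engine's real algorithm. All names and coordinates are hypothetical.
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical lookup tables standing in for real geocoding data.
AREA_CODE_CENTROIDS = {
    "202": (38.9047, -77.0164),  # rough center of Washington, D.C.
}
EXACT_ADDRESSES = {
    "123 Main St NW, Washington, DC": (38.8951, -77.0367),
}

@dataclass
class Listing:
    name: str
    address: Optional[str]    # exact street address, if the site provides one
    area_code: Optional[str]  # phone area code scraped from the site

def place_pin(listing: Listing) -> Optional[Tuple[float, float]]:
    """Translate third-party location indicia into a map pinpoint.

    The function never asks whether the listing is a "real" or "scam"
    locksmith; it simply uses the most precise data available, falling
    back to the centroid of the region implied by the area code.
    """
    if listing.address in EXACT_ADDRESSES:
        return EXACT_ADDRESSES[listing.address]        # precise third-party data
    if listing.area_code in AREA_CODE_CENTROIDS:
        return AREA_CODE_CENTROIDS[listing.area_code]  # approximate, derived data
    return None                                        # nothing to place

# Both listings flow through the identical code path -- "neutral" in the
# court's sense: automated and indifferent to truth or falsity.
legit = Listing("Local Locksmith", "123 Main St NW, Washington, DC", "202")
scam = Listing("Totally Local Locksmith", None, "202")
print(place_pin(legit))  # exact pin
print(place_pin(scam))   # pin at the D.C.-area centroid

The point of the sketch is that every pin is still derived entirely from whatever location data the third party supplied; the algorithm never evaluates whether that data is honest, which is all the court's "neutral means" language is getting at.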
The court is not, in any way, saying that a platform need be "neutral" in how it applies moderation choices, but I would bet a fair bit of money that many of the trolls (and potentially grandstanding politicians) will use this part of the court ruling to pretend 230 does require "neutrality." The only thing I'm not sure about is how quickly this line will be cited in a bogus lawsuit, but I wouldn't expect it to take very long.
For what it's worth, after I finished writing this, I saw that Professor Eric Goldman had also written up his analysis of the case. It is pretty similar, includes a few other key points as well, and also expects the "neutral means" line to be abused:
I can see Sen. Cruz seizing on an opinion like this to say that neutrality indeed is a prerequisite of Section 230. That would be less of a lie than his current claim that Section 230 only protects “neutral public forums,” but only slightly. The “neutrality” required in this case relates only to the balance between legal and illegal content. Still, even when defined narrowly and precisely like in the Daniel v. Armslist case, the term “neutrality” is far more likely to mislead than help judges. In contrast, the opinion cites the O’Kroley case for the proposition that Section 230 protects “automated editorial act[s],” and that phrasing (though still odd) is much better than the term “neutrality.”
The overall ruling is good and clear -- with just this one regrettable bit of language.
Filed Under: cda 230, intermediary liability, local search, locksmiths, neutrality, scam locksmiths, section 230
Companies: google, microsoft, yahoo
Reader Comments
"Congress has immunized the re-publication of even false information."
Exactly: you can't trust anything you read online, including advertising (or reviews), which is why it's wise to ignore ALL online advertising.
Newspapers are held to a different standard, as are websites in other countries.
It's up to the public to demand truth from the internet, and so far, it has not.
As for there being no neutrality requirement, the moderation language was based mostly on pornography, not lies or political bias. Even though there is no explicit "neutrality" requirement in 230 at present, that doesn't mean Congress can't now choose to impose one. Laws are changed all the time.
Those who claim 230 and neutrality should be linked are making an argument based on principle, not law, and arguing that the principle should become the law.
"Congress has immunized the re-publication of even false information."
"But...but...I read it in GOOGLE! Why am I being sued and not the person who put it there?"
Because the person who is choosing to sue you (they are not required to do so) wants money and you're an easy target?
That's a misreading. It says
That talks about user control, not site-operator control. It doesn't support the view that sites should moderate, but leans toward supporting the opposite: people should use software to remove stuff they don't want to see. At best it's arguing for a "free market" between unmoderated and centrally-moderated sites.
Saying that people should be free to do something is not the same as saying they should do it.
Re:
You might be arguing that it should become the law. Those quoted in the article have clearly said on multiple occasions that it is already the law. You can't pretend that they somehow meant "this is what the law should be". There is no way to twist their statements to mean that without breaking the English language.
As for the idea that it should become the law, aside from the fact that that would be pretty unconstitutional from my point of view, it would also be an impossible law to obey. Expression is an act of opinion. Even stating facts is expressing your opinion of what is factual. Never mind the issue that anyone will think that any expression they disagree with is not neutral purely because they don't agree with what was said.
If you honestly think that platforms should be blamed for the acts of their users then I want a law that throws the President in prison for every crime committed in the US. That idea is no dumber than what you're asking for.
"keep insisting that CDA 230 requires "neutrality" by the platform in order to retain the protections of the law. "
Those who demand this actually think a neutral position is possible?
How can one claim they have every voice represented on every item in every article/post/ad?
Re:
Not at all, they want their views promoted by third parties, while they shout down opposing views.
Re:
That's a misreading. It says
You are misrepresenting 230. Your opinion isn't supported by caselaw, the stated intentions of the law, or the actual reading of the statute.
Re: Re:
The statement in the article was that the law was designed to encourage moderation. It may have been, but the given quote doesn't support that. Caselaw about 230 has nothing to do with it, because that came after the law was designed. As for "the stated intentions", can you link to a relevant statement of intent?
Re:
Of course it's possible. You can let anyone provide an article/post/ad under the same terms (pricing etc). That produces a neutral site, but probably not a useful site.
Re: Re: Re:
The law was intended to allow companies to facilitate moderation without fear of liability. At the time, that meant blocking software that a parent could purchase, but it also referred to curated services like Prodigy vs. an open pipe like modern ISPs. A parent might choose Prodigy for their kids over an open pipe because they like how Prodigy chooses to moderate, and 230 would allow Prodigy to do so without the risk of being liable as they were in Stratton.
Fast forward to now, and platforms like Facebook can moderate content like Prodigy could, thanks to 230.
For a statement of intent, check https://www.congress.gov/congressional-record/1995/08/04/house-section/article/H8460-1. The relevant section starts with "amendment offered by mr. cox of California"
Re: Re: Re: Re:
Thanks for the link. After reading Cox's argument, I still don't see that he wanted to encourage moderation per se, as a government policy. He wanted to allow it for sure, and that was clear from Mike's quotes; but any encouragement from him would be as a private citizen, a parent acting in a free market (choosing between "family-friendly" areas and others). There's quite a bit of libertarian subtext really.
I want you to think about this hypothetical. I want you to consider every ramification of it. Then I want you to answer the question I pose at the end.
A privately-owned political forum bans the promotion of specific ideologies — one of which is, say, White supremacy. One day, the admins of that forum hear how Congress has altered 230 to require “neutrality” in content moderation. This change means a platform cannot moderate any legally protected speech. (That change is the situation for which you appear to advocate. If I am wrong about that, blame your lack of clarity on the matter.)
Promotion of White supremacy is legal in the United States. So how can those admins ban speech they do not want hosted on their forum if the altered CDA 230 forces them into hosting that speech?
Allowing moderation to even happen in the first place is encouraging it.
Re: Re:
In that case their appeal for neutrality is bs
Re: Re: Re:
Dun-dun duuuunnnnnnnnnn!
Re:
Congress can attempt to impose lots of things, but the Bill of Rights would get in the way of any such imposition. Who, after all, is to say what is "neutral"?
CDA 230 is there in the first place because of a horrendously-bad court decision.
It is not inconceivable that, even in the absence of CDA 230, the courts (led by a Supreme Court supportive of speech) would have eventually recognized that free speech implies that people who are not speaking (but merely selling amplifiers) can't justly be accused of the speech others make.
Re: Re: Re: Re: Re:
Cox said (emphasis mine):
That doesn't sound like "I'll allow it" to me. It sounds like "We want them to do it."
Fast forward to now, and platforms like Facebook can moderate content like Prodigy could, thanks to 230.
When moderation can influence the outcome of elections, Congress can decide to tie 230 immunity to political neutrality.
[ link to this | view in thread ]
Re:
There is no such thing.
Re: Re: Re: Re:
drum roll please
And until they have a Supreme Court willing to overlook the First Amendment, such a decision means nothing.
Re: Re:
Having been in the Usenet trenches, I would like to be able to choose to visit useful sites instead of ones forced to host spam and Nazis.
Re:
When moderation can influence the outcome of elections, Congress can decide to tie 230 immunity to political neutrality.
Ya got my "LOL" vote AC!
Good thing the rest of us have a body of caselaw (common law) and the Bill of Rights that say otherwise.
Re:
Television and print news outlets can also influence elections. Are you suggesting that Congress should pass a law forcing their coverage to be neutral, too? Fox sure as hell wouldn't like that.
Pandora called, she wants her box back
Why yes, Congress could pass a blatantly unconstitutional law like that; however, assuming the Supreme Court had any respect for the document, it would quickly be struck down, and on the off chance that it didn't, I can all but guarantee that it would not go the way you think it would.
If the choice is between 'no political discussion/content is allowed, including the good stuff', and 'all political discussion/content is required to be allowed, even stuff the company/platform strongly disagrees with', what makes you think they wouldn't block all of it, if only to avoid having their platform filled with deplorable individuals?
In addition, if the ability to influence election results is grounds for enforcing political neutrality, well, hope you're not a fan of any platform/company that isn't politically neutral(like, oh say, Trump's cheerleading squad on Fox...), because that can of worms you just opened will swallow them right up unless you hypocritically only want 'political neutrality' enforced against platforms you don't agree with.
Re: Re:
'... and now to balance out the gushing praise we just heaped on Trump, we will now have a thirty minute segment talking about how bad he is, and/or how great and more qualified his opponents are.'
Oh yeah, I'm sure that'd go over great, though strangely enough I suspect that when it comes to enforced political neutrality there would be nary a mention of the likes of Fox for some strange reason...
Re: Re: Re:
Oh yeah, I'm sure that'd go over great, though strangely enough I suspect that when it comes to enforced political neutrality there would be nary a mention of the likes of Fox for some strange reason...
Well obviously anyone pushing for this kind of viewpoint neutrality is a snowflake that melts when someone criticizes El Cheetos. Therefore their solution is to let the White House determine what is fair and balanced reporting!
Re: Re: Re: Re: Re: Re:
I see your point, but don't you think that "We want them to help us do it" is a more appropriate reading? Quoting: "to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in"
That may seem like hairsplitting but I find the distinction important and feel like he's going out of his way to make it. When Facebook etc. block content for us—applying their policy for the whole world—we lose control. Or rather, our control is limited to selecting the site whose moderation policy aligns most closely with our own. In that sense, perhaps he did intend sites to moderate differently (i.e. he intended them to moderate) to cater to different groups. Still, he hints at a grander vision that hasn't come to pass in any significant way.
'We have always been against that sort of presidential control!'
Therefore their solution is to let the White House determine what is fair and balanced reporting!
... Right until the other party is in power, at which point even the suggestion that the White House should make that sort of determination would be decried as tyrannical and utterly unconstitutional.
Not handed down by God on mile-high titanium blocks. Changeable.
First, let's all look for the "Decency" in "Communications Decency Act"! Where the HELL did that go? ... Oh, right: long since ruled UN-Constitutional! Any reference to the intent of legislators already proven FLATLY WRONG on the major point is STUPID! Most of the Act is already THROWN OUT.
Even if Masnick were right with his Clintonian / Red Queen assertion that "neutral" doesn't mean neutral when he doesn't want it to, this is not the last decision and can be changed at any time by The People's representatives.
"Moderation" by definition is NOT and CANNOT be partisan. Period. If not objective and for good cause with The Public as beneficiary under common law -- which is what the section actually states with "Good Samaritan" title and "in good faith" requirement -- then it's not allowed.
By attacking the word "neutral" Masnick is trying to meld "moderation" and "censorship" to empower corporations to control ALL speech.
Masnick is a partisan for corporations, and though he talks up 1A "free speech," that's just for cover: on this crucial point for corporate power he's against the clear Constitutional rights of "natural" persons. He's going to slant what he writes to play up the government-conferred power of corporations to control what YOU write.
His duplicity on this Section is shown by the fact that, when arguing with me, he simply DELETED the "in good faith" requirement! -- And then he blows it off as not important:
https://www.techdirt.com/articles/20190201/00025041506/us-newspapers-now-salivating-over-bringing-google-snippet-tax-stateside.shtml#c530
Now, WHERE did Masnick get that exact text other than by himself manually deleting characters? -- Go ahead. Search teh internets with his precious Google to find that exact phrase. I'll wait. ... It appears nowhere else, which means that Masnick deliberately falsified the very law under discussion, trying to keep me from pointing out that for Section 230 to be a valid defense of hosts, they must act "in good faith" to The Public, NOT as partisans discriminating against those they decide are foes.
Masnick deliberately falsified when supposedly quoting, re-defines a common word, and holds a corporatist view against YOUR interests! And you clowns still believe he's right? Sheesh.
-m-a-s-n-i-c-k-s -h-a-t-e -r-u-l-e-s -e-s-p -h-o-r-i-z-o-n-t-a-l-s
PS: Intentionally late because who cares? This tiny little site has almost no influence, not least because WRONG! There's no one here to convince. All you FEW remaining kids will do is gainsay, and then the site -- "the community" doing it is just another lie -- but an Administrator of the site will decide to censor this, falsely calling it "hiding" -- because a key point of the new censorship is to keep their SNEAKY CHEATS from becoming known.
Re: Re: Re: Re: So, "Gary": I see you're being GENEROUS again!
Unprecedented generosity of making others "First Word". How much do you pay for that? Or are you Timothy Geigner, aka "Dark Helmet" with Admin privileges and it's therefore free to you?
You first came to my notice for criticizing Techdirt! I predicted you wouldn't last here, remember? But you turned into one of the most prolific commenters: 1328 now. And yet in your first two years you made only a dozen comments? Weird.
You can't hide your identity, Timmy, when you repeat the same bombast and your Trump Derangement Syndrome keeps bringing up "El Cheetos". Sad.
I don’t think he’s going to fuck you, Blue Balls.
A White supremacist forum and a Black Lives Matter forum will always be moderated differently. You can’t make them both moderate the exact same way and expect them to retain their unique identities.
Question, Blue Balls: If a certain kind of distasteful speech is legal, what law (“common” or otherwise) says a given platform must host it?
The grand irony here is that if this site truly has little-to-no influence, you’re wasting your time shit-talking it even more than (you think) the rest of us are wasting on our comments and readership. I mean, if the site is such basic bullshit that no one pays attention to it, who else besides regular commentators is paying attention to your bullshit?
Re:
I would apply the law that is currently being applied in some EU Member States' bars. Yeah, those places where you drink alcohol.
They can't discriminate. That is, they can't deny entry to people based on protected characteristics.
They can set, let's say, a dress code. But they can't deny a person access to their bar based on their religion, sex, sexual orientation or race.
Any discrimination done in that regard is a felony.
If you want to have a white supremacist bar in the EU (at least in some MS), you're in for a bad day.
Of course, you're perfectly free to be a complete racist in your own home.
In the case of a site/forum:
For example, an MMORPG forum can ban any mention of other games except theirs, because their purpose is to promote their game, not others. They can't choose to ban only one game while allowing others.
A political forum can ban speech about idk, consoles, because that's not what they do. But it's a political forum, so they have to allow any talk about politics (as long as it isn't illegal).
Note that the rule applies to "a business". For example, Facebook and Twitter are businesses. Your average political forum might not be, as long as it doesn't make money and/or what it makes is limited.
Of course, with those rules I'd add stronger anti-liability rules like:
Still, don't want to keep an eye on all this shit? Don't make a business out of a social network or a forum. You can make a non-profit forum and moderate all you want.
Re: Re:
Ignoring for the moment the idea that a site could be forced to allow use by someone who was an ass in an inventive way, such that the way they were an ass wasn't specifically spelled out in the TOS (and where have I heard that logic before...), if violations of the TOS in general wouldn't be enough, then said TOS would quickly become vague enough to allow them to give the boot to anyone, or it would quickly be nullified by people finding new and inventive ways to skirt around it.
But it's a political forum, so they have to allow any talk about politics (as long as it isn't illegal).
This... would be a nightmare. Political forums would become battlegrounds, as the trolls/idiots from the various parties would go around filling up any forum run by another party with massive numbers of posts, not only forcing the owners from one party to host any and all content by opposing parties, no matter what they felt about it, but making any discussions between members all but impossible. If you've ever seen the comment section of a political video on YT and the absolute mess that tends to be, it would be like that except everywhere.
You've put more thought into this than a lot of people(sadly a good number of them politicians...), but even so there are some large problems with the idea, such that I still believe it would be a cure worse than the disease.
Re: Not handed down by God on mile-high titanium blocks. Changeable.
First, let's all look for the "Decency" in "Communications Decency Act"! Where the HELL did that go? ... Oh, right: long since ruled UN-Constitutional! Any reference to the intent of legislators already proven FLATLY WRONG on the major point is STUPID! Most of the Act is already THROWN OUT.
Perhaps you should stop spewing nonsense and learn about the actual history of the law.
It was two separate laws mashed together. One part was thrown out. It was written by Senator Exon. One part was not thrown out. It was written by Reps. Cox and Wyden. So, yes, it's fine to ignore the legislative intent of Exon's part. That got thrown out. But that's got nothing to do with 230.
And you would know this if you weren't so consistently wrong on everything and refusing to even take the first steps to cure your ignorance. It is almost as if you thrive by making shit up. Maybe stop doing that. It's been over a decade. At some point, being a total ignorant asshole on a forum you hate has to have diminishing returns.
Indexing
"Congress has immunized the re-publication of even false information."
This is why search engines can index Breitbart and the NYT.
You know it's going to be good when you don't even have to post "Where's Poochie" comments in the 230 thread.
I dislike White supremacists. I dislike their ideology. Under this suggested rule of yours, I could not delete White supremacist propaganda from a politics forum I run because “no sir, I don’t like it” isn’t a good enough reason. Similarly…
…I also couldn’t delete it because it is technically political speech.
Your suggestions would cause far more disorder and chaos than they would rein in. Then again, I suspect that may be the point.
They'd misquote anything
To be fair, the people who will misquote this would misquote, "To be, or not to be," if it served their purposes.
Re:
You're still assuming all viewers see the same effect from moderation decisions. If we were to apply Cox's idea of user control to Techdirt, you might configure the site to hide posts that other users have tagged as "racist", while someone else might choose to make those more visible and hide "social justice". It would be less necessary for each group to set up specific forums for themselves.
As another example, some people have complained when Techdirt uses certain "naughty" words in headlines. One headline recently quoted "fuck you". Given an option, some might configure the site to hide/bowdlerize those.
Re: They'd misquote anything
"or not"
Re: Re:
But that's got nothing to do with Stratton Oakmont v. Prodigy. That decision didn't involve Prodigy providing tools for users to block certain posts*; it involved moderators actively deleting posts that violated forum guidelines.
* I'll add that Prodigy did have a rudimentary blocklist -- it filled up too fast to be much use -- but that's it; nothing remotely like the category sorting you're describing. It's worth remembering that the CDA passed in 1996, and Prodigy's proprietary forum software was already pretty long in the tooth even then; if you think people were looking at the sort of advanced tagging/categorization features we see today, you're getting way ahead of yourself.
Re: Indexing
That level of false equivalence takes some serious chutzpah.
I'm no fan of the New York Times, but it's nowhere near the equivalent of Breitbart. It's not even the equivalent of the New York Post.
Re: Re: Re: Re: Re: Re: Re:
I'm inclined to agree with you on this point. Consider the examples given: Prodigy, AOL, CompuServe. These are classic examples of the "walled garden" form of internet access and at the national level, they were what people expected. While you could certainly open a browser from within AOL and go where you liked, it wasn't the common form of usage. The idea of an open pipe was far less common and tended to be offered by local providers. (I used both AOL and a local dialup provider at different times back in the 90s)
That said, I also think that on the whole, Section 230 has accomplished what Cox and Wyden intended it to do, even if it might not have done it in exactly the way they intended it to happen.
That doesn’t address how most sites don’t want to host specific types of speech regardless of whether users can filter it.
Re: sad low energy mental illness
This is what paranoid schizophrenia looks like.
Re:
You missed the part further down, though, that directly addresses moderation:
Re: Not handed down by God on mile-high titanium blocks. Changeable.
Oh really? Want to bet on that? Here's that section you like to throw around so much in all its textual glory:
See those parts I bolded? Here, let me make it clearer:
The provider can't be held liable for restricting access to content that the provider deems objectionable. Even if said content is Constitutionally protected.
Your entire argument is invalid. Now sit down and shut up.
You're still lying and misrepresenting everything
If you're going to copy and paste then I'm going to copy and paste:
No, he didn't; he was quoting the paragraph/section one down from the good faith clause, you nincompoop.
From the paragraph immediately following the one you are talking about, you dolt.
Wait's over:
https://www.law.cornell.edu/uscode/text/47/230
And:
https://www.google.com/search?rlz=1C1GCEB_enUS852US852&ei=rsUDXbSCK5K2swW92rvACQ&q=No+provider+or+user+of+an+interactive+computer+service+shall+be+held+liable+on+account+of+any+action+taken+to+enable+or+make+available+to+information+content+providers+or+others+the+technical+means+to+restrict+access+to+material+described+in+paragraph&oq=No+provider+or+user+of+an+interactive+computer+service+shall+be+held+liable+on+account+of+any+action+taken+to+enable+or+make+available+to+information+content+providers+or+others+the+technical+means+to+restrict+access+to+material+described+in+paragraph&gs_l=psy-ab.3..0i71l8.72225.74382..74918...0.0..0.0.0.......1....2j1..gws-wiz.Pgx6m4-4SEc&safe=active&ssui=on
See above links.
Well, since he ACTUALLY was quoting the paragraph down, he didn't falsify anything.
Or maybe you are trying to misrepresent what Mike was saying to keep from being embarrassed that you are wrong. Again.
Re: Re:
Not coverage, but advertising, yes. The FCC has already done so; no need to pass another law.