It's One Thing For Trolls And Grandstanding Politicians To Get CDA 230 Wrong, But The Press Shouldn't Help Them
from the stop-this-nonsense dept
There's an unfortunate belief among some internet trolls and grandstanding politicians that Section 230 of the Communications Decency Act requires platforms to be "neutral," and that any attempt to moderate content -- or any perceived bias in how a platform moderates -- somehow removes 230 protections. Unfortunately, it appears that many in the press are buying into this flat-out incorrect analysis of CDA 230. We first saw it last year, in Wired's giant cover story about Facebook's battles, which twice suggested that too much moderation might cost Facebook its CDA 230 protections:
But if anyone inside Facebook is unconvinced by religion, there is also Section 230 of the 1996 Communications Decency Act to recommend the idea. This is the section of US law that shelters internet intermediaries from liability for the content their users post. If Facebook were to start creating or editing content on its platform, it would risk losing that immunity—and it’s hard to imagine how Facebook could exist if it were liable for the many billion pieces of content a day that users post on its site.
This is not just wrong, it's literally backwards from reality. As we've pointed out, anyone who actually reads the law should know that it was written to encourage moderation. Section (b)(4) directly says that one of the policy goals of the law is "to remove disincentives for the development and utilization of blocking and filtering technologies." And, more importantly, section (c)(2) makes it clear that Section 230's intent was to encourage moderation by taking away liability for any moderation decisions:
No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected...
In short: if a site decides to remove content that it believes is "objectionable" (including content it finds to be harassing), there is no liability for the platform even if the content blocked is "constitutionally protected."
Indeed, this was the core point of CDA 230 and the key reason why Rep. Chris Cox wrote the law in the first place. As was detailed in Jeff Kosseff's new book on the history of Section 230, Cox was spurred into action after reading about the awful ruling in the Stratton Oakmont v. Prodigy case, in which a judge decided that since Prodigy did some moderation of its forums, it was liable for any content that was left up. This was the opposite finding from another lawsuit, Cubby v. CompuServe, which found CompuServe not liable, since it didn't do any moderation.
However, part of Prodigy's pitch was that it would be the more "family friendly" internet service, compared to the anything-goes nature of CompuServe. The ruling in the Stratton Oakmont case would have made that effectively impossible -- and thus Section 230 was created explicitly to encourage different platforms to experiment with different models of moderation, so that different platforms could choose to treat content differently.
Unfortunately, it seems that this myth that CDA 230 requires "neutrality" is leaking out beyond just the trolls and grandstanding politicians -- and into the more mainstream media as well. Last weekend, the Washington Post ran a column by Megan McArdle about Facebook's recent decision to ban a bunch of high-profile users. Whether or not you agree with Facebook's decision, hopefully everyone can agree that this description gets Section 230 exactly backwards:
The platforms are immune from such suits under Section 230 of the Communications Decency Act of 1996. The law treats them as a neutral pass-through — something like a community bulletin board — and doesn’t hold them responsible for what users post there. That is eminently practical given the sheer volume of material the platforms have to deal with. But it creates a certain tension when a company such as Facebook argues that it has every right to kick off people who say things it considers abhorrent.
Facebook is acting more and more like a media company, with a media company’s editorial oversight (not to mention an increasing share of the industry’s ad revenue). If Facebook is going to behave like a media provider, picking and choosing what viewpoints to represent, then it’s hard to argue that the company should still have immunity from the legal constraints that old-media organizations live with.
It's not hard to argue that at all. Once again, the entire point of Section 230 was to encourage moderation, not to insist on neutrality. The Washington Post, of all newspapers, should know better than to misrepresent Section 230.
But it wasn't the only one. Just days later, Vox posted one of its "explainer" pieces, also about Facebook's recent bans. The Vox piece, at least, quotes Section 230, but only section (c)(1) (the part that gets more attention), ignoring (c)(2), which is what makes it clear that the law encourages moderation. Instead, Vox's Jane Coaston falsely suggests that Section 230 draws a distinction between "media" companies and "platform" companies. It does not.
But if Facebook is a publisher, then it can exercise editorial control over its content — and for Facebook, its content is your posts, photos, and videos. That would give Facebook carte blanche to monitor, edit, and even delete content (and users) it considered offensive or unwelcome according to its terms of service — which, to be clear, the company already does — but would make it vulnerable to the same types of lawsuits as media companies are more generally.
If the New York Times or the Washington Post published a violent screed aimed at me or published blatantly false information about me, I could hypothetically sue the New York Times for doing so (and some people have).
So instead, Facebook has tried to thread an almost impossible needle: performing the same content moderation tasks as a media company might, while arguing that it isn’t a media company at all.
This "publisher" v. "platform" concept is a totally artificial distinction that has no basis in the law. News publishers are also protected by Section 230 of the CDA. All CDA 230 does is protect a website from being held liable for user content or moderation choices. It does not cover content created by the company itself. In short, the distinction is not "platform" or "publisher" it's "content creator" or "content intermediary." Contrary to Coaston's claims, Section 230 equally protects the NY Times and the Washington Post if it chooses to host and/or moderate user comments. It does not protect content produced by those companies itself, but similarly, Section 230 does not protect content produced by Facebook itself.
There are enough issues to be concerned about regarding the internet and big platforms these days; having the media repeatedly misrepresent Section 230 of the CDA and suggest -- falsely -- that it's a special gift to internet platforms doesn't help matters at all. CDA 230 protects platforms that host user speech -- including from any moderation choices they make. It does not require them to be neutral, and it does not require them to define themselves as a "platform" instead of a "publisher." News organizations should know better and should stop repeating this myth.
Filed Under: cda 230, jane coaston, liability, megan mcardle, platforms, reporting, section 230