The Importance Of Defending Section 230 Even When It's Hard
from the preventing-tough-cases-from-making-bad-law dept
The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter. Fields v. Twitter is one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an Internet platform.
There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be held liable for the harms resulting from their users' use of their services, liability would still require a clear connection between the use of the platform and the harm. Otherwise, under the general rules of tort law, there could be no liability. In this particular case, for instance, the connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs' family members is fairly weak.
But we left that point for Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases with their own terrible facts, the Ninth Circuit in particular has been willing to make exceptions to that rule. So while we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that correctly applied Section 230 to dismiss the case, we also had an eye toward the longer-term goal of reversing this trend.
The problem is that, like the First Amendment itself, speech protections only work as speech protections when they always work. When exemptions can be found here and there, suddenly none of these protections are effective, and the speech of those who were counting on them is chilled, because no one can be sure whether the speech will ultimately be protected. In the case of Section 230, that chilling arises because if platforms cannot be sure whether they will be protected from liability for their users' speech, they will have to assume they are not. They will then have to make all the censoring choices with respect to their users' content that Section 230 was designed to prevent, just to avoid the specter of potentially crippling liability.
One of the points we emphasized in our brief was how such an outcome flouts what Congress intended when it passed Section 230. As we said then, and will say again as many times as we need to, the point of Section 230 is to encourage the most beneficial online speech while also minimizing the worst speech. To see how this dual-purpose intent plays out we need to look at the statute as a whole, beyond the part that usually gets the most attention, Subsection (c)(1), which immunizes platforms from liability arising from their users' speech. There is another, equally important part of the statute, Subsection (c)(2), that immunizes platforms from liability when they take steps to minimize harmful online content on their systems. This subsection rarely gets attention, but it's important not to overlook, especially as people look at the effect of the first subsection and worry that it might encourage too much "bad" speech. Congress anticipated this problem and built in a remedy as part of a balanced approach to encourage the most good speech and the least bad speech. The problem with now holding online services liable for bad uses of their platforms is that it distorts this balance, and in doing so undermines both of these goals.
We used the cases of Barnes v. Yahoo and Doe 14 v. Internet Brands to illustrate this point. Both are cases where the Ninth Circuit did make exemptions, finding Section 230 not to apply to certain negative uses of Internet platforms. In Barnes, for instance, Section 230 was found to apply to the part of the claim directly relating to the speech in question, which was a good result. But the lawsuit also included a promissory estoppel claim, and the Court decided that because that claim was not directly related to liability arising from content, it could go forward. The problem was that Yahoo had separately promised to take down certain content, and so the Court found it potentially liable for not having lived up to its promise. As we pointed out, the practical effect of Barnes is that platforms now know never to promise to take content down. Even though Congress intended for Section 230 to help Internet platforms perform a hygiene function and keep the Internet free of the worst content, by discouraging platforms from going the extra mile the Barnes decision has instead had the opposite effect from the one Congress intended. That's why courts should not continue to find reasons to limit Section 230's applicability. Even when they think they have good reason to carve out an exception, the goals behind that exception will be better advanced when Section 230's protection remains robust.
We also pointed out that carving exemptions out of Section 230's coverage would undermine the other policy goal behind the statute, encouraging more online speech. In this case the plaintiffs want providers to have to deny terrorists the use of their platforms. As a separate amicus brief by the Internet Association explained, platforms actually want to keep terrorists off their services and go to great lengths to try to do so. But as the saying goes, "One man's terrorist is another man's freedom fighter." In other words, deciding who to label a terrorist can be a difficult, and extremely political, decision to make. It's certainly beyond the ken of an "intermediary" to determine, especially a smaller, less capitalized, or even individual one. (Have you ever had people comment on one of your Facebook posts? Congratulations! You are an intermediary, and Section 230 applies to you too.)
Even if the rule were that a platform had to check prospective users' names against a government list, having to make these sorts of registration-denial decisions raises significant constitutional concerns, particularly regarding the right to speak anonymously and the prohibition against prior restraint. There are also often significant constitutional problems with how these lists are compiled in the first place. As the amicus brief by EFF and CDT also argued, we can't create a system where the statutory protection that platforms depend on to foster online free speech is conditioned on coercing them to undermine it.
Filed Under: fields v. twitter, free speech, intermediary liability, material support for terrorism, platforms, section 230, tamara fields, terrorism
Companies: twitter