from the don't-break-the-internet dept
Festivus came early this year — or perhaps two months late. The Department of Justice held a workshop Wednesday: Section 230 – Nurturing Innovation or Fostering Unaccountability? (archived video and agenda). This was perhaps the most official “Airing of Grievances” we’ve had yet about Section 230. It signals that the Trump administration has declared war on the law that made the Internet possible.
In a blistering speech, Trump’s embattled Attorney General, Bill Barr, blamed the 1996 law for a host of ills, especially the spread of child sexual abuse material (CSAM). That proved a major topic of discussion among panelists. Writing in Techdirt three weeks ago, TechFreedom’s Berin Szóka analyzed draft legislation that would use Section 230 to force tech companies to build in backdoors for the U.S. government in the name of stopping CSAM — and predicted that Barr would use this workshop to lay the groundwork for that bill. While Barr never said the word “encryption,” he clearly drew the connection — just as Berin predicted in a shorter piece published just before Barr’s speech. Berin’s long Twitter thread laid out the CSAM-230 connection the night before the workshop and continued throughout it.
This piece ran quite long, so we’ve broken it into three parts:
- This post, on why Section 230 is important, how it works, and how panelists proposed to amend it.
- Part two, discussing how Section 230 has never applied to federal criminal law, and the host of questions that remain about new federal laws, state criminal laws, and more.
- Part three, which will be posted next week, discussing what’s really driving the DOJ. Are they just trying to ban encryption? And can we get tough on CSAM without amending Section 230 or banning encryption?
Why Section 230 Is Vital to the Internet
The workshop’s unifying themes were “responsibility” and “accountability.” Critics claim Section 230 stands in the way of stopping bad actors online. In fact, Section 230 places responsibility and liability on the correct party: whoever actually created the content, whether it is defamatory, harassing, or just generally awful. Section 230 has never prevented legal action against individual users — or against tech companies for content they themselves create (or for violations of federal criminal law, as we discuss in Part II). But Section 230 does ensure that websites won’t face a flood of lawsuits for every piece of content they publish. One federal court decision (which ultimately found the website responsible for helping to create user content and thus not protected by Section 230) put the point best:
Websites are complicated enterprises, and there will always be close cases where a clever lawyer could argue that something the website operator did encouraged the illegality. Such close cases, we believe, must be resolved in favor of immunity, lest we cut the heart out of section 230 by forcing websites to face death by ten thousand duck-bites, fighting off claims that they promoted or encouraged — or at least tacitly assented to — the illegality of third parties.
Several workshop panelists talked about “duck-bites” but none really explained the point clearly: One duck-bite can’t kill you, but ten thousand might. Likewise, a single lawsuit may be no big deal, at least for large companies, but the scale of content on today’s social media is so vast that, without Section 230, a large website might face far more than ten thousand suits. Conversely, litigation is so expensive that even one lawsuit could well force a small site to give up on hosting user content altogether.
Even a single lawsuit can mean many duck-bites: an extended process of appearances, motions, discovery and, ultimately, either trial or settlement, all of which can be ruinously expensive. The most cumbersome, expensive, and invasive part may be “discovery”: if the plaintiff’s case turns on a question of fact, they can force the defendant to produce evidence on that question. That can mean turning a business inside out — and protracted fights over what evidence you do and don’t have to produce. The process can easily be weaponized, especially by someone with a political ax to grind.
Section 230(c)(1) avoids all of that by allowing courts to dismiss lawsuits without defendants having to go through discovery or argue difficult questions of First Amendment case law or the potentially infinite array of causes of action plaintiffs might assert. Some have argued that we don’t need Section 230(c)(1) because websites would ultimately prevail on First Amendment grounds, or because the common law might have developed to let websites prevail in court. But the burden of litigating such cases at the scale of the Internet — i.e., for each of the billions and billions of pieces of user-created content online, or even the thousands, hundreds, or perhaps even dozens of comments that a single, humble website might host — would be impossible to manage.
As Profs. Jeff Kosseff and Eric Goldman explained on the first panel, Congress understood that websites wouldn’t host user content if the law imposed on them the risk of even a few duck-bites per posting. But Congress also understood that, if websites faced increased liability for attempting to moderate harmful or objectionable user content on their sites, they’d do less content moderation — and maybe none at all. That was the risk created by Stratton Oakmont, Inc. v. Prodigy Services Co. (1995): whereas CompuServe had, in 1991, been held not responsible for user content because it did not attempt to moderate it, Prodigy was held responsible because it did.
Section 230 solved both problems. And it was essential that, the year after Congress enacted Section 230, a federal appeals court in Zeran v. America Online, Inc. construed the law broadly. Zeran ensured that Section 230 would protect websites generally against liability for user content — essentially, it doesn’t matter whether plaintiffs call websites “publishers” or “distributors.” Pat Carome, a partner at WilmerHale and lead defense counsel in Zeran, deftly explained the road not taken: if AOL, as a “distributor,” had a legal duty to take down any content anyone complained about, then anything anyone complained about would come down, and users would lose the opportunity to speak at all. Such a notice-and-takedown system just won’t work at the scale of the Internet.
Why Both Parts of Section 230 Are Necessary
Section 230(c)(1) says simply that “No provider or user of an interactive computer service [content host] shall be treated as the publisher or speaker of any information provided by another information content provider [content creator].” Many Section 230 critics, especially Republicans, have seized upon this wording, insisting that Facebook, in particular, really is a “publisher” and so should be held “accountable” as such. This misses the point of Section 230(c)(1), which is to make the publisher/distributor distinction irrelevant: websites can’t be held liable for content created by others, whatever label plaintiffs attach to them.
Miami Law Professor Mary Anne Franks proposed scaling back, or repealing, 230(c)(1) but leaving 230(c)(2)(A), which shields “good faith” moderation practices. She claimed this section is all that tech companies need to continue operations as “Good Samaritans.”
But as Prof. Goldman has explained, you need both parts of Section 230 to protect Good Samaritans: (c)(1) protects decisions to publish or not to publish broadly, while (c)(2) protects only proactive decisions to remove content. Roughly speaking, (c)(1) protects against complaints that content should have been taken down or taken down faster, while (c)(2) protects against complaints that content should not have been taken down or that content was taken down selectively (or in a “biased” manner).
Moreover, (c)(2) turns on an operator’s “good faith,” a question of fact that a defendant generally cannot resolve on a motion to dismiss. That question of fact opens the door to potentially ruinous discovery — many duck-bites. A lawsuit can usually be dismissed under Section 230(c)(1) for relatively trivial legal costs (say, under $10,000). But relying on a common law or 230(c)(2)(A) defense — as opposed to a statutory immunity — means having to argue both issues of fact and harder questions of law, which could easily raise that cost tenfold or more. Having to spend, say, $200,000 to win even a groundless lawsuit gives such claims an enormous “nuisance value” — which, in turn, encourages litigation aimed at shaking companies down to settle out of court.
Class action litigation significantly increases websites’ legal exposure: though fewer in number, class actions are much harder to defeat, because plaintiffs’ lawyers are generally sharp and intimately familiar with how to use the legal system to apply maximum pressure to settle. This is a largely American phenomenon, and it helps to explain why Section 230 is uniquely necessary in the United States.
Imagining Alternatives
The final panel discussed “alternatives” to Section 230. FTC veteran Neil Chilson (now at the Charles Koch Institute) hammered a point that can’t be made often enough: it’s not enough to complain about Section 230; we have to evaluate specific proposals to amend it and ask whether they would make users better off. Indeed! That requires considering the benefits of Section 230(c)(1) as a true immunity that allows websites to avoid the duck-bites of the litigation (or state/local criminal prosecution) process. Here are a few proposed alternatives, focused on expanding civil liability. Part II (to be posted later today) will discuss expanding state and local criminal liability.
Imposing Size Caps on 230’s Protections
Critics of Section 230 often try to sidestep startup concerns by suggesting that any 230 amendments preserve the original immunity for smaller companies. For example, Sen. Hawley’s Ending Support For Internet Censorship Act would make 230 protections contingent upon FTC certification of a company’s political neutrality if the company had more than 30 million active monthly U.S. users, more than 300 million active monthly users worldwide, or more than $500 million in global annual revenue.
Julie Samuels, Executive Director of Tech:NYC, warned that such size caps would “create a moat around Big Tech,” discouraging the startups she represents from growing. Worse, a size cap would only further incentivize startups to sell themselves to Big Tech before they lose immunity. Prof. Goldman noted two reasons why it’s tricky to distinguish between large and small players on the Internet: (1) several relatively small companies, e.g., Craigslist, Wikipedia, and Reddit, are among the top 15 U.S. services, with small staffs but large footprints; and (2) some enormous companies, e.g., Cloudflare and IBM, rarely deal with user-generated content, yet would still face all of the obligations that apply to companies with a much larger user-generated footprint. You don’t have to feel sorry for IBM to see the problem for users: laws like Hawley’s could drive such companies to get out of the business of hosting user-generated content altogether, deciding that it’s too marginal to be worth the burden.
Holding Internet Services Liable for Violating their Terms of Service
Victims’ rights lawyer Carrie Goldberg and other panelists proposed amending Section 230 to hold Internet services liable for violating their terms of service agreements. Usually, when breach of contract or promissory estoppel claims are brought against services, they involve post or account removals. Courts almost always reject such claims on 230(c)(1) grounds, as indirect attempts to hold the service liable as a publisher for those decisions. After all, Congress clearly intended to encourage websites to engage in content moderation, and removing posts or accounts is critical to how social media keep their sites usable.
What Goldberg really wants is liability for failing to remove the type of content that sites explicitly disallow in their terms (e.g., harassment). But such liability would simply cause Internet services to make their terms of service less specific — and some might even stop banning harassment altogether. Making sites less willing to remove (or ban) harmful content is precisely the “moderator’s dilemma” that Section 230 was designed to avoid.
Conversely, some complain that websites’ terms of service are too vague — especially Republicans, who argue that, without more specific definitions of objectionable content, websites will wield their discretion in politically biased ways. But it’s impossible for a service to foresee every type of awful content its users might create, so requiring more specific terms would mean constantly updating them. And if websites could be sued for failing to remove every piece of content they say they prohibit… that’s a lot of angry ducks. The tension between these two complaints should be clear. Section 230, as written, avoids the problem by simply protecting website operators from having to litigate these questions.
Finally, contract law generally requires a plaintiff to prove both breach and damages. With online content, damages are murky: how, exactly, is one harmed by a violation of a terms of service? It’s unclear exactly what Goldberg wants. If she’s simply saying Section 230 should be interpreted, or amended, not to block contract claims based on supposed TOS violations, most of those claims are going to fail in court anyway for lack of damages. But if such claims let a plaintiff get a foot in the door, surviving an initial motion to dismiss on some vague theory of alleged harm, then even having to defend against lawsuits that will ultimately fail creates a real danger of death-by-duck-bites.
Compounding the problem — especially if Goldberg is really talking about writing a new statute — is the possibility that plaintiffs’ lawyers could tack on other, even flimsier causes of action. These should be dismissed under Section 230, but, again, more duck-bites. That’s precisely the issue raised by Patel v. Facebook, where the Ninth Circuit allowed a lawsuit under Illinois’ biometric privacy law to proceed based on a purely technical violation of the law (failure to deliver the exact form of notice required for the company’s facial recognition tool). The Ninth Circuit concluded that such a violation, even if it amounted to “intangible damages,” was sufficient to confer standing on plaintiffs to sue as a class, without requiring individual damage showings by each member of the class. We recently asked the Supreme Court to overrule the Ninth Circuit, but it declined to take the case, leaving open the possibility that plaintiffs can get into federal court without alleging any clear damages. The result in Patel, as one might imagine, was a quick settlement: Facebook agreed to pay $550 million shortly after the petition for certiorari was denied, given that the statutory damages available to the class could have amounted to many billions. Even the biggest companies can be duck-bitten into massive settlements.
Limiting Immunity to Traditional Publication Torts
Several panelists claimed Section 230(c)(1) was intended to cover only traditional publication torts (defamation, i.e., libel and slander) and that, over time, courts have wrongly broadened the immunity’s coverage. But there’s just no evidence for this revisionist account: Prof. Kosseff, after exhaustive research on Section 230’s legislative history for his definitive book, found nothing to support it. And, as Carome noted, if that account were right, Congress wouldn’t have needed to include the statute’s other, non-defamation exceptions, like intellectual property and federal criminal law.
Anti-Conservative Bias
Republicans have increasingly fixated on one overarching complaint: that Section 230 allows social media and other Internet services to discriminate against them, and that the law should require political neutrality. (Given the ambiguity of that term, and the difficulty of assessing patterns at the scale of the content available on today’s Internet, in practice this requirement would actually mean giving the administration the power to force websites to favor it.)
The topic wasn’t discussed much during the workshop but, according to multiple reports from participants, it dominated the ensuing roundtable. That’s not surprising, given that the roundtable featured only guests invited by the Attorney General. The invite list isn’t public and the discussion was held under the Chatham House Rule, but it’s a safe bet that it was a mix of serious (but generally apolitical) Section 230 experts and the Star Wars cantina freak show of right-wing astroturf activists who have made a cottage industry out of extending the Trumpist persecution complex to the digital realm.
TechFreedom has written extensively on why it’s unconstitutional to insert the government into website operators’ exercise of editorial discretion. For example, read our statement on Sen. Hawley’s proposed legislation to regulate the Internet and Berin’s 2018 Congressional testimony on the idea (and Section 230, at that shit-show of a House Judiciary hearing that featured Diamond and Silk). Also read our 2018 letter to Jeff Sessions, Barr’s predecessor, on the unconstitutionality of attempting to coerce websites in how they exercise that discretion.
Conclusion
Section 230 works by ensuring that duck-bites can’t kill websites (though federal criminal prosecution can, as Backpage.com discovered the hard way — see Part II). This avoids both the moderator’s dilemma (facing more liability for trying to clean up harmful content) and the risk that websites simply stop hosting user content altogether. Without Section 230(c)(1)’s protection, the costs of compliance, implementation, and litigation could strangle smaller companies before they even get off the ground. Far from undermining “Big Tech,” rolling back Section 230 could entrench today’s giants.
Several panelists pooh-poohed the “duck-bites” problem, insisting that each of those bites involves a real victim on the other side. That’s fair, to a point. But again, Section 230 doesn’t prevent anyone from holding responsible the person who actually created the content. Prof. Kate Klonick (St. John’s Law) reminded the workshop audience of “Balk’s law”: “THE INTERNET IS PEOPLE. The problem is people. Everything can be reduced to this one statement. People are awful. Especially you, especially me. Given how terrible we all are it’s a wonder the Internet isn’t so much worse.” Indeed, as Prof. Goldman noted, however much new technologies might aggravate specific problems, better technologies are essential to facilitating better interaction. We can’t hold back the tide of change; the best we can do is try to steer the Digital Revolution in better directions. And without Section 230, innovation in content moderation technologies would be impossible.
For further reading, we recommend the seven principles we drafted last summer with a group of leading Section 230 experts. Several panelists referenced them at the workshop, but they didn’t get the attention they deserved. Signed by 27 other civil society organizations across the political spectrum and 53 academics, the principles are, we think, the best starting point yet offered for how to think about Section 230.
Next up, in Part II, how Section 230 intersects with the criminal law. And, in Part III... what’s really driving the DOJ, banning encryption, and how to get tough on CSAM.
Filed Under: bill barr, cda, content moderation, csam, internet, law, publishing, section 230