Fighting Hate Speech Online Means Keeping Section 230, Not Burying It
from the first-do-no-harm dept
At Free Press, we work in coalition and on campaigns to reduce the proliferation of hate speech, harassment, and disinformation on the internet. It’s certainly not an easy or uncomplicated job. Yet this work is vital if we’re going to protect the democracy we have and also make it real for everyone — remedying the inequity and exclusion caused by systemic racism and other centuries-old harms seamlessly transplanted online today.
Politicians across the political spectrum desperate to “do something” about the unchecked political and economic power of online platforms like Google and Facebook have taken aim at Section 230, passed in 1996 as part of the Communications Decency Act. Changing or even eliminating this landmark provision appeals to many Republicans and Democrats in DC right now, even if they hope for diametrically opposed outcomes.
People on the left typically want internet platforms to bear more responsibility for dangerous third-party content and to take down more of it, while people on the right typically want platforms to take down less. Or at least less of what’s sometimes described as “conservative” viewpoints, which too often in the Trump era have been unvarnished white supremacy and unhinged conspiracy theories.
Free Press certainly aligns with those who demand that platforms do more to combat hate and disinformation. Yet we know that keeping Section 230, rather than radically altering it, is the way to encourage that. That may sound counter-intuitive, but only because of the confused conversation about this law in recent years.
Preserving Section 230 is key to preserving free expression on the internet, and to making it free for all, not just for the privileged. Section 230 lowers barriers for people to post their ideas online, but it also lowers barriers to the content moderation choices that platforms have the right to make.
Changes to Section 230, if any, have to retain this balance and preserve the principle that interactive computer services are legally liable for their own bad acts but not for everything their users do in real time and at scale.
Powerful Platforms Are Still Powering Hate, and Only Slowly Changing Their Ways
Online content platforms like Facebook, Twitter and YouTube are omnipresent. Their global power has resulted in privacy violations, facilitated civil rights abuses, provided white supremacists and other violent groups a place to organize, and enabled foreign election interference and the viral spread of disinformation, hate and harassment.
In the last few months some of these platforms have begun to address their role in the proliferation and amplification of racism and bigotry. Twitter recently updated its policies to ban links to hateful content hosted offsite. That resulted in the de-platforming of David Duke, who had systematically skirted Twitter’s rules by linking out to hateful content across the internet while observing some limits on what he said on Twitter itself.
Reddit also updated its policies on hate and removed several subreddits. Facebook restricted “boogaloo” and QAnon groups. YouTube banned several white supremacist accounts. Yet despite these changes and our years of campaigning for these kinds of shifts, hate still thrives on these platforms and others.
Some in Congress and on the campaign trail have proposed legislation to rein in these companies by changing Section 230, which shields platforms and other websites from legal liability for the material their users post online. That’s coming from those who want to see powerful social networks held more accountable for third-party content on their services, but also from those who want social networks to moderate less and be more “neutral.”
Taking away Section 230 protections would alter the business models of not just big platforms but every site with user-generated material. And modifying or even getting rid of these protections would not solve the problems often cited by members of Congress who are rightly focused on racial justice and human rights. In fact, improper changes to the law would make these problems worse.
That doesn’t make Section 230 sacrosanct, but the dance between the First Amendment, a platform’s typical immunity for publishing third-party speech, and that same platform’s full responsibility for its own actions, is a complex one. Any changes proposed to Section 230 should be made deliberately and delicately, recognizing that amendments can have consequences not only unintended by their proponents but harmful to their cause.
Revisionist History on Section 230 Can’t Change the Law’s Origins or Its Vitality
To follow this dance, it’s important to know exactly what Section 230 is and what it does.
Written in the early web era in 1996, the first operative provision in Section 230 reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
When a book or a newspaper goes to print, its publisher is legally responsible for all the words printed. If those words are plagiarized, libelous, or otherwise unlawful, that publisher may face legal repercussions. In Section 230’s terms, such publishers are the law’s “information content provider[s].”
Wiping away Section 230 could revert the legal landscape to the pre-1996 status quo. That’s not a good thing. At the time, a pair of legal decisions had put into a bind any “interactive computer service” that merely hosts or transmits content for others. One case held that a web platform which did moderate content could be sued for libel (just as the original speaker or poster could be) if alleged libel slipped past the platform’s moderators. The other held that sites which did not moderate were not exposed to such liability.
Before Section 230 became law, this pair of decisions meant websites were incentivized to go in one of two directions: either don’t moderate at all, tolerating not just off-topic comments but all kinds of hate speech, defamation, and harassment on their sites; or vet every single post, leading inexorably to massive takedowns and removal of anything that might plausibly subject them to liability for statements made by their users.
The authors of Section 230 wanted to encourage the owners of websites and other interactive computer services to curate the content on their sites as they saw fit. But under the case law of the day, moderating at all could have made those websites just as responsible as newspapers for anything anyone said on their platforms.
In that state of affairs, someone like Mark Zuckerberg or Jack Dorsey would have the legal responsibility to approve every single post made on their services. Alternatively, they would have needed to take a complete, hands-off approach. The overwhelming likelihood is that under a publisher-liability standard those sites would not exist at all, at least not in anything like their present form.
There’s an awful lot we’re throwing out with the bathwater if we attack not just the abuses of ad-supported and privacy-invasive social-media giants but all sites that allow users to share content on platforms they don’t own. Smaller sites likely couldn’t make a go of it at all, even if a behemoth like Facebook or YouTube could attempt the monumental task of bracing for potential lawsuits over the thousands of posts made every second of the day by their billions of users. Only the most vetted, sanitized, and anodyne discussions could take place in whatever became of social media. Or, at the other extreme, social media would descend into an unfiltered and toxic cesspool of spam, fraudulent solicitations, porn, and hate.
Section 230’s authors struck a balance for interactive computer services that carry other people’s speech: platforms should have very little liability for third-party content, except when it violates federal criminal law and intellectual property law.
As a result, websites of all sizes exist across the internet. A truly countless number of these — like Techdirt itself — have comments or content created by someone other than the owner of the website. The law preserved the ability of those websites, regardless of their size, to tend to their own gardens and set standards for the kinds of discourse they allow on their property without having to vet and vouch for every single comment.
That was the promise of Section 230, and it’s one worth keeping today: an online environment where different platforms would try to attract different audiences with varying content moderation schemes that favored different kinds of discussions.
But we must acknowledge where the bargain has failed too. Section 230 is necessary but not sufficient to make competing sites and viewpoints viable online. We also need open internet protections, privacy laws, antitrust enforcement, new models for funding quality journalism in the online ecosystem, and lots more.
Taking Section 230 off the books isn’t a panacea or a pathway to all of those laudable ends. Just the opposite, in fact.
We Can’t Use Torts or Criminal Law to Curb Conduct That Isn’t Tortious or Criminal
Hate and unlawful activity still flourish online. A platform like Facebook hasn’t done enough, even in response to activist pressure and advertiser boycotts, to further modify its policies or to consistently enforce the existing terms of service that ban hateful content.
There are real harms that lawmakers and advocates see when it comes to these issues. It’s not just an academic question around liability for carrying third-party content. It’s a life and death issue when the information in question incites violence, facilitates oppression, excludes people from opportunities, threatens the integrity of our democracy and elections, or threatens our health in a country dealing so poorly with a pandemic.
Should online platforms be able to plead Section 230 if they host fraudulent advertising or revenge porn? Should they avoid responsibility for facilitating either online or real-world harassment campaigns? Or use 230 to shield themselves from responsibility for their own conduct, products, or speech?
Those are all fair questions, and at Free Press we’re listening to thoughtful proposed remedies. For instance, Professor Spencer Overton has argued forcefully that Section 230 does not exempt social-media platforms from civil rights laws when they run targeted ads that violate voting rights and perpetuate discrimination.
Sens. John Thune and Brian Schatz have steered away from a takedown regime like the automated one that applies to copyright disputes online, and towards a more deliberative process that could make platforms remove content once they get a court order directing them to do so. This would make platforms more like distributors than publishers, like a bookstore that’s not liable for what it sells until it gets formal notice to remove offending content.
However, not all amendments proposed or passed in recent times have been so thoughtful, in our view. Changes to 230 must take the possibility of unintended consequences and overreach into account, no matter how surgical proponents believe an amendment to be. Recent legislation shows the need for clearly articulated guardrails. In an understandable attempt to cut down on sex trafficking, a law commonly known as FOSTA (the “Fight Online Sex Trafficking Act”) changed Section 230 to make websites liable under state criminal law for the knowing “promotion or facilitation of prostitution.”
FOSTA and the state laws it ties into did not precisely define what those terms meant, nor set the level of culpability for sites that unknowingly or negligently host such content. As a result, sites used by sex workers to share information about clients or even used for discussions about LGBTQIA+ topics having nothing to do with solicitation were shuttered.
So FOSTA chilled lawful speech, but also made sex workers less safe and the industry less accountable, harming some of the people the law’s authors fervently hoped to protect. This was the judgment of advocacy groups like the ACLU that opposed FOSTA all along, but also academics who support changes to Section 230 yet concluded FOSTA’s final product was “confusing” and not “executed artfully.”
That kind of confusion and poor execution is possible even when some of the targeted conduct and content is clearly unlawful. But rewriting Section 230 to facilitate the takedown of hate speech that is not currently unlawful would be even trickier, and fundamentally incoherent. Making platforms liable for speech and conduct that would not expose the original speaker to liability would chill expression, and likely still wouldn’t lead to sites making consistent choices about what to take down.
The Section 230 debate ought to be about when it’s appropriate or beneficial to impose legal liability on parties hosting the speech of others. Perhaps there should also be a broader debate about the legal limits of speech itself. But that conversation has to happen honestly and on its own terms, not get shoehorned into the 230 debate.
Section 230 Lets Platforms Choose To Take Down Hate
Platforms still aren’t doing enough to stop hate, but what they are doing is in large part thanks to having 230 in place.
The second operative provision in the statute is what Donald Trump, several Republicans in Congress, and at least one Republican FCC commissioner are targeting right now. It says “interactive computer services” can “in good faith” take down content not only if it is harassing, obscene or violent, but even if it is “otherwise objectionable” and “constitutionally protected.”
That’s what much hate speech is, at least under current law. And platforms can take it down thanks not only to the platforms’ own constitutionally protected rights to curate, but because Section 230 lets them moderate without exposing themselves to publisher liability as the pre-1996 cases suggested.
That gives platforms a freer hand to moderate their services. It lets Free Press and its partners demand that platforms enforce their own rules against the dissemination of hateful or otherwise objectionable content that isn’t unlawful, but without tempting platforms to block a broader swath of political speech and dissent up front.
Tackling the spread of online hate will require a more flexible multi-pronged approach that includes the policies recommended by Change the Terms, campaigns like Stop Hate for Profit, and other initiatives. Platforms implementing clearer policies, enforcing them equitably, enhancing transparency, and regularly auditing recommendation algorithms are among these much-needed changes.
But changing Section 230 alone won’t answer every question about hate speech, let alone about online business models that suck up personal information to feed algorithms, ads, and attention. We need to change those through privacy legislation. We need to fund new business models too, and we need to facilitate competition between platforms on open broadband networks.
We need to make huge corporations more accountable by limiting their acquisition of new firms, changing stock voting rules so people like Mark Zuckerberg aren't the sole emperors over these vastly powerful companies, and giving shareholders and workers more rights to ensure that companies are operated not just to maximize revenue but in socially responsible ways as well.
Preserving not just the spirit but the basic structure of Section 230 isn’t an impediment to that effort; it’s a key part of it.
Gaurav Laroia and Carmen Scurato are both Senior Policy Counsel at Free Press.
Filed Under: cda 230, content moderation, disinformation, hate speech, misinformation, section 230