from the catalog-of-bad-ideas dept
Some lawmakers are candid about their desire to repeal Section 230 entirely. Others, however, express more of an interest in trying to split the baby: "reforming" the statute in some way that would magically fix all the problems with the Internet without doing away with the whole thing, and therefore the whole Internet as well. This post explores several of the ways they propose to change the statute, ostensibly without outright repealing it.
And several of the reasons why each proposed change might as well be an outright repeal, given each one's practical effect.
But before getting into the specifics about why each type of change is bad, it is important to recognize the big reason why just about every proposal to change Section 230, even just a little bit, undermines it to the point of uselessness: because if you have to litigate whether Section 230 applies to you, you might as well not have it on the books in the first place. Which is why there's really no such thing as a small change, because if your change in any way puts that protection in doubt, it has the same debilitating effect on online platform services as an actual repeal would have.
This is a key point we keep coming back to, including in suggesting that Section 230 operates more as a rule of civil procedure than any sort of affirmative subsidy (as it is often mistakenly accused of being). Section 230 does not do much that the First Amendment would not itself do to protect platforms. But the crippling expense of having to assert one's First Amendment rights in court, and potentially at an unimaginable scale given all the user-generated content Internet platforms facilitate, means that this First Amendment protection is functionally illusory if there's not a mechanism to get platforms out of litigation early and cheaply. It is the job of Section 230 to make sure they can, and that they won't have to worry about being bled dry in legal costs having to defend themselves even where, legally, they have a defense.
Without Section 230 their only choice would be to not engage in the activity that Section 230 explicitly encourages: intermediating third party content, and moderating it. If they don't moderate it then their services may become a cesspool, but if moderating exposes them to potentially being bankrupted in litigation (or even, as in the case of FOSTA, potentially prosecuted), then they won't moderate. And as for intermediating content, if they can get into legal trouble for allowing the wrong content, then they will either host less user-generated content, or not be in the business of hosting any user content at all. Because if they don't make these choices, they set themselves up to be crushed by litigation.
Which is why it is not even the issue of ultimate liability that makes lawsuits such an existential threat to an Internet platform. It's just as bad if the lawsuit that crushes them is over whether they were entitled to the statutory liability protection needed to avoid the lawsuit entirely. And we know lawsuits can have that annihilating effect when platforms are forced to litigate these questions. One conspicuous example is Veoh Networks, a video-hosting service that today should still be a competitor to YouTube. But it isn't a competitor because it is no longer a going concern. It was obliterated by the costs of defending its entitlement to assert the more conditional DMCA safe harbor defense, even though it won! The Ninth Circuit found the platform should have been protected. But by then it was too late; the company had been run out of business, and YouTube lost a competitor that, today, the marketplace still misses.
It would therefore be foolhardy and antithetical to lawmakers' professed interest in having a diverse ecosystem of Internet services were they to do anything to make Section 230 similarly conditional, thereby risking even further market consolidation than we already have. But that's the terrible future that all these proposals tempt.
More specifically, here's why each type of proposal is so infirm:
Liability carve-outs. One way lawmakers propose to change Section 230 is to deny its protection to specific forms of liability that may arise in user content. A variety of these liability carve-outs have been proposed, and all require further scrutiny. For instance, one popular carve-out with lawmakers is trying to make Section 230 useless against claims of liability for posts that allegedly violate anti-discrimination laws. But while at first glance such a carve-out may seem innocuous, we know that it's not. And one way it's not is because people eager to discriminate have shown themselves keen to try to force platforms to help them do it, including by claiming that anti-discrimination laws serve to protect their own efforts to discriminate. So far they have largely been unable to conscript platforms into enabling their hate, but if Section 230 no longer protects platforms from these forms of liability, then racists will finally be able to succeed by exploiting that gap.
These carve-outs also run the risk of making it harder for people who have been discriminated against to find a place to speak out about it, since it will force platforms to be less willing to offer space to speech that they might find themselves forced to defend, because even if the speech were defensible, just having to answer for it can be ruinous for the platform. We know that platforms will feel forced to turn away all sorts of worthy and lawful speech if that's what they need to do to protect themselves, because we've seen this dynamic play out as a result of the few carve-outs Section 230 has had from the start. For example, if the thing wrong with the user expression was that it implicated an intellectual property right, then Section 230 didn't protect the platform from liability for their users' content. Now, it turns out that platforms have some liability protection via the DMCA, but this protection is weaker and more conditional than Section 230, which is why we see so much Swiss cheese online, with videos and other content so often removed – even in cases where they were not actually infringing – because taking it down is the only way platforms can avoid trouble and not run the risk of going the way of Veoh Networks themselves.
Such an outcome is not good for encouraging free expression online, which was a main driver behind passing Section 230 originally, and it isn't even good for the people these carve-outs were ostensibly intended to help, as we saw with FOSTA, an additional liability carve-out added more recently. With FOSTA, instead of protecting people from sexual exploitation, it led to platforms cutting off their access, which drove them into the streets, where they got hurt or killed. And, of course, it also led to other perfectly lawful content disappearing from the Internet, like online dating and massage therapy ads, since FOSTA had made it impossibly risky for the platforms to continue to facilitate it.
It's already a big problem that there are even just these liability carve-outs. If Section 230 were to be changed in any way, it should be changed to remove them. But in any case, we certainly shouldn't be making any more if Section 230 is still to maintain any utility in protecting the platforms we need to facilitate online user expression.
Transactional speech carve-outs. As described above, one way lawmakers are proposing to change Section 230 is to carve out certain types of liability that might attach to user-generated content. Another way is to try to carve out certain types of user expression itself. And one specific type of user expression in lawmakers' crosshairs (and also some courts') is transactional speech.
The problem with this invented exception to Section 230 is that transactional speech is still speech. "I have a home to rent" is speech, regardless of whether it appears on a specialized platform that only hosts such offers, or on a more general-purpose platform like Craigslist or even Twitter, where such posts are just one of many kinds of user expression enabled.
Lawmakers seem to be getting befuddled by the fact that some of the more specialized platforms may earn their money through a share of any consummated transaction their user expression might lead to, as if this form of monetization were somehow meaningfully distinct from any other monetization model, or otherwise waived their First Amendment right to do what basically amounts to moderating speech to the point where it is the only type of user content they allow. And it is this apparent befuddlement that has led lawmakers to attempt to tie Section 230 protection to particular monetization models, and even go so far as to eliminate it for certain ones.
Even if these proposals were carefully drafted, they would only end up chilling e-commerce by forcing platforms to use less-viable monetization models. But what's worse is that the current proposals are not being carefully drafted, and so we see bills threatening the Section 230 protection of any platform with any sort of profit model. Which, naturally, they all need to have in some way. After all, even non-profit platforms need some sort of income stream to keep the lights on, but proposals like these threaten to make it all but impossible for any platform to have the money it needs to operate.
Mandatory transparency report demands. As we've discussed before, it's good for platforms to try to be candid about their moderation decisions and especially about what pressures forced them to make these decisions, like subpoenas and takedown demands, because it helps highlight when these instruments are being abused. Such reports are therefore a good thing to encourage.
But encouragement is one thing; requiring them is another, and that's what certain proposals try to do by conditioning Section 230 protection on the publication of these reports. And they are all a problem. Making transparency reports mandatory is an unconstitutional form of compelled speech. Platforms have the First Amendment right to be arbitrary in their moderation practices. We may prefer them to make more reasoned and principled decisions, but it is their right not to. But they can't enjoy that right if they are forced to explain every decision they've made. Even if they wanted to, it may be impossible, because content moderation happens at scale, which inherently means it will never be perfect, and it also may be ill-advised to be fully transparent because it teaches bad actors how to game their systems.
Obviously a platform could still refuse to produce the reports as these bills would prescribe. But if that decision risks the statutory protection the platform depends on to survive, then it is not really much of a decision. It finds itself compelled to speak in the way that the government requires, which is not constitutional. And it also would end up impinging on that freedom to moderate, which both the First Amendment and Section 230 itself protect.
Mandatory moderation demands. But it isn't just transparency in moderation decisions that lawmakers want. Some legislators are running straight into the heart of the First Amendment and demanding that they get to dictate how platforms do any of their moderation, by conditioning Section 230 protection on the platforms making these decisions the way the government insists.
These proposals tend to come in two political flavors. While they are generally utterly irreconcilable – it would be impossible for any platform to satisfy both of them at the same time – they each boil down to the same unconstitutional demand.
Some of these proposals reflect legislative outrage at platforms for some of the moderation decisions they've made. Usually they condemn platforms for having removed certain speech or even banned certain speakers, regardless of how poorly those speakers behaved or how harmful the things they said. This condemnation leads lawmakers who favor these speakers and their speech to want to take away the platforms' right to make these sorts of moderation decisions by, again, conditioning Section 230 on their continuing to leave these speakers and speech up on their systems. The goal of these proposals is to set up a situation where it is impossible for platforms to exercise their First Amendment discretion to moderate, and possibly remove, this content, lest they lose the protection they depend on to exist. Which is not only unconstitutional compulsion, but also itself ultimately voids the part of Section 230 that expressly protects that discretion, since it's discretion that platforms can no longer exercise.
On the flip side, instead of conditioning Section 230 on not removing speakers or speech, other lawmakers would like to condition Section 230 on requiring platforms to kick off certain speakers and speech (and sometimes even the same ones that the other proposals are trying to keep up). Which is just as bad as the other set of proposals, for all the same reasons. Platforms have the constitutional right to make these moderation choices however they choose, and the government does not have the right, per the First Amendment, to force them to make them in any particular way. But if their critical Section 230 protection can be taken away if they don't moderate however the sitting political power demands at the moment, then that right has been infringed and Section 230 rendered a nullity.
Algorithmic display carve-outs. Algorithmic display has become a target for many lawmakers eager to take a run at Section 230. But as with every other proposed reform, changing Section 230 so that it no longer applies to platforms using algorithmic display would end up obliterating the statute for just about everyone. And it's not clear that lawmakers proposing these sorts of changes quite realize this inevitable impact.
And part of the problem seems to be that they don't really understand what an algorithm is, or how commonly they are used. They seem to regard it as something nefarious, but there's nothing about an algorithm that inherently is. The reality is that nearly every platform uses software in some way to handle the display of user-provided content, and algorithms are just the programming logic coded into the software giving it the instructions for how to display that content. Moreover, these instructions can even be as simple as telling the software to display the content chronologically, alphabetically, or in some other way the platform has decided to render content, a decision the First Amendment protects. After all, a bookstore can decide to shelve books however it wants, including in whatever order or with whatever prominence it wants. What these algorithms do is implement these sorts of shelving decisions, just as applied to the online content a platform displays.
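To make concrete just how mundane these instructions can be, here is a minimal, purely illustrative sketch of what a chronological or alphabetical feed amounts to in code. The Post type and function names are hypothetical, invented for this example rather than drawn from any real platform's software:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The "algorithm": show the newest posts first, much like a bookstore
    # shelving its recent arrivals at the front of the store.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def alphabetical_feed(posts: list[Post]) -> list[Post]:
    # Or shelve by author name instead; still just an algorithm.
    return sorted(posts, key=lambda p: p.author.lower())

That's it: a sorting instruction. More elaborate ranking logic differs in degree, not in kind, from this sort of shelving decision.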
If algorithms were to end up banned by making the Section 230 protection platforms need to host user-generated content contingent on not using them, it would make it impossible for platforms to actually render any of that content. They either couldn't do it technically, if they abided by the rule in order to keep their Section 230 protection, or couldn't do it legally, if that protection were withheld because they used such a display. Such a rule would also represent a fairly significant change to Section 230 itself by gutting the protection for moderation decisions, since those decisions are often implemented by an algorithm. In any case, conditioning Section 230 on not using algorithms is not a small change but one that would radically upend the statutory protection and all the online services it enables.
Terms of Service carve-outs. One idea (which is, oddly, backed by Facebook, even though it needs Section 230 to remain robust in order to defeat litigation like this) is that Section 230 protection should be contingent on platforms upholding their terms of service. As with these other proposals, this one is also a bad idea.
First of all, it negates the utility of Section 230 protection by making its applicability the subject of litigation. In other words, instead of being protected from litigation, platforms will now have to litigate whether they are protected from litigation, which means they aren't really protected at all.
It also fails to understand what terms of service are for. Platforms have them in order to limit their liability exposure. There's no way that they are going to write them in a way that has the effect of increasing their liability exposure.
The way they are generally written now is to put potentially wayward users on notice that if they don't act consistently with these terms of service, the service may be denied them. They aren't written to be affirmative promises to do anything, because they can't be affirmative promises – content moderation at scale is impossible to do perfectly, so it would be foolish for platforms to obligate themselves to do the impossible. But that's what changing Section 230 in this way would do: create this obligation if platforms are to retain their needed protection.
This pipe dream that some seem to have, that if only platforms did more moderation in accordance with their terms of service as currently written, everything would be perfect and wonderful, is hopelessly naïve. After all, nothing about how the Internet works is nearly that simple. Nevertheless, it is fine to want platforms to do as much as they can to meet the aspirational goals they've articulated in their terms of service. But changing Section 230 in this way won't lead them to do so. Instead it will make it legally unsafe for platforms to even articulate any such aspirations, and thus less likely to meet any of them. Which means that regulators won't get more of what they seek with this sort of proposal, but less.
Pre-emption elimination. One of the key clauses that makes Section 230 useful is its pre-emption provision. This is the provision that tells states that they cannot rejigger their own state laws in ways that would interfere with the operation of Section 230. The reason it is so important is because it gives the platforms the certainty they need to be able to benefit from the statute's protection. For it to be useful they need to know that it applies to them and that states have no ability to mess with it.
Unfortunately we are already seeing increasing problems with state and local jurisdictions attempting to ignore this pre-emption provision, and courts even sometimes letting them. But on top of that there are proposals in Congress to deliberately undermine it. In fact, with FOSTA, it already has been undermined, with individual state governments now able to impose liability directly on platforms for their user activity, no matter how arbitrarily.
We see with the moderation bills an illustration of what is wrong with states getting to mess with Section 230 and make its protection suddenly conditional – and therefore effectively useless. Given our current political polarization, the problem should be obvious: how is any platform going to reconcile the moderation demands of a Red State with the moderation demands of a Blue State? What is an inherently interstate Internet platform to do? Whose rules should they follow? What happens to them if they don't?
Congress put in the pre-emption provision because it knew that platforms could not possibly comply with all the myriad rules and regulations that every state, county, city, town, and locality might develop to impose liability on platforms. So it told them all to butt out. It's a mistake to now gut that provision if Section 230 is going to still have any value in making it safe for platforms to continue to do their job enabling the Internet.
Filed Under: carve outs, content moderation, free speech, intermediary liability, reform, repeal, section 230, transparency