No, Internet Companies Do Not Get A 'Free Pass' Thanks To CDA 230
from the it's-just-not-true dept
There are many critics of CDA 230 these days, and there's a pretty wide range in the quality of their arguments. Law professor Danielle Citron is, for good reason, considered one of the more thoughtful critics of the law. And, to her credit, she actually does understand the law, what it enables, and what its wider impacts might be. Her scholarship tends to be thoughtful and careful as well, and she was just recently awarded a MacArthur "genius grant." That's why I find it frustrating that her recent presentations before Congress seem to miss the mark by a fairly wide margin. Earlier this year, we called out her testimony on "deep fakes" because she falsely suggested that internet platforms have "no incentive to address destructive deepfake content."
As we explained at the time, nothing could be further from the truth. The companies have been facing tremendous pressure from the media, the public, politicians, and (importantly) advertisers to clean up junk on their networks, or they risk losing users and revenue. The incentive is the desire not to have their platforms turn into complete garbage dumps.
Unfortunately, Citron is continuing to spread this misleading idea to Congress, as she did last week when the House held a hearing on Section 230. Citron's opening statement has also been posted to Slate under the title "Tech Companies Get a Free Pass on Moderating Content," with the subtitle "It's time to change that." Once again the premise is simply false. What is accurate is that platforms have a legal free pass to decide what level of moderation is appropriate, but the level they choose is very much driven by the concerns of all of the stakeholders mentioned above, with users and advertisers topping the list (for fairly obvious reasons).
It is misleading in the extreme to suggest that a lack of legal incentive somehow means no incentive at all. That view reflects a kind of blind faith in legal systems (and an ignorance of markets) that is disconnected from reality. Citron's piece does mention market power, but only to brush it away with an insistence that it couldn't possibly work:
The market is unlikely to turn this tide. Content that attracts likes, clicks, and shares generates advertising income or a cut of the profits in the case of online firearm marketplaces. Salacious, negative, and novel content is far more likely to attract eyeballs than vanilla, accurate stories. Market pressure is not enough, and it should not have to be.
We need legal reform to ensure that platforms wield their power responsibly.
Again, I do appreciate that Citron -- unlike many other 230 critics -- recognizes the benefits of 230 and that there are inevitable tradeoffs to modifying it. We just disagree on the scope of the downsides to the modifications she suggests. For example:
Another approach would be to adopt the proposal that Benjamin Wittes and I have suggested: to condition the immunity on reasonable content moderation practices rather than the free pass that exists today. If adopted, when the courts consider a motion to dismiss on Section 230 grounds, the question would not be whether a platform acted reasonably with regard to a specific use of the service. For instance, if Grindr is sued for negligently enabling criminal impersonation on its dating app, the legal shield would not depend upon whether the company did the right thing in the plaintiff’s case. Instead, the court would ask whether the provider or user of a service engaged in reasonable content moderation practices writ large with regard to unlawful uses that clearly create serious harm to others. Thus, in the hypothetical case of Grindr, the court would assess whether the dating app had reasonable processes in place to deal with obvious misuses of its service, including criminal impersonation. If Grindr could point to such reasonable practices, like having a functioning reporting system and the ability to ban IP addresses, then the lawsuit should be dismissed even if that system fell short in the plaintiff’s case.
This is the kind of idea that sounds good in theory, but would inevitably be a disaster in practice. A "reasonable" standard is extremely ambiguous until after a whole bunch of expensive case law is established, and would certainly create massive costs, especially for platforms that try to be creative or different in how they approach content moderation. And, then, once the case law is established, it will effectively "lock in" certain approaches, even if they are not the best or don't apply appropriately to other forms of content. If you're a smaller or up-and-coming platform and you want to avoid potentially company-destroying litigation, you are simply going to mimic the models of other companies that have already gone through the litigation gauntlet.
Indeed, such an approach inevitably favors the largest platforms -- the Googles, the Facebooks, etc. -- in multiple ways. First, they can afford to fight the expensive court battles. Second, that means they're the ones who effectively set the standards that the rest of the internet has to follow -- and they can set those standards in a manner that only they can afford.
This is not theory. We already have real-world examples of this, most obviously in the copyright space. Given all of the litigation around copyright, YouTube spent over $100 million developing ContentID to help it identify potentially infringing material. And now, at least in Europe (and, if industry lobbyists get their way, elsewhere as well), other companies are effectively being told that they need to implement such a filtering solution, with people pointing to Google's setup as evidence that it can be done. Except almost no one else can afford to spend $100 million building their own system, and while there may be a market for third-party services, those tend to be expensive and limiting. At the very least, such mandates shut off alternative paths to innovation and lock in the large company's choices.
What's funny is that Citron concludes her testimony by advocating for variety:
There is no one size fits all approach to responsible content moderation. Unlawful activity changes and morphs quickly online, and the strategies for addressing unlawful activity clearly causing serious harm should change as well. A reasonableness standard would adapt and evolve to address those changes.
Except that her own proposal would lead to the exact opposite of that. "Reasonableness" is a vague and unworkable standard that will lead to a one-size-fits-all approach, dominated and controlled by the largest players in the field. It's the wrong approach, driven by the false belief that CDA 230 provides some sort of "free ride" -- a belief that ignores the reality of public and market pressure, which has proven, time and time again, to be quite effective in creating change on these platforms.
Filed Under: cda 230, content moderation, danielle citron, free speech, market power, market pressure, reasonableness, section 230
Companies: facebook, google