from the it's-bizarre dept
We've already highlighted our concerns with Wired's big cover story on Section 230 (twice!). The very same day that came out, Wired UK published a piece by Prof. Danielle Citron entitled Fix Section 230 and hold tech companies to account. Citron's proposal was already highlighted in the cover story and now gets this separate venue. For what it's worth, Citron also spent a lot of energy insisting that the Wired cover story was the "definitive" article on 230 despite all of its flaws, and cheered on and liked tweets by people who mocked my arguments for why the article is just not very accurate.
Over the last few years, we've also responded multiple times to Citron's ideas, which are premised on a completely false narrative: that without a legal cudgel, websites have no incentive to keep their sites clean. That's clearly not true at all. If a company doesn't moderate, its site turns into a garbage dump of spam, harassment, and abuse. It loses users. It loses advertisers. It's just not good business. There are plenty of incentives to deal with bad stuff online -- though Citron never seems to recognize any of that, and insists, instead, that every site is a free-for-all because 230 shields it from legal liability. This latest piece in Wired is more of the same.
American lawmakers sided with the new inventors, young men (yup, all men) who made assurances that they could be trusted with our safety and privacy. In 1996, US Congress passed Section 230 of the Communications Decency Act, which secured a legal shield for online service providers that under- or over-filtered third-party content (so long as aggressive filtering was done in good faith). It meant that tech companies were immune to lawsuits when they removed, or didn’t remove, something a third party posted on their platforms.
That's only partially accurate. The "good faith" part only applies to one subsection of the law, and it is not the key reason why sites can be "aggressive" in filtering. That's the 1st Amendment. The immunity part is a procedural benefit that prevents abusive litigation by those seeking to waste the court's time by filing frivolous and expensive lawsuits -- or, more likely, threatening to do so if websites won't remove perfectly legal content they just don't like.
But, thanks to overbroad court rulings, Section 230 ended up creating a law-free zone. The US has the ignominious distinction of being a safe haven for firms hosting illegality.
This is just wrong. And Citron knows it's wrong, and it's getting to be embarrassing that she (and Wired) would repeat it. First, Section 230 has no impact on federal criminal law, so anything that violates federal criminal law is not shielded. Second, there are almost no websites that want "illegal" content on their sites. Most have teams of people who deal with such reports or court orders. Indeed, the willingness of websites to quickly remove any content deemed illegal has been abused by reputation management firms to get content removed via faked judicial orders, or through convoluted schemes involving fake defendants who "settle" lawsuits just to get a court order out of it.
This isn’t just an American pathology: Because the dominant social media companies are global, illegality they host impacts people worldwide. Indeed, safety ministers in South Korea and Australia tell me that they can help their citizens only so much, since abuse is often hosted on American platforms. Section 230 is to social media companies what the Cayman Islands has long been to the banking industry.
Over and over again we've seen the exact opposite of this, in two separate but important ways. First, many of these companies are still more than willing to geoblock content if it's found to violate the law in a certain country. However, much more importantly, the ability of US-based websites to keep content up means that threatened, marginalized, and oppressed people are actually able to get their messages out. Oppressive governments around the world, including in places like Turkey and India, have sought to force websites to take down content that merely criticizes those governments.
Any reasonable discussion of this needs to take that into account before demanding that "illegal" content must automatically be taken down. And when weighed against the fact that most companies don't want to host truly illegal and problematic content, most of the content that would actually be removed without those protections is exactly the kind of speech that authoritarians are trying to suppress, which is precisely what we should be concerned about.
Tech companies amplify damaging lies, violent conspiracies and privacy invasions because they generate copious ad revenue from the likes, clicks and shares. For them, the only risk is bad PR, which can be swiftly dispatched with removals, bans and apologies.
This is stated without anything backing it up and it's garbage. It's just not true. All of the big companies have policies in place against this content, and they (unlike Citron) recognize that it doesn't "generate copious ad revenue from the likes, clicks and shares" (likes, clicks and shares don't directly generate ad revenue...). These companies know that the long-term health of their platforms is actually important, and that losing advertisers and users because of garbage is a problem. This is why Facebook, Twitter, and YouTube all have teams working on these issues and trying to keep their platforms in better shape. They're certainly not perfect at it, but part of that is because of the insane scale of these platforms and the ever-changing nature of the problematic content on them.
I know that among a certain set it's taken on complete faith that no one at these companies cares, because they just "want clicks" and "clicks mean money." But that shows an astounding disconnect from what the people at these companies, including those setting and enforcing these policies, actually think. It's just ivory tower nonsense, completely disconnected from reality.
For individuals and society, the costs are steep. Lies about mask wearing during the Covid-19 pandemic led to a public health disaster and death.
Which spread via cable news more than on social media, and included statements from the President of the United States of America. That's not a Section 230 problem. It's also not something that changing Section 230 fixes. Most of those lies are still Constitutionally protected. Citron's problem seems to be with the 1st Amendment, not Section 230. And changing Section 230 doesn't change the 1st Amendment.
Plans hatched on social media led to an assault on the US Capitol. Online abuse, which disproportionately targets women and minorities, silences victims and upends careers and lives.
These are both true, but it's an incredible stretch to say that Section 230 was to blame for either of them. The largest platforms -- again, Facebook, YouTube, Twitter, etc. -- all have policies against this stuff. Did they do a bad job enforcing them? Perhaps! And we can talk about why that was, but I can assure you it's not because "230 lets us ignore this stuff." It's because it's not possible to magically make the internet perfect.
Social media companies generally have speech policies, but content moderation is often a shell game. Companies don’t explain in detail what their content policies mean, and accountability for their decisions isn’t really a thing. Safety and privacy aren’t profitable: taking down content and removing individuals deprives them of monetizable eyes and ears (and their data). Yes, that federal law gave us social media, but it came with a heavy price.
This is the only point at which Citron even comes close to acknowledging that the companies actually do make an effort to deal with this stuff, but then she immediately undermines it by pretending they don't really care about it. Which is just wrong. At best it could be argued that the platforms didn't care enough about it in 2010. But that was a century ago in internet years, and it's just wrong now. And "taking down content and removing individuals deprives them of monetizable eyes and ears (and their data)" only if those particular eyes and ears aren't scaring many more people off the platform. And every platform now recognizes that the trolls and problem makers do exactly that. Citron, incorrectly again, completely misses that these companies now recognize that not all users are equal, and that trolls and bad actors do more damage to the platform than they're worth in "data" and "ad revenue."
It feels like Citron's analysis is stuck in the 2010 internet. Things have changed. And part of the reason they've changed is that Section 230 has allowed companies to freely experiment with a variety of remedies and solutions to best deal with these problems.
Are there some websites that focus on and cater to the worst of the worst? There sure are. And if she wanted to focus on just those, that would be an interesting discussion. Instead, she points to the big guys, who are not acting the way she insists they are, demands that they do... what they already do, and insists we need to change the law to make that happen, while ignoring all of the actual consequences of such a legal change.
The time for having stars in our eyes about online connectivity is long over. Tech companies no longer need a subsidy to ensure future technological progress.
It's not a subsidy to properly apply legal liability to the actual problematic parties. It's a way of saving the judicial system from a ton of frivolous lawsuits, and of preventing censorship by proxy, in which aggrieved individuals silence critics by merely threatening third-party platforms with litigation.
If anything, that subsidy has impaired technological developments that are good for companies and society.
Uh, no. 230's flexibility has allowed a wide range of different platforms to try a variety of different approaches, and to seek out the approach that works best for each kind of community. Wikipedia's approach is different from Facebook's, which is different from Reddit's, which is different from Ravelry's, which is different from Github's. That's because 230 allows for these different approaches. And all of those companies are trying to come up with solutions that are "good for society" because if they don't, their sites turn into garbage dumps and people will seek out alternatives.
We should keep Section 230 – it provides an incentive for companies to engage in monitoring – but condition it on reasonable content moderation practices that address illegality causing harm. Companies would design their services and practices knowing that they might have to defend against lawsuits unless they could show that they earned the federal legal shield.
The issue with this is that if you have to first prove "reasonableness," you end up with a bunch of problems, especially for smaller sites. First, you massively increase the costs of getting sued (and, as such, you vastly increase the ability of mere threats to have their intended effect of taking down content that is perfectly legal). Second, in order to prove "reasonableness," many, many, many lawyers are going to say "just do what the biggest companies do," because that will have been shown in court to be reasonable. So, instead of getting more "technological developments that are good for companies and society," you get homogenization. You lose out on the innovation. You lose out on the experimentation with better models, because any new model is just a model that hasn't been tested in court yet and leaves you open to liability.
For the worst of the worst actors (such as sites devoted to nonconsensual porn or illegal gun sales), escaping liability would be tough. It’s hard to show that you have engaged in reasonable content moderation practices if hosting illegality is your business model.
This is... already true? Various nonconsensual porn sites have been taken down by both civil lawsuits and criminal prosecutions over the years. Companies entirely engaged in illegal practices still face federal criminal prosecution, which 230 does not protect against. On top of that, courts themselves have increasingly interpreted 230 to not shield those worst of the worst actors.
Over time, courts would rule on cases to show what reasonableness means, just as courts do in other areas of the law, from tort and data security to criminal procedure.
Right. And then anyone with a better idea of how to build a better community online would never dare to risk the liability that comes with having to first prove it "reasonable" in court.
In the near future, we would see social media companies adopt speech policies and practices that sideline, deemphasize or remove illegality rather than optimise to spread it.
Again, no mainstream site wants "illegality" on its platform. This entire article is premised on a lie, backed up with misdirection and a historical myth.
There wouldn’t be thousands of sites devoted to nonconsensual porn, deepfake sex videos and illegal gun sales. That world would be far safer and freer for women and minorities.
Except there's literally no evidence to support this argument. We know what happened in the copyright space, which doesn't have 230-like protections and does require "reasonable" policies for dealing with infringement. Infringement didn't go away. It remained. As for "women and minorities," it's hard to see how they're better protected in such a world. The entire #MeToo movement came about because people could tell their stories on social media. Under Citron's own proposal, websites would face massive threats of liability should a bunch of people start posting #MeToo-type stories. We've already seen astounding efforts by the jackasses who were exposed during #MeToo to silence their accusers. Citron's proposal would hand them another massive weapon.
The bigger issue here is that Citron refuses to recognize how (and how frequently) those in power abuse tools of content suppression to silence voices they don't want to hear. She's not wrong that there's a problem with a few narrow categories of content. And if she focused on how to deal with just the sites devoted to those, her argument would be a lot more worth engaging with. Instead, she's mixing up different ideas, supporting them with a fantasy version of what she seems to think Facebook does, and then insisting that if platforms just moderated the way she wants them to, it would all be unicorns and rainbows. That's not how it works.
Filed Under: content moderation, content suppression, danielle citron, section 230, tradeoffs