from the that's-not-how-it-works dept
It's getting absurd to have to do this every few weeks, but the media keeps publishing blatantly wrong things about Section 230 of the Communications Decency Act. You would think that after the NY Times had to roll back its own ridiculous headline blaming internet "hate speech" on Section 230, only to have to say "oops, actually, it's the 1st Amendment," other publications would take the time to get things straight and recognize that nearly everything they're complaining about is actually the 1st Amendment, not Section 230. Section 230 merely protects the 1st Amendment, by making it easier to get out of SLAPPish lawsuits earlier in the process.
Yet, Newsweek apparently did not take note, and agreed to publish an op-ed by the "Internet Accountability Project" (which is not accountable for its own funding), a group set up by former Republican Congressional staffers to deliberately push FUD and nonsense about successful internet companies. IAP has been targeting Section 230 pretty much from day one, and this Newsweek op-ed is par for the course in that nearly everything it claims is wrong, misleading, or just ridiculous. First it describes a few examples of both Facebook and Google moderating potentially dangerous misinformation campaigns about COVID-19 and claims that this is some sort of evil censorship:
For speech, in particular, the consequences of tech's growing power have become increasingly plain. In April, Facebook began removing content promoting anti-lockdown events. From ABC News to Politico, it was reported that this was being done at the behest of state governments. By the evening, Facebook clarified that it would only be removing from its platform content pertaining to groups whose activities violated governments' social distancing guidance.
In other words, Facebook is not removing protest content that is unlawful—but rather, content that goes against state government "advisories." That is, guidance without the force of law.
On YouTube, owned by Google, a video of two emergency room doctors publicly questioning the official narrative about COVID-19 reportedly had over five million views before YouTube snatched it down, stating the video "violat[ed] our Community Guidelines, including content that explicitly disputes the efficacy of local healthy [sic] authority recommended guidance on social distancing that may lead others to act against that guidance."
Note how this is framed -- as if removing disinformation or recommendations that will likely lead to people dying due to ignoring health advice regarding COVID-19 is somehow a bad thing. Meanwhile, of course, you have people at the other end of the political spectrum insisting (also incorrectly) that these platforms leave this kind of content up because of 230. So, which is it? Does 230 lead to platforms feeling comfortable leaving up dangerous nonsense or is it responsible for platforms taking down content?
This type of private company censorship is allowed because of Sec. 230—the part of the Communications Decency Act, passed by Congress in 1996, which provides Big Tech special legal treatment.
1. A private platform moderating nonsense is not censorship. 2. It's allowed because of the 1st Amendment. 3. It provides no special legal treatment to "big tech." Literally every single thing in this is outright false. Newsweek's fact checkers apparently are on vacation.
Sec. 230 was designed to allow platforms to remove "obscene, lewd, lascivious...or otherwise objectionable" content without being sued for doing so. But with the help of the courts, that targeted immunity privilege has been stretched to include, among other things, the enforcement of government-determined narratives against speech that would otherwise be constitutionally protected if it occurred outside these platforms and away from Sec. 230.
This is a neat little attempt to rewrite the law, the legislative history, and the judicial rulings on the law all in one shot. Section 230 was designed to enable the platforms to make their own decisions with regards to moderation, without facing legal liability for those choices. And, honestly, is IAP's position that encouraging people to ignore important health information should not be seen as "otherwise objectionable"? Because that sure would be fascinating. And whether or not the speech is "constitutionally protected" is meaningless. The limitations on regulating speech apply to the government, not a private company. Indeed, a regulatory regime that required the hosting of speech would raise its own constitutional questions -- which IAP refuses to acknowledge.
"For a private company to simply delete the promotion of protests it deems unacceptable is a remarkable expansion of its power over what was once a sacrosanct and constitutionally protected freedom," wrote one commentator recently. "Through these private companies...government officials can in effect restrict speech they are obligated to protect."
Notice that they say "one commentator" but don't note that it's a college student who apparently should go back and study the Constitution some more. A platform has every right to block the promotion of protests on its own platform. There are many other places where anyone who wants to protest can publicize their protests, whether those are smart and useful protests, or dangerous lunacy (as the protests in question were).
The companies and their advocates say this type of moderation is necessary to ensure free speech. But what does it mean for free speech when the major communications platforms like Facebook, YouTube and Google align themselves with the government to deem certain speech, which would otherwise be constitutionally protected, too dangerous?
It means that every platform gets to choose. Some align themselves with government reports highlighting that promoting a lack of social distancing might kill thousands of people. Some align themselves with ridiculous people who think that killing people and spreading a disease is their god-given right. That's the nature of free enterprise. I thought Republicans used to support that kinda thing? What happened?
Rhetorically, it's an odd turn, as the antidote to bad or misleading speech is actually more speech—the robust debate and engagement that shapes ideas, rebuts other ideas and reforms narratives. This is especially true in the sciences, whose entire history is an arc composed of overturning ideas once considered "settled."
It's true that the antidote to bad or misleading speech is more speech, but that doesn't mean that every platform must host that speech. People have other places they can speak, and they can build (and have built) their own spaces as well.
The most sinister thing about these decisions, however, is that when it comes to deciding what COVID-19 information we get to see, Big Tech companies have now become a de facto arm of the state.
No. They haven't. Of course, if you get rid of CDA 230, then actually they might. Because then they may face liability for not pulling down enough content -- and if someone does something preternaturally stupid, like encouraging people to gather in large numbers without face masks and without social distancing in the midst of a deadly pandemic, then, you know, a platform without CDA 230 protections might face some amount of liability for the chaos that produces.
That's part of what's so ridiculously frustrating here. If they got what they "want," it would make what they claim is the problem significantly worse. Of course, the reality is they don't actually want to get rid of Section 230. They just want to cause trouble for Google, because their funders hate that they've been out-innovated by internet companies.
Far from private companies merely initiating their own content moderation, these platforms are now acting as enforcement wings for government-approved information campaigns. In Facebook's case, this means removing the ability of citizens to organize in a constitutionally protected activity—an activity that is not illegal, but is merely advised against.
Again, they're not. You want to know how I know? Because all of these platforms leave up tons of content that involves these kinds of protests and gatherings. Just because they pulled down some of the most ridiculous and most dangerous -- because they don't want people to fucking die -- does not mean that they are "enforcement wings for government-approved information campaigns." And again, without 230, if someone died because they went to one of these protests, how quickly do you think Facebook would get sued because it hosted the details of the protest?
Sec. 230—the sweetheart deal that allows tech to censor content without the same consequences as, say, the newspaper industry—is, at its root, a congressionally authorized tech industry subsidy. It has allowed the industry to grow from dorm rooms and garage basements to billion-dollar omnipresent mega corporations.
Section 230 was not a sweetheart deal, and it does not apply to just tech. Newspapers rely on Section 230 too. It is not a "subsidy"; it's about the correct application of liability to avoid nonsense lawsuits (also, weren't Republicans historically against frivolous lawsuits and ambulance chasing? Without 230 you'd open up a huge rush of tort cases).
And in that regard, Sec. 230 is not unlike the host of other subsidies the government provides to various industries: expensing provisions given to manufacturers, the federal assistance we provide farmers to purchase crop insurance and so forth. These industries will tell you that these provisions are vital to their survival—and they might be right. But that hasn't prevented them from being routinely debated, updated or otherwise modified.
But it's not even remotely a subsidy. It's just making clear who has the liability. That's it. That's all it is. And, again, the 1st Amendment protects the activities that this article mistakenly thinks 230 protects. All 230 does is make a procedural move to help identify who should be liable for what. That's it.
If COVID-19 has proven anything about Big Tech, it is that these companies have accumulated orders of magnitude more power than politicians could have imagined just a decade ago. That power has benefited us, but it has also threatened us. An intellectually honest assessment of its ramifications is now desperately needed.
And one thing you will never get from the Internet Accountability Project is even a sliver of "an intellectually honest assessment." It is a bad faith player, funded by bad faith companies, to do bad faith things -- like publishing this utter nonsense piece of pure fiction in a mainstream publication. Newsweek should not have taken the bait.
Oh, but let me save the final nonsense for last. Right beneath this post, Newsweek is advertising medical nonsense and miracle treatments.
Of course, Newsweek isn't liable for pushing people to that medical snake oil... because of Section 230's protections. Which it has now apparently decided it's okay to get rid of. Good one, Newsweek. Good one.
Filed Under: 1st amendment, cda 230, liability, section 230
Companies: facebook, google, internet accountability project