If You're Complaining About COVID-19 Misinformation Online AND About Section 230, You're Doing It Wrong
from the section-230-is-helping-quell-disinformation dept
I remain perplexed by people who insist that internet platforms "need to do more" to fight disinformation while simultaneously insisting that we need to "get rid of Section 230." This almost always comes from people who don't understand content moderation or Section 230 -- or who think that, because of Section 230's liability protections, sites have no incentive to moderate content on their platforms. Of course platforms have tons of incentive to moderate: much of it is social pressure, but there's also the fact that if they're just filled with garbage they'll lose users (and advertisers).
But a key point in all of these debates about content moderation with regard to misinformation around COVID-19 is that, for it to work in any way, there needs to be flexibility -- otherwise it's going to be a total mess. And what gives internet platforms that flexibility? Why, it's that very same Section 230. Because Section 230 makes it explicit that sites don't face liability for their moderation choices, it enables them to ramp up efforts -- as they have -- to fight off misinformation without fear of facing liability for making the "wrong" choices.
Without Section 230, these businesses would have had to vet every single post’s truthfulness and legality. Not only would that have bogged down businesses’ response, it also would have been impossible — we knew little about coronavirus when it first hit and don’t know much more today.
Put simply, Section 230 helps make the internet safer, and that, in turn, has let us all rely on it to keep life moving, even while we’re stuck inside.
I'd argue it's even more stark than that article lays out. Not only did we know little about the coronavirus at the beginning, we still don't know very much, and many of the early messages from official sources turned out to be wrong. Indeed, one of the ways we've zeroed in on more accurate information is by being able to discuss ideas freely and converge on what makes the most sense.
This whole process involves experimentation on both sides of the market. Platforms get to experiment with different methods and ideas for content moderation, while users get to discuss and debate different ideas about COVID-19. But both of those only happen with the structural balance provided by Section 230: platforms can figure out what works best to enable reasonable debate and move people toward more accurate analysis -- while minimizing the impact of blatantly wrong information, misinformation, and disinformation -- and users can discuss and debate ideas to get closer to the truth themselves. Without that balance, you create massive structural problems that prevent most of this from happening.
Without 230, companies face the classic moderator's dilemma. Doing no moderation at all is one option -- but then disinformation flows freely, and companies might face liability for it. Alternatively, they could moderate very aggressively and pull down lots of content -- but that might include good and useful information. For example, the discussion over whether people should wear masks as the pandemic began was all over the place, with the WHO and the CDC initially urging people not to wear masks. However, in part because of widespread discussions and evidence presented on social media, the narrative shifted, and eventually the CDC and WHO came around to recommending masks.
Without 230, what would a platform do regarding the mask discussion? Someone at the company could unilaterally decide that masks are a good thing -- but then face outrage from those who sided with the WHO and CDC, who would argue that the platform was spreading dangerous misinformation that could lead to hoarding and fewer masks for medical professionals. That alone might create lawsuits (in the absence of 230). Or the platform could follow what the WHO and CDC said initially... and then might feel obligated to silence and delete the conversations that argued, persuasively, why masks actually are valuable. That would create all sorts of problems as well. At the same time, there is actual misinformation about what types of masks to wear and how to wear them -- and there are strong arguments for why platforms should be able to moderate that.
But all of that becomes much trickier, and much riskier, without Section 230 -- and the greatest likelihood is that platforms would seek to avoid liability, which would mean censoring plenty of good and important information (such as how to make or wear masks and why they're so important). It's Section 230 that has enabled both the platforms' ability to adjust their moderation techniques and the important public discussions that allow people to share, debate, and discuss as we figure out what is going on and how best to deal with it.
Filed Under: cda 230, content moderation, content moderation at scale, covid-19, disinformation, misinformation, section 230