Being Too Aggressive At Policing COVID-19 Disinformation Risks Breaking The Important 'Collective Sensemaking Process'
from the free-speech-matters dept
As the pandemic worsened earlier this year, many internet platforms sprang into action -- spurred by many calls to do exactly this -- to ramp up their content moderation to fight off "disinformation" attacks. And there is no doubt that plenty of sophisticated (and even nation-state) actors are engaging in nefarious disinformation campaigns around the pandemic. So there's good reason to be concerned about the spread of disinformation -- especially when disinformation can literally lead to death.
However, as I've been saying for quite some time now, content moderation at scale is impossible to do well. And that's true in the best of times. It gets much more complicated in the worst of times. As we noted a few weeks ago, various internet platforms said they'd be taking down information that contradicted what government officials were saying -- but that ran into some problems when those same officials turned out to be wrong.
How can we expect internet platforms to know what is "allowed" and what is "truthful" vs. what is "disinformation" when even the experts are working in the dark, trying to figure things out? And the natural process of figuring things out often involves initially suggesting things that turn out to be incorrect.
Professor Kate Starbird has a great piece over at Brookings detailing just how important social media is in helping people go through this "collective sensemaking process" -- and highlighting that if we're too aggressive in trying to take down "disinformation," much of the important work of figuring out what's actually going on can get lost as well. To be clear: this is not an excuse for doing nothing. Pretty much everyone agrees that some level of moderation is necessary to deal with outright dangerous disinformation. But as we've spent years detailing, these issues are very, very, very rarely black and white -- and we need that vast gray area to help everyone sort out what's going on.
First, state and platform censorship of certain content could dampen the collective sensemaking process that is vital both for information transfer and for coping psychologically with impacts of the event. Consider “social credit” policies in China that punish social media users for sharing what the Chinese government considers misinformation. These policies may limit the spread of rumors but likely also chill speech, reducing the spread of accurate information and content critical of the government.
Silencing voices that challenge official response organizations—and to some extent just privileging the messages of those organizations as “authoritative voices”—may not be as straightforwardly positive as it seems. During an event like this one, populations need to be able to criticize government responses and challenge government claims that conflict with other evidence. Without the early whistleblowers in Wuhan (who were accused of spreading false rumors), this outbreak may have spread further, faster. And in the U.S., there is emerging criticism of early recommendations by the CDC against wearing masks, which may have misled people about their efficacy. These are both cases where information that conflicted with the messages of official government response organizations—information that might have been labelled as “misinformation”—helped us get closer to the truth.
Information sharing is an innately human response to crisis events. Social media platforms enable people to come together and share information at unprecedented scales—and in new ways. In just a few years, these platforms have become part of the critical infrastructure of crisis response. Researchers of disaster sociology remind us that human behavior during crisis events is often pro-social, and recent studies document people using social media platforms in altruistic ways—for example, to find and share critical information and to organize volunteer efforts. These platforms have also become a place where people converge to make sense of the event and deal with its psychological and social impacts.
It always amazes me how so many people think that it's magically "easy" to determine what is and what is not disinformation. While there are some clear cases, most are not. And being too aggressive in taking down content actually risks creating an even bigger problem, because it can slow down people's ability to communicate accurate details and real solutions.
As Starbird notes:
Fine-grained policing of content may inadvertently silence the collective sensemaking process that is so vital for people coping with the pandemic’s complex impacts. By focusing on the influencers who select and mobilize content for political or reputational gain and not on the sensemakers who are trying to understand a frightening, dynamic situation, the platforms can significantly dampen the spread of misinformation while still providing a place for people to come together to cope with the impacts of the pandemic.
It's definitely an intriguing idea -- and it shifts some of the content moderation thinking: promote those who are part of the "sensemaking" process, rather than those just pushing content for reputational gain. But of course, figuring out who's who puts you right back at square one of content moderation.
Filed Under: collective sensemaking, covid-19, disinformation, figuring stuff out