Stop Using Content Moderation Demands As An Effort To Hide The Government's Social Policy Failures
from the look!-squirrel! dept
We've been seeing over and over again lately that politicians (and, unfortunately, the media) frequently blame social media and content moderation for larger societal problems that the government itself has never been able to solve.
In other words, what's really happening is that the supposedly "bad stuff" that shows up on social media is really indicative of societal failures regarding education, mental health services, criminal law, social safety nets, and much, much more. All social media is really doing is putting a spotlight on those failures. And the demands from politicians and the media for content moderation to "solve" these issues are often really about sweeping those problems under the rug by hiding them from public view, rather than looking for ways to tackle those much larger, much more difficult societal questions.
Over in Wired, Harvard law lecturer (and former Techdirt podcast guest) Evelyn Douek has one of the best articles I've seen making this point. First, she describes how -- contrary to the narrative that still holds among some that social media companies resist doing any moderation at all -- these days they're much more aggressive in taking down disinformation:
Misinformation about the pandemic was supposed to be the easy case. In response to the global emergency, the platforms were finally moving fast and cracking down on Covid-19 misinformation in a way that they never had before. As a result, there was about a week in March 2020 when social media platforms, battered by almost unrelenting criticism for the last four years, were good again. “Who knew the techlash was susceptible to a virus?” Steven Levy asked.
Such was the enthusiasm for these actions that there were immediately calls for them to do the same thing all the time for all misinformation—not just medical. Initially, platforms insisted that Covid misinformation was different. The likelihood of harm arising from it was higher, they argued. Plus, there were clear authorities they could point to, like the World Health Organization, that could tell them what was right and wrong.
But the line did not hold for long. Platforms have only continued to impose more and more guardrails on what people can say on their services. They stuck labels all over the place during the US 2020 election. They stepped in with unusual swiftness to downrank or block a story from a major media outlet, the New York Post, about Hunter Biden. They deplatformed Holocaust deniers, QAnon believers, and, eventually, the sitting President of the United States himself.
But, as the article notes -- especially on topics where we're learning new things every day, and where early ideas and thinking may later prove incorrect -- relying on content moderation to deal with these issues might not be that great an idea.
The chaos of 2020 shattered any notion that there’s a clear category of harmful “misinformation” that a few powerful people in Silicon Valley must take down, or even that there’s a way to distinguish health from politics. Last week, for instance, Facebook reversed its policy and said it will no longer take down posts claiming Covid-19 is human-made or manufactured. Only a few months ago The New York Times had cited belief in this “baseless” theory as evidence that social media had contributed to an ongoing “reality crisis.” There was a similar back-and-forth with masks. Early in the pandemic, Facebook banned ads for them on the site. This lasted until June, when the WHO finally changed its guidance to recommend wearing masks, despite many experts advising it much earlier. The good news, I guess, is they weren’t that effective at enforcing the ban in the first place. (At the time, however, this was not seen as good news.)
She separately highlights how these efforts in the US are being used as an excuse for authoritarian governments around the globe to ramp up actual censorship and suppression of activists and dissident voices.
But the key point is that sweeping larger societal issues under the rug by hiding them doesn't solve the underlying issues.
“Just delete things” removes content but not its cause. It’s tempting to think that we can content-moderate society to a happier and healthier information environment or that the worst social disruptions of the past few years could have been prevented if more posts had just been taken down. But fixing social and political problems will be much harder work than tapping out a few lines of better code. Platforms will never be able to fully compensate for other institutional failures.
There's a lot more in Douek's write-up, but I think it's important for anyone debating the content moderation space to read this piece and to at least account for it in these debates and discussions. It is not saying not to do any moderation. It is not saying that we should throw up our hands and do nothing. But it is making the very, very important point that content moderation alone does not solve underlying social issues, and yet so much of the focus on questions around social media and content moderation is really a discussion about those failures. And we're not going to make progress on any of these issues if people don't understand which is the symptom and which is the actual disease.
Filed Under: content moderation, government failures, institutional failures, societal issues