EFF Highlights Stories Of Bad Content Moderation With New TOSsed Out Site
from the content-moderation-is-impossible dept
We've pointed out for many years that content moderation at scale isn't just hard; it's impossible to do well. At the scale of the giant platforms, there needs to be some level of moderation or the platforms and their users will be overwhelmed with spam or abuse. But at that scale there will also be a ton of mistakes -- both type I and type II errors (blocking content that shouldn't be blocked, and failing to block content that probably should be blocked). Some -- frankly dishonest -- people have used a few examples of particular content moderation choices to falsely claim that there is "anti-conservative bias" in content moderation. We've pointed out time and time again why the evidence doesn't support this, though many people insist it's true (and I predict they'll say so again in the comments, but, when asked for evidence, they'll fail to present any).
That's not to say that the big platforms' content moderation practices are done well. As we noted at the very beginning, doing it well is an impossible task. And it's important to document the mistakes. First, it helps get those mistakes corrected. Second, while it will remain impossible for the platforms to moderate perfectly, they can still get better and make fewer errors. Third, it can help people understand that errors happen not because someone hates you or has animus towards a political group or political belief, but because platforms fuck up moderation choices all the time. Fourth, it can help identify what patterns actually exist in these mistakes, rather than relying on moral panics. To that end, it's cool to see that the EFF has launched a new site, creatively dubbed TOSsed Out, to help track stories of bad content moderation practices.
Just looking through the stories already there should show you that bad content moderation choices certainly aren't limited to "conservatives," but do seem to disproportionately impact actually marginalized groups:
EFF is launching TOSsed Out with several examples of TOS enforcement gone wrong, and invites visitors to the site to submit more. In one example, a reverend couldn’t initially promote a Black Lives Matter-themed concert on Facebook, eventually discovering that using the words “Black Lives Matter” required additional review. Other examples include queer sex education videos being removed and automated filters on Tumblr flagging a law professor’s black and white drawings of design patents as adult content. Political speech is also impacted; one case highlights the removal of a parody account lampooning presidential candidate Beto O’Rourke.
“The current debates and complaints too often center on people with huge followings getting kicked off of social media because of their political ideologies. This threatens to miss the bigger problem. TOS enforcement by corporate gatekeepers far more often hits people without the resources and networks to fight back to regain their voice online,” said EFF Policy Analyst Katharine Trendacosta. “Platforms over-filter in response to pressure to weed out objectionable content, and a broad range of people at the margins are paying the price. With TOSsed Out, we seek to put pressure on those platforms to take a closer look at who is being actually hurt by their speech moderation rules, instead of just responding to the headline of the day.”
As the EFF notes, this is something of a reaction to the White House's intellectually dishonest database-building campaign asking people to report stories of social media bias against conservatives. Unlike the White House, which is focused on pulling together one-sided anecdotes it can misrepresent for political points, the TOSsed Out effort is a real opportunity to track what kinds of mistakes happen in content moderation and how to deal with them.
Filed Under: content moderation, mistakes, tossed out, type 1 errors, type 2 errors
Companies: eff