Content Moderation At Scale Is Impossible: Recent Examples Of Misunderstanding Context
from the masnick's-law dept
I've said over and over and over again that content moderation at scale is impossible to do well, and one of the biggest reasons for this is that it's difficult to understand context. Indeed, I've heard that some content moderation policies explicitly suggest that moderators not try to take context into account. This is not because companies don't think context is important, but because they recognize that understanding the context behind every bit of content would make the process so slow as to be absolutely useless. Yes, it would be great if every content moderator had the time and resources to understand the context of every tweet or Facebook post, but the reality is that we'd then need to employ basically every human being alive to research context. Low-level content moderators tend to have only a few seconds to make decisions on content, or the entire process slows to a crawl, and then the media will slam those companies for leaving "dangerous" content up too long. So tradeoffs are made, and often that means understanding context is a casualty of the process.

First up, a tabletop roleplaying game publisher ran into Facebook's and Instagram's ad filters:
Pro Tip for #TTRPG marketing: if you wanna do a Facebook or Instagram promotion DO NOT put the word "supplement" anywhere in the ad description or their filters will give you constant headaches by blocking and disabling your business account.
No, you can not DRINK the RPG...
— 𝙳𝚆 𝙳𝚊𝚐𝚘𝚗 is doing a ZineQuest (@DW_Dagon) February 23, 2021
The confusion here is not hard to figure out. First off, lots of roleplaying games have "supplements," or variations/adjustments/add-ons. However... "supplements" also refers to dietary supplements, a market filled with often highly questionable products that people put into their bodies with promises of... well... all sorts of things. And, making matters even worse (as I just discovered!), there's actually a dietary supplement called "RPG," so the Google searches are, well... complex.
And, to make matters even more complex, you may recall that a decade ago, the Justice Department got Google to hand over $500 million for displaying ads for non-approved drugs and supplements. So, I'm sure that both Facebook and Google are extra sensitive to any advertisement that might be pitching sketchy drugs or supplements -- and thus the rules are designed to be overly aggressive. To them, the worst case is that you shut down an account advertising a roleplaying game... which is better than having the DOJ show up and confiscate $500 million.
That's not to say this is a good result -- but to explain what likely happened on the back end.
Next up, we have Kate Bevan, who wrote about another content moderation fail on Facebook:
Well done, Facebook. Someone commenting "beautiful puss" on a picture of a cat in a cat group is not "violating community standards", you absolute thundering planks 🙄🙄
— Kate Yes, I've seen the viral cat thing Bevan (@katebevan) February 23, 2021
Again, the context here seems fairly obvious. Commenting on a picture of a cat and saying "beautiful puss" is... um... referring to a cat. Not anything else. But, again, in these days when companies are getting sued for all kinds of "bad" things online, you can see why a content moderator who has to make a decision in seconds might get this one wrong.
Finally, we've got one that hits a little closer to home. Many of you may be familiar with one of our prolific commenters, That Anonymous Coward (or TAC, for short), who is also a prolific Twitter user. Or was, until about a week ago, when his account got suspended. Why did his account get suspended? Because of a reply he made to me! Following the news that Facebook had blocked news links in Australia, Chris Messina tweeted that angry Australians were leaving bad reviews of Facebook's app in the Australian Apple iOS App Store. And I tweeted, wondering if anyone actually looks at the reviews for apps like Facebook:
Does anyone actually sign up (or not sign up) for Facebook... based on the appstore reviews?!? https://t.co/IuikumdWeR
— Mike Masnick (@mmasnick) February 20, 2021
If you look below that tweet, there are a few replies, including a notice that one of them had been removed for violating Twitter's rules. What kind of reply could have possibly violated the rules? Well, here is the offending tweet from TAC:
Reading that in context, it's not at all difficult to see that TAC is mocking people who believe all of those nonsense conspiracy theories. But, right now, Twitter is extra sensitive to conspiracy theories on its site, in part because reporters are highlighting each and every "Q" believer who is allowed to spout nonsense, as if it's a moral failing on the part of the companies themselves. So it's perhaps not surprising, even if ridiculous in context, for Twitter to decide that a tweet like that violates its rules and to demand that TAC remove it, claiming it violated the rules against spreading "misleading and potentially harmful information related to COVID-19."
In this case, TAC appealed... and was (surprisingly quickly) told that his case had been reviewed... and the appeal was rejected.
That feels a bit ridiculous, but it again highlights the impossibility of content moderation at scale. Technically, TAC's tweet repeats the kinds of disinformation that social media websites are getting attacked over. Of course, it should seem fairly obvious to anyone reading the tweet that he's mocking the people who make those false conspiracy theory claims. But how do you write a policy that says "unless they're referring to it sarcastically"? Because once you have that exception in place, you get to a point where terrible, terrible people say terrible, terrible things, and then, when called on it, claim they were just being "sarcastic."
Indeed, when the "style guide" for the Nazi propaganda site "The Daily Stormer" was leaked, it explicitly told writers to write horrific things with plausible deniability: "it should come across as half-joking." And later in the same document: "The unindoctrinated should not be able to tell if we are joking or not."
That's not to excuse the decisions made here, but to explain how we get to this kind of absurd result. It seems obvious to me that all three of these cases are "mistakes" in content moderation, but they're the kind of mistakes that get made when you have to moderate millions of pieces of content per day, each decision made in a matter of seconds, or else governments around the world threaten to impose draconian rules or massive fines on you.
Filed Under: content moderation, content moderation at scale, masnick's impossibility law
Companies: facebook, twitter