Google Moderation Team Decides My Piece About The Impossible Nature Of Content Moderation Is 'Dangerous Or Derogatory'
from the thanks-for-proving-my-point dept
Well, well. A few weeks back I had a big post all about the impossibility of moderating large content platforms at scale. It got a fair bit of attention, and has kicked off multiple discussions that are continuing to this day. However, earlier this week, it appears that Google's ad content moderation team decided to help prove my point about the impossibility of moderating content at scale when... it decided that post was somehow "dangerous or derogatory."
If you can't read that, it says that Google has restricted serving ads on that page because it has determined that the content is "dangerous or derogatory." And then it lists the ways in which content might qualify as "dangerous or derogatory":
Dangerous or derogatory content
As stated in our program policies, Google ads may not be placed on pages that contain content that:
- Threatens or advocates for harm on oneself or others;
- Harasses, intimidates or bullies an individual or groups of individuals;
- Incites hatred against, promotes discrimination of, or disparages an individual or group on the basis of their race or ethnic origin, religion, disability, age, nationality, veteran status, sexual orientation, gender, gender identity, or other characteristic that is associated with systemic discrimination or marginalization.
Huh. I've gone back and read the post again, and I don't see how it can possibly fall into any of those categories. Now, if I were a conspiracy theory nutcase, I'd perhaps argue that this was somehow Google trying to "silence" me for calling out its awful moderation practices. Of course, the reality is almost certainly a lot more mundane. Just as the post describes, doing this kind of content moderation at scale is impossible to do well. That doesn't mean they can't do better -- they can (and the post has some suggestions). But, at this kind of scale, tons of mistakes are going to be made. Even if it's just a fraction of a percent of content that is wrongly "moderated," at that scale it still means millions of pieces of legitimate content incorrectly flagged. It's not a conspiracy to silence me (or anyone). It's just the nature of how impossible this task is.
This is also not the first or second time Google's weird morality police have dinged us over posts that clearly do not violate any of their policies (at this point, we get these kinds of notices every few months, and we appeal, and the appeal always gets rejected without explanation). I'm just writing about this one because it's so... fitting.
The fact is these kinds of things happen all the time. Hell, there was a similar story just a week ago, concerning Google refusing to put ads on a trailer for the documentary film The Cleaners... a film all about the impossibility of content moderation at scale. Coincidentally, I had just been invited to a screening of The Cleaners a week earlier, and it's a truly fantastic documentary that does an amazing job not just highlighting the people who sit in cubicles in the Philippines deciding what content to leave up and what to take down, but also laying out the impossibility of that task, helping people understand the very subjective nature of these decisions, and showing how much gray area is left in the eye of the beholder (in this case, relatively low-wage contract employees in the Philippines).
So those are two examples of moderators deciding (obviously incorrectly) to moderate content that is itself about the impossibility of moderating content well. While it does serve to reinforce the point of just how impossible this kind of moderation is, it's pretty obviously done without intent or political bias. It's just that when you have someone who has five seconds to make a decision, and they have to skim a ton of content without context, they're going to make mistakes. Lots of them.
Now, let's see if this post gets moderated too...
Filed Under: adwords, content moderation, dangerous, derogatory
Companies: google