Content Moderation At Scale Is Impossible To Do Well: Series About Antisemitism Removed By Instagram For Being Antisemitic
from the hate-vs.-reporting-on-hate dept
I've written a lot about the impossibility of doing content moderation well at scale, and there are lots of reasons for that. But one of the most common is the difficulty both AI and human reviewers have in distinguishing hateful/trollish/harassing behavior from reporting on that behavior. We've pointed this out over and over again in a variety of contexts. One classic example is social media websites taking down posts from human rights activists highlighting war crimes, claiming they're "terrorist content." Another is the many examples of people talking on social media about racism, and the racist attacks they've experienced, having their accounts and posts shut down over claims of racism.
And now we have another similar example. A new video series about antisemitism posted its trailer to Instagram... where it was removed for violating community guidelines.
Thank you Instagram for proving the point @Yair_Rosenberg and we at @JewishUnpacked are trying to make. Yes, #antisemitism is bad, BUT educating people about it isn't. You taking our video down shows that this work is more important than ever. https://t.co/PVCItAm1pc pic.twitter.com/dBg3izUGfo
— Johnny Kunza (@johnkunza) August 4, 2021
You can see the video on YouTube, and it's not difficult to figure out how this happened. The message from Instagram says the trailer violates the company's community guidelines against "violence or dangerous organizations." The video in question, all about antisemitism, does include some Nazi imagery, obviously to make the point that in its extreme form, antisemitism can lead to the murder of Jews. But Instagram has banned all Nazi content, in part in response to those who complained about antisemitism on Instagram.
And that leads to a dilemma. If you ban Nazi content, you also have to realize that content about Nazis (posted to criticize them and to warn about what they might do) may get banned along with it. And, again, this isn't new. Earlier this year we had a case study on how a similar ban at YouTube took down historical and educational videos about the Holocaust.
The point here is that there is no easy answer. You can say that it should be obvious to anyone reviewing this trailer (which highlights how bad antisemitism is) that it's different from actual antisemitism, but that's a lot harder in practice at massive scale. First you need reviewers who actually understand the difference, and then you need to write rules simple enough to go out to thousands of moderators while still making that difference explicit. You also need to give reviewers enough time to actually understand the context, which is effectively impossible given the volume of content that needs to be reviewed. In such situations, the "simpler" version of the rule is often what gets written: "No Nazi content." That's clear and scalable, but it leads to these kinds of "mistakes."
Filed Under: antisemitism, community standards, content moderation, hate speech, nazi content, scale
Companies: facebook, instagram