Moderation Of Racist Content Leads To Removal Of Non-Racist Pages & Posts (2020)
from the moderation-mistakes dept
Summary: Social media platforms are constantly seeking to remove racist, bigoted, or hateful content. Unfortunately, these efforts can cause unintended collateral damage to users who share surface similarities with hate groups, even when those users take a firmly anti-racist stance.
A recent attempt by Facebook to remove hundreds of pages associated with bigoted groups resulted in the unintended deactivation of accounts belonging to historically anti-racist groups and public figures.
The unintentional removal of non-racist pages occurred shortly after Facebook engaged in a large-scale deletion of accounts linked to white supremacists, as reported by OneZero:
Hundreds of anti-racist skinheads are reporting that Facebook has purged their accounts for allegedly violating its community standards. This week, members of ska, reggae, and SHARP (Skinheads Against Racial Prejudice) communities that oppose white supremacy are accusing the platform of wrongfully targeting them. Many believe that Facebook has mistakenly conflated their subculture with neo-Nazi groups because of the term “skinhead.”
The suspensions occurred days after Facebook removed 200 accounts connected to white supremacist groups and as Mark Zuckerberg continues to be scrutinized for his selective moderation of hate speech.
Dozens of Facebook users from around the world reported having their accounts locked or their pages disabled due to their association with the "skinhead" subculture, which dates back to the 1960s and predates the racist/fascist tendencies now commonly linked to the term.
Facebook’s policies have long forbidden the posting of racist or hateful content. Its ban on "hate speech" encompasses the white supremacist groups it targeted during this purge. The removals of accounts not linked to racism -- but linked to the term "skinhead" -- were accidental, presumably triggered by a keyword now commonly associated with hate groups.
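The OneZero reporting does not describe Facebook's actual tooling, but the pattern is consistent with keyword-based flagging that ignores surrounding context. A minimal sketch below (with hypothetical term lists, not Facebook's real rules) contrasts naive keyword matching with a crude context check, showing how a page for "Skinheads Against Racial Prejudice" gets swept up by the term alone:

```python
# Hypothetical sketch -- not Facebook's actual system. It contrasts naive
# keyword flagging with a simple context check, illustrating how anti-racist
# pages that mention "skinhead" can be caught by term matching alone.

FLAGGED_TERMS = {"skinhead"}

# Markers suggesting a page opposes (rather than promotes) hate,
# e.g. SHARP and the ska/reggae scenes mentioned in the article.
COUNTER_CONTEXT = {"sharp", "against racial prejudice", "anti-racist", "ska", "reggae"}

def naive_flag(page_text: str) -> bool:
    """Flag any page containing a targeted term, regardless of context."""
    text = page_text.lower()
    return any(term in text for term in FLAGGED_TERMS)

def context_aware_flag(page_text: str) -> bool:
    """Flag a page only if a targeted term appears without counter-context;
    pages with counter-context could instead be routed to human review."""
    text = page_text.lower()
    if not any(term in text for term in FLAGGED_TERMS):
        return False
    return not any(marker in text for marker in COUNTER_CONTEXT)

sharp_page = "SHARP: Skinheads Against Racial Prejudice -- ska and reggae shows weekly"
print(naive_flag(sharp_page))          # True  -- false positive: page removed
print(context_aware_flag(sharp_page))  # False -- survives the automated sweep
```

Even the context-aware variant is brittle (bad actors could stuff counter-context terms), which is why the questions below about human review, context, and appeals matter.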
Questions to consider:
- How should a site handle the removal of racist groups and content?
- Should a site use terms commonly associated with hate groups to search for content/accounts to remove?
- If certain terms are used to target accounts, should moderators be made aware of alternate uses that may not relate to hateful activity?
- Should moderators be asked to consider the context surrounding targeted terms when seeking to remove pages or content?
- Should Facebook provide users whose accounts are disabled with more information as to why this has happened? (Multiple users reported receiving nothing more than a blanket statement about pages/accounts "not following Community Standards.")
- If context or more information is provided, should Facebook allow users to remove the content (or challenge the moderation decision) before it disables their accounts or pages?
Filed Under: bias, case studies, content moderation, mistakes, racist speech, skinheads
Companies: facebook