Content Moderation Case Study: Talking About Racism On Social Media (2019)
from the what's-racist,-and-what's-a-discussion dept
Summary: As social media platforms take a more aggressive stance against racist, abusive, and hateful language, those efforts sometimes end up blocking conversations about race and racism itself. The elevated risk Black users face of having posts removed or accounts suspended when discussing racism has been referred to as "Facebooking while Black."
As covered in USA Today, the situations can become complicated quickly:
A post from poet Shawn William caught [Carolyn Wysinger’s] eye. "On the day that Trayvon would've turned 24, Liam Neeson is going on national talk shows trying to convince the world that he is not a racist." While promoting a revenge movie, the Hollywood actor confessed that decades earlier, after a female friend told him she'd been raped by a black man she could not identify, he'd roamed the streets hunting for black men to harm.
For Wysinger, an activist whose podcast The C-Dubb Show frequently explores anti-black racism, the troubling episode recalled the nation's dark history of lynching, when charges of sexual violence against a white woman were used to justify mob murders of black men.
"White men are so fragile," she fired off, sharing William's post with her friends, "and the mere presence of a black person challenges every single thing in them."
Facebook quickly deleted the post, claiming it violated the site's "hate speech" policies. Wysinger was also warned that attempting to repost the content would result in a 72-hour ban.
Under Facebook's rules, an attack on a group defined by a "protected characteristic" (such as race, gender, sexuality, or religion) violates its "hate speech" policies. Wysinger's post was removed because it targeted a group based on such a characteristic (here, "white men") and was therefore flagged for deletion.
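To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the kind of context-blind rule described above. The group terms, attack phrases, and function name are illustrative assumptions, not Facebook's actual policy engine; the point is only that a rule keyed purely to "attack phrase + protected-characteristic group" flags Wysinger's post without regard to the speaker, the target, or the conversation about racism it belongs to.

```python
# Hypothetical illustration only (not Facebook's actual system): a context-blind
# rule that flags any post pairing a protected-group term with an attack phrase.
# The term lists below are illustrative assumptions.

PROTECTED_GROUP_TERMS = {
    "white men", "black men", "women", "muslims", "jews", "gay people",
}

ATTACK_PHRASES = {"are so fragile", "are animals", "are inferior"}

def flag_as_hate_speech(post: str) -> bool:
    """Return True if the post pairs a protected-group term with an attack phrase.

    The rule ignores who is speaking, who holds power, and whether the post is
    commentary on racism rather than an attack; that is why a post like
    Wysinger's gets swept up along with genuine hate speech.
    """
    text = post.lower()
    mentions_group = any(term in text for term in PROTECTED_GROUP_TERMS)
    contains_attack = any(phrase in text for phrase in ATTACK_PHRASES)
    return mentions_group and contains_attack

# The post from the case study is flagged, with no notion of context:
print(flag_as_hate_speech(
    "White men are so fragile, and the mere presence of a black person "
    "challenges every single thing in them."
))  # True
```

A symmetric rule like this treats every protected-characteristic group identically, which connects directly to the questions below about equal treatment, context, and power.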
Questions to consider:
- How should a site handle sensitive conversations regarding discrimination?
- If a policy defines “protected characteristics,” are all groups defined by one of those characteristics to be treated equally?
- If so, is that in itself a form of disparate treatment for historically oppressed groups?
- If not, does that risk accusations of bias?
- Is there any way to take wider context into account during human or technological reviews?
- Should the race/gender/sexuality/religion of the speaker be taken into account? What about the target of the speech?
- Is there a way to determine if a comment is “speaking up” to power or “speaking down” from a position of power?
Filed Under: case study, content moderation, content moderation case study, racism