Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Facebook's Internal 'Hate Speech' Guidelines Appear To Leave Protected Groups Unprotected (June 2017)

from the making-rules-is-difficult dept

Summary: Facebook has struggled to moderate "hate speech" over the years, drawing steady criticism not only from users, but from government officials around the world. Part of this struggle is due to the nature of the term "hate speech" itself, which is often vaguely defined. These definitions can vary from country to country, adding to the confusion and general difficulty of moderating user content.

Facebook's application of local laws to moderate "hate speech" has resulted in collateral damage and the silencing of voices that such laws are meant to protect. In the United States, there is no law against "hate speech," but Facebook is still trying to limit the amount of abusive content on its site as advertisers flee and politicians continue to apply pressure.

Facebook moderators use a set of internal guidelines to determine what is or isn't hate speech. Unfortunately for many users, the guidelines -- which users never saw until ProPublica published them -- result in some unexpected moderation decisions.

Users wondered why hate speech targeting Black children was allowed while similar speech targeting, for instance, white men wasn't. The internal guidelines explained the factors considered by moderators, which led directly to these seemingly inexplicable content removals.

According to Facebook's internal guidelines, these categories are "protected," which means moderators will remove "hateful" content targeting anything on this list.

  • Sex
  • Race
  • Religious affiliation
  • Ethnicity
  • National origin
  • Sexual orientation
  • Gender identity
  • Serious disability/disease
And this is the list of categories not considered "protected" by Facebook:
  • Social class
  • Occupation
  • Continental origin
  • Political ideology
  • Appearance
  • Religions
  • Age
  • Countries
Critics pointed out the internal standards would seem to lead directly to harassment of groups supposedly protected (Black children), while shielding groups historically viewed -- at least in the United States -- as not in any need of additional protections (white men).

This seemingly incongruous outcome is due to how moderators apply the rules. If a "protected" class is modified by an "unprotected" category ("Black" [race/protected] + "children" [age/unprotected]), the resulting combination is treated as "unprotected." In the case of white men, both categories are protected: race ("white") + sex ("men"). What looks like preferential shielding of a historically powerful group (white men) is actually just the consistent application of Facebook's internal moderation guidelines.
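The intersection rule described above can be sketched as a simple predicate: a target is treated as "protected" only if every category describing it is on the protected list. This is a hypothetical Python sketch built from the two lists ProPublica published, not Facebook's actual implementation; the function name and data structures are the author's own illustration.

```python
# Hypothetical sketch of the reported intersection rule: a combination
# of categories is protected only if ALL of its parts are protected.
PROTECTED = {
    "sex", "race", "religious affiliation", "ethnicity",
    "national origin", "sexual orientation", "gender identity",
    "serious disability/disease",
}
UNPROTECTED = {
    "social class", "occupation", "continental origin",
    "political ideology", "appearance", "religions", "age", "countries",
}

def is_protected(categories: set[str]) -> bool:
    """Return True only if every category in the combination is protected."""
    return all(c in PROTECTED for c in categories)

# "white men" = race + sex: both protected, so the group is protected.
print(is_protected({"race", "sex"}))   # True
# "Black children" = race + age: age is unprotected, so the group is not.
print(is_protected({"race", "age"}))   # False
```

The sketch makes the counterintuitive outcome mechanical: adding any unprotected modifier (age, occupation, social class) to a protected class strips the combination of protection, regardless of which group has historically needed it.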

In response to criticism about outcomes like these, Facebook pointed out that it operates globally. What might be considered a ridiculous (or even harmful) moderation decision in the United States makes more sense in other parts of the world, where white men may not make up a large percentage of the population or may not have historically held positions of power.

Decisions to be made by Facebook:

  • Should content be removed if it conveys hateful rhetoric against certain groups or individuals even if it doesn't specifically violate the internal guidelines?
  • Should context be considered when moderating posts that violate the internal guidelines to ensure users who are spreading awareness/criticizing other users' hateful speech aren't subjected to the same moderation efforts or account limitations?
  • Which first principles should Facebook be operating on when creating anti-hate policies, and are these policies holding up those principles in practice?
Questions and policy implications to consider:
  • When moderating hate speech, should more discretion be used by moderators to ensure better protection of marginalized groups?
  • Would altering or expanding the scope of the internal guidelines result in users switching to other social media services?
  • Do seemingly inconsistent internal rules (i.e., moderation that protects white men while leaving Black children open to abuse) confuse users and/or result in loss of advertising revenue?
Resolution: Facebook moderators continue to use lists like these to make decisions about perceived "hate speech." The company continues to consider all stakeholders, including foreign governments that have passed "hate speech" laws going beyond what the site's internal guidelines already target for removal.

Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: content moderation, hate speech, protected groups
Companies: facebook


Reader Comments



  • Wyrm (profile), 9 Oct 2020 @ 5:16pm

    I still don't understand how "unprotected" has priority over "protected".
    If I criticize white men, that's wrong, but if I criticize white adult men, it's fine?

    There might be some logic here, but I don't see it.


    • Anonymous Coward, 11 Oct 2020 @ 12:58am

      Re:

      I find it silly to focus on who is being criticized and not what they're being criticized about. E.G. is a white man being criticized for being a white man or being criticized for being an adult a.k.a. "okay, boomer".

      Similar to the article's mention of criticizing a black child. Are they being criticized for being black or for saying something childish?


  • Katherin, 9 Oct 2020 @ 5:59pm

    Sex or Age

    Why can’t children be under sex and age. As well as men


  • Anonymous Coward, 9 Oct 2020 @ 8:53pm

    In response to criticism about outcomes like these, Facebook pointed out it operated globally.

    That non sequitur is not as subtle as they think it is.



  • Anonymous Coward, 10 Oct 2020 @ 5:04am

    Facebook's policies are insane. I was just banned for a month for "hate speech" for quoting a term in a reply. "I think hillbilly white trash complained about being compared to him..." Someone had previously used the term "hillbilly white trash" and got a warning for it. Simply using the adjective "white" should not get you a ban...


  • Glenn, 10 Oct 2020 @ 5:30pm

    When you start to censor anything you don't like, you tend to find more and more things to not like. Mission creep happens. Compound that with censorship by committee, and your lowest common denominator starts to approach zero. Meaning: since everyone hates something, almost nothing isn't hated... and you're not allowed to talk about it.

    2120: "First Amendment? ...what's that?"


    • Stephen T. Stone (profile), 10 Oct 2020 @ 5:54pm

      censor

      Ahem.

      Moderation is a platform/service owner or operator saying “we don’t do that here”. Personal discretion is an individual telling themselves “I won’t do that here”. Editorial discretion is an editor saying “we won’t print that here”, either to themselves or to a writer. Censorship is someone saying “you won’t do that anywhere” alongside threats or actions meant to suppress speech.

      Now, with that in mind, please explain how Facebook’s moderation is actually censorship.


    • Anonymous Coward, 11 Oct 2020 @ 10:55am

      Re:

      One can not divide by zero, many have tried and failed. I think some of them ended up on the other side of a worm hole in the proximity of Betelgeuse.


  • Anonymous Coward, 11 Oct 2020 @ 3:24am

    An inevitable outcome from the premise that only some people should be protected from "hate" and others not. People will always disagree who the favored groups should be.


    • Stephen T. Stone (profile), 11 Oct 2020 @ 7:04am

      We should all be protected from hate. But some groups of people — e.g., gay people — have always been marginalized in society by hatred of the dominant group (in this case, straight people). Society enacts laws to protect such groups from “hate” (read: discrimination) because we’ve seen what happens when we don’t offer such protections to those groups. It isn’t pretty, and it isn’t a period of time worth revisiting.

      Also: The same laws that protect gay people from discrimination on the basis of sexual orientation also apply to straight people. Marginalized groups aren’t asking for “special rights” — they’re asking for equitable treatment under the law.


      • Stephen T. Stone (profile), 11 Oct 2020 @ 7:34am

        by hatred of the dominant group

        That should read “by hatred from the dominant group”. Golly, I need to really re-read my posts before I hit Submit. 😅


