Content Moderation Case Study: GitHub Attempts To Moderate Banned Words Contained In Hosted Repositories (2015)
from the word-filters dept
Summary: GitHub solidified its position as the world's foremost host of open source software not long after its launch in 2008. Twelve years later, GitHub hosts 190 million repositories and serves 40 million users.
Even though its third-party content is software code, GitHub still polices this content for violations of its terms of service. Some violations are more overt, like possible copyright infringement. But many are tougher to track down.
A GitHub user found themselves targeted by a demand from GitHub to remove certain comments from their code. The user's code contained the word "retard" -- a term that, while offensive in certain contexts, isn't offensive when used as a verb to describe an intentional delay in progress or development. But rather than inform the user of this violation, GitHub chose to remove the entire repository, which caused users who had forked this code to lose access to their repositories as well.
It wasn't until the user demanded an explanation that GitHub finally provided one. In an email sent to the user, GitHub said the code contained content the site viewed as "unlawful, offensive, threatening, libelous, defamatory, pornographic, obscene, or otherwise objectionable." More specifically, GitHub told the user to remove the words "retard" and "retarded," restoring the repository for 24 hours to allow this change to be made.
Decisions for GitHub:
- Is the blanket banning of certain words a wise decision, considering the idiosyncratic language of coding (and coders)?
- Should GitHub account for downstream repositories that may be negatively affected by removal of the original code when making content moderation decisions, and how?
- Could banned words inside code comments be moderated by removing only the comments, which would avoid impacting the functionality of the code? (See the sketch after this list.)
- Is context considered when moderating possible terms of service violations?
- Is it possible to police speech effectively when the content hosted isn't what's normally considered speech?
- Does proactive moderation of certain terms deter users from deploying code designed to offend?
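To make the comment-only option concrete, here is a minimal sketch -- not GitHub's actual tooling -- that scans a hypothetical blocklist against Python comments only, leaving the executable code untouched. The blocklist contents and the example snippet are assumptions for illustration:

```python
import io
import re
import tokenize

# Hypothetical blocklist for illustration -- the terms at issue in this case.
FLAGGED_WORDS = {"retard", "retarded"}

def flag_comments(source: str):
    """Return (line_number, comment_text) pairs for comments containing flagged words."""
    hits = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.COMMENT:
            words = set(re.findall(r"[a-z]+", tok.string.lower()))
            if words & FLAGGED_WORDS:
                hits.append((tok.start[0], tok.string))
    return hits

# Example input: a legitimate engineering use of "retard" as a verb.
example = (
    "def slow_motor(speed):\n"
    "    # retard the motor before engaging the clutch\n"
    "    return speed * 0.5\n"
)

for lineno, comment in flag_comments(example):
    print(f"line {lineno}: {comment}")
```

Even this narrower, comment-only approach still flags the legitimate verb use of "retard" in the example, which is exactly the context problem the takedown turned on.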
Unfortunately for GitHub, this drew attention to its less-than-consistent approach to terms of service violations. Searches for words considered "offensive" by GitHub turned up dozens of other potential violations -- none of which appeared to have been targeted for removal, despite containing far more offensive terms, code, and comments.
And the original offending code was modified with a tweak that replaced the word "retard" with the word "git" -- the two being more or less interchangeable insults in other parts of the world. The not-so-subtle dig at GitHub and its inability to detect nuance may have pushed the platform towards reinstating content it had perhaps pulled too hastily.
Originally posted on the Trust & Safety Foundation website.
Filed Under: code, content moderation, repositories
Companies: github