Content Moderation Case Study: Facebook Nudity Filter Blocks Historical Content And News Reports About The Error (June 2020)
from the content-moderation-is-hard dept
Summary: Though social media networks take a wide variety of evolving approaches to their content policies, most have long maintained relatively broad bans on nudity and sexual content, and have relied heavily on automated takedown systems to enforce these bans. Many controversies have arisen from this, leading some networks to adopt exceptions in recent years: Facebook now allows images of breastfeeding, childbirth, post-mastectomy scars, and post-gender-reassignment surgery, while Facebook-owned Instagram is still developing its exception for nudity in artistic works. However, even with exceptions in place, the heavy reliance on imperfect automated filters can obstruct political and social conversations and block the sharing of relevant news reports.
One such instance occurred on June 11, 2020, following controversial comments by Australian Prime Minister Scott Morrison, who stated in a radio interview that “there was no slavery in Australia”. This sparked widespread condemnation and rebuttals from both the public and the press, pointing to the country’s long history of enslavement of Aboriginal Australians and Pacific Islanders. One Australian Facebook user posted a late-19th-century photo from the State Library of Western Australia, depicting Aboriginal men chained together by their necks, along with a statement:
Kidnapped, ripped from the arms of their loved ones and forced into back-breaking labour: The brutal reality of life as a Kanaka worker - but Scott Morrison claims ‘there was no slavery in Australia’
Facebook removed the post and image for violating its policy against nudity, although no genitals are visible in the photo, and restricted the user’s account. The Guardian Australia contacted Facebook to determine whether the decision had been made in error and, the following day, Facebook restored the post and apologized to the user, explaining that the takedown was a false positive from its automated nudity filter. However, at the same time, Facebook continued to block posts that included The Guardian’s news story about the incident, which featured the same photo, and placed 30-day suspensions on some users who attempted to share it. Facebook’s community standards report shows that in the first three months of 2020, 39.5 million pieces of content were removed for nudity or sexual activity, over 99% of those takedowns were automated, 2.5 million appeals were filed, and 613,000 of the takedowns were reversed.
Decisions to be made by Facebook:
- Can nudity filters be improved to produce fewer false positives, and/or is more human review required?
- For appeals of automated takedowns, what is an adequate review and response time?
- Should automated nudity filters be applied to the sharing of content from major journalistic sources such as The Guardian?
- Should questions about content takedowns from major news organizations be prioritized over those from regular users?
- Should 30-day suspensions and similar account restrictions be manually reviewed only if the user files an appeal?
Questions and policy implications to consider:
- Should automated filter systems be able to trigger account suspensions and restrictions without human review?
- Should content that has been restored in one instance be exempted from takedown, or flagged for automatic review, when it is shared again in the future in different contexts?
- How quickly can erroneous takedowns be reviewed and reversed, and is this fast enough when dealing with current, rapidly developing political conversations?
- Should nudity policies include exemptions for historical material, even when such material does include visible genitals, as occurred in the related 2016 controversy over the “Napalm Girl” Vietnam War photo?
- Should these policies take into account the source of the content?
- Should these policies take into account the associated messaging?
Resolution: Facebook’s restoration of the original post was undermined by its simultaneous blocking of The Guardian’s news reporting on the issue. After receiving dozens of reports from readers that they were blocked from sharing the article, and in some cases suspended for trying, The Guardian contacted Facebook again and, by Monday, June 15, 2020, users were able to share the article without restriction. The slower response in this second instance may be attributable to the fact that the problem surfaced over a weekend, but the result was that critical reporting on an unfolding political issue was blocked for several days while the subject was being widely discussed online.
Photo Credit (for first photo): State Library of Western Australia (screenshot taken directly from a Twitter embed)
Filed Under: case study, consistency, content moderation, historical content, nudity, reporting
Companies: facebook