Content Moderation Case Study: Facebook Responds To A Live-streamed Mass Shooting (March 2019)
from the live-content-moderation dept
Summary: On March 15, 2019, the unimaginable happened. A Facebook user -- utilizing the platform's live-streaming option -- filmed himself shooting mosque attendees in Christchurch, New Zealand.
By the end of the attack, the shooter had killed 51 people and injured 49. Only the first shooting was live-streamed, but Facebook was unable to end the stream before it had been viewed by a few hundred users and shared by a few thousand more.
The stream was removed by Facebook almost an hour after it appeared, thanks to user reports. The moderation team immediately began working to find and delete re-uploads by other users. Violent content is generally a clear violation of Facebook's terms of service, but context matters: not every violent video merits removal. Facebook decided this one did.
The delay in response was partly due to limitations in Facebook's automated moderation efforts. As Facebook admitted roughly a month after the shooting, the shooter's use of a head-mounted camera made it much more difficult for its AI to recognize the first-person footage as depicting real-world violence.
Facebook's efforts to keep this footage off the platform continue to this day. The footage has migrated to other platforms and file-sharing sites -- an inevitability in the digital age. Even with moderators knowing exactly what they're looking for, platform users are still finding ways to post the shooter's video to Facebook. Some of this is due to the sheer volume of uploads moderators are dealing with: The Verge reported the video was re-uploaded 1.5 million times in the first 24 hours following the shooting, with 1.2 million of those blocked automatically at upload.
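Blocking re-uploads at upload time generally depends on matching new files against fingerprints ("hashes") of the known video. The sketch below is purely illustrative -- it is not a description of Facebook's actual systems -- and shows one common approach: a perceptual difference hash (dHash) computed with the Pillow library, compared against a blocklist of hashes taken from frames of the offending video. The frame sampling, the MATCH_THRESHOLD value, and the blocklist itself are assumptions for illustration; re-encoded, cropped, or mirrored copies are exactly the cases that make this harder in practice.

```python
# Minimal sketch of hash-based re-upload blocking (an assumed technique,
# not Facebook's actual system). Computes a 64-bit perceptual "difference
# hash" (dHash) for a frame with Pillow and checks it against a blocklist.

from PIL import Image

HASH_SIZE = 8          # dHash produces HASH_SIZE * HASH_SIZE bits
MATCH_THRESHOLD = 10   # max Hamming distance still treated as a match (tunable)


def dhash(image_path: str) -> int:
    """Return a 64-bit perceptual hash of the image at image_path."""
    # Grayscale, then resize to (HASH_SIZE + 1) x HASH_SIZE so each row
    # yields HASH_SIZE left/right brightness comparisons.
    img = Image.open(image_path).convert("L").resize((HASH_SIZE + 1, HASH_SIZE))
    pixels = list(img.getdata())  # flat, row-major, width = HASH_SIZE + 1
    bits = 0
    for row in range(HASH_SIZE):
        for col in range(HASH_SIZE):
            left = pixels[row * (HASH_SIZE + 1) + col]
            right = pixels[row * (HASH_SIZE + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def is_blocked(frame_path: str, blocklist: set[int]) -> bool:
    """True if the frame's hash is within MATCH_THRESHOLD bits of any blocked hash."""
    candidate = dhash(frame_path)
    return any(bin(candidate ^ known).count("1") <= MATCH_THRESHOLD
               for known in blocklist)


# Hypothetical usage: hashes of frames sampled from the original video would
# populate the blocklist, and frames sampled from each new upload would be
# checked against it before the upload is published.
# blocklist = {dhash(path) for path in known_violating_frames}
# if is_blocked("sampled_frame.png", blocklist):
#     reject_upload()
```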
Decisions to be made by Facebook:
- Should the moderation of live-streamed content involve more humans if algorithms aren't up to the task?
- When live-streamed content is reported by users, are automated steps in place to reduce visibility or sharing until a determination can be made on deletion?
- Will making AI moderation of livestreams more aggressive result in over-blocking and unhappy users?
- Do the risks of allowing content that can't be moderated prior to posting outweigh the benefits Facebook gains from giving users this option?
- Is it realistic to "draft" Facebook users into the moderation effort by giving certain users additional moderation powers to deploy against marginal content?
- Given the number of local laws Facebook attempts to abide by, is allowing questionable content to stay "live" still an option?
- Does newsworthiness outweigh local legal demands (laws, takedown requests) when making judgment calls on deletion?
- Does the identity of the perpetrator of violent acts change the moderation calculus (for instance, a police officer shooting a citizen, rather than a member of the public shooting other people)?
- Can Facebook realistically speed up moderation efforts without sacrificing the ability to make nuanced calls on content?
Filed Under: case study, christchurch, content moderation, live streaming, new zealand, shooting
Companies: facebook