Content Moderation Case Study: Facebook Responds To A Live-streamed Mass Shooting (March 2019)
from the live-content-moderation dept
Summary: On March 15, 2019, the unimaginable happened. A Facebook user -- using the platform's live-streaming feature -- filmed himself shooting mosque attendees in Christchurch, New Zealand.
By the end of the attack, the shooter had killed 51 people and injured 49. Only the first shooting was live-streamed, but Facebook was unable to end the stream before it had been viewed by a few hundred users and shared by a few thousand more.
Facebook removed the stream almost an hour after it appeared, thanks to user reports, and its moderation team immediately began finding and deleting re-uploads by other users. Violent content is generally a clear violation of Facebook's terms of service, but context matters: not every violent video merits removal. Facebook decided this one did.
The delayed response was partly due to limitations in Facebook's automated moderation. As Facebook admitted roughly a month after the shooting, the first-person footage from the shooter's head-mounted camera made it much more difficult for its AI to make a judgment call on the content.
Facebook's efforts to keep this footage off the platform continue to this day. The footage has migrated to other platforms and file-sharing sites -- an inevitability in the digital age. Even though moderators know exactly what they're looking for, users are still finding ways to post the shooter's video to Facebook. Some of this is due to the sheer number of uploads moderators are dealing with: The Verge reported the video was re-uploaded 1.5 million times in the first 24 hours after the shooting, with 1.2 million of those automatically blocked by moderation AI.
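Much of that automatic blocking comes down to matching new uploads against fingerprints of footage already confirmed as violating. As a rough sketch only -- the function names and in-memory store below are hypothetical, and real systems use perceptual audio/video fingerprints that survive re-encoding and cropping rather than the exact-match hash used here for brevity:

```python
# Minimal sketch of fingerprint-based upload blocking. Names and the
# in-memory store are hypothetical; a cryptographic hash only catches
# byte-identical copies, unlike production perceptual fingerprints.
import hashlib

blocked_fingerprints: set[str] = set()  # fingerprints of confirmed violating footage

def fingerprint(video_bytes: bytes) -> str:
    """Stand-in for a perceptual fingerprint; matches exact copies only."""
    return hashlib.sha256(video_bytes).hexdigest()

def register_violation(video_bytes: bytes) -> None:
    """Record confirmed violating footage so future uploads can be matched."""
    blocked_fingerprints.add(fingerprint(video_bytes))

def should_block_upload(video_bytes: bytes) -> bool:
    """True if the upload matches known violating footage."""
    return fingerprint(video_bytes) in blocked_fingerprints
```

The roughly 300,000 copies that got through anyway illustrate the hard part: re-encoded, cropped, or re-filmed versions defeat naive matching, which is why platforms pair fingerprinting with human review.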
Decisions to be made by Facebook:
- Should the moderation of live-streamed content involve more humans if algorithms aren't up to the task?
- When live-streamed content is reported by users, are automated steps in place to reduce visibility or sharing until a determination can be made on deletion? (A sketch of one such step follows this list.)
- Will making AI moderation of livestreams more aggressive result in over-blocking and unhappy users?
- Do the risks of allowing content that can't be moderated prior to posting outweigh the benefits Facebook gains from giving users this option?
- Is it realistic to "draft" Facebook users into the moderation effort by giving certain users additional moderation powers to deploy against marginal content?
- Given the number of local laws Facebook attempts to abide by, is allowing questionable content to stay "live" still an option?
- Does newsworthiness outweigh local legal demands (laws, takedown requests) when making judgment calls on deletion?
- Does the identity of the perpetrator of violent acts change the moderation calculus (for instance, a police officer shooting a citizen, rather than a member of the public shooting other people)?
- Can Facebook realistically speed up moderation efforts without sacrificing the ability to make nuanced calls on content?
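The second question above can be sketched concretely. Everything below is an assumption for illustration -- the threshold, the field names, and the workflow are invented, not Facebook's actual pipeline. The design idea is that throttling is cheap and reversible, so it can be automated aggressively, while deletion remains a human judgment call.

```python
# Hypothetical sketch of report-triggered throttling for a live stream.
# The threshold and names are invented; a real system would also weight
# reporter reliability and stream context.
from dataclasses import dataclass

REPORT_THRESHOLD = 5  # assumed cutoff before automated intervention

@dataclass
class LiveStream:
    stream_id: str
    reports: int = 0
    throttled: bool = False        # hidden from feeds and sharing, not deleted
    pending_review: bool = False   # awaiting a human deletion decision

def handle_report(stream: LiveStream, review_queue: list) -> None:
    """Count a report; throttle visibility and escalate once the threshold is hit."""
    stream.reports += 1
    if stream.reports >= REPORT_THRESHOLD and not stream.throttled:
        stream.throttled = True        # cut reach immediately, reversibly
        stream.pending_review = True   # deletion stays a human call
        review_queue.append(stream)
```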
Filed Under: case study, christchurch, content moderation, live streaming, new zealand, shooting
Companies: facebook
Reader Comments
Unimaginable because it's in New Zealand, or what? Mass shootings happen several times a year in the USA, and it doesn't take a lot of imagination to think of streaming them.
It's terrible, but I'm sure I've seen several fictional movies with plots like this, not to mention the real-life antecedents. The Wikipedia "Live streaming crime" page shows 5 incidents in 2017 (including murder, suicide, and gang rape).
They're psychopaths, not idiots. They don't respect rules, and I don't imagine they have many plans for the future.
This seems like the 1970s snuff-film moral panic all over again.
Authoritarian Apologist?
ALL these posts about the futility of Content Moderation are tiresome, and REEK of extremist apologia. Is the poster a member of a group planning murderous violence? It seems so. Or has the poster been banned from Twitter too many times to count? Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous.
Re: Authoritarian Apologist?
If you had followed this site for any length of time, you would know that the majority here support moderation; it is mainly those on the extreme right, struggling to push their racist viewpoints, who object to any moderation.
Besides which, just how do you stop things like the live stream under discussion from appearing without eliminating live streaming and requiring that all content be pre-moderated? Doing both would silence the majority of the Internet and destroy useful services like Zoom.
In other words, just how do you propose to successfully moderate all the conversations of the human race?
Re:
"Unimaginable because it's in New Zealand, or what?"
Yes. Gun crime is very rare in some countries, even if it's background noise where you are. Most New Zealanders would never have dreamed of it happening there.
"They're psychopaths, not idiots."
They're referring to the idiots who restream the acts, not the people perpetrating them.
"This seems like the 1970s snuff-film moral panic all over again."
No, unlike snuff films these actually exist, and there's fairly good evidence that the 8chan types who do these things are in part encouraged by the extra exposure they get from such activity.
Re: Authoritarian Apologist?
"ALL these posts about the futility of Content Moderation are tiresome"
Not anywhere near as tiresome as the idiots who think that there's a magic wand that will perfectly moderate content without collateral damage.
"Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous."
Good thing nobody here opposes it, then. It's just noted that it's impossible to do perfectly at scale, especially with something like streamed live video.
Re: Authoritarian Apologist?
Pardon?
These case studies are designed to be extremely neutral. We are outlining what happened. Companies face these decisions every day, and they are often challenging, raise complex questions, trigger unforeseen side effects, or just don't go well. We're documenting these kinds of incidents to help understand the challenges of content moderation and highlight the difficult tradeoffs, so it can be done better -- not to make the case that it's "futile".
Content moderation is never going to be easy or simple. These case studies aim to help people navigate it.
Re: Authoritarian Apologist?
ALL these posts about the futility of Content Moderation are tiresome, and REEK of extremist apologia.
Huh?
Is the poster a member of a group planning murderous violence? It seems so.
What?!?
Or has the poster been banned from Twitter too many times to count?
Nope.
Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous.
Neutrally written case studies on how different content moderation challenges were handled, highlighting some of the tradeoffs and key issues... written to help people better understand content moderation... make you think that we're arguing AGAINST content moderation?
Also, have you ever read this site?
Your reading comprehension filters are in need of a reboot, buddy.
Re: Re: Authoritarian Apologist?
Without a universally accepted set of societal norms, this is an impossible task. The best one can accomplish is the moderation of individual social groups.
Beyond that, there are moral and ethical concerns with demanding the silencing of others not directly involved in the conversation. One such concern is the limitation of human progress by forbidding certain modes of thought. Another is the risk of a grievance becoming a criminal act against society due to society's unwillingness to listen. Of course, the concern people are most familiar with is the destruction of political discourse, and the detrimental effect on a society "of the people" when such discourse is limited to only approved talking points.
Different societies have different norms and what's acceptable discourse to some is unheard of and offensive to others. Trying to apply pervasive moderation to the entire species, when that species hasn't yet agreed on a set of norms, will very likely prohibit that species from ever doing so. Even if a species does have universally accepted norms, applying pervasive moderation may very well lead that species to ruin.
Re: Re: Re: Authoritarian Apologist?
"Without a universally accepted set of societal norms, this is an impossible task. The best one can accomplish is the moderation of individual social groups"
This is exactly the point. When people say "well, just hire more people", those hired would, by definition, have to come from every religious, socioeconomic, and cultural background. Letting them individually come up with the moderation criteria would never produce any consistency, so you have to come up with some centralised, neutral criteria. This would never be acceptable to everyone.
OK, so automate it. Then you have the problem that algorithms can never understand subjective information, which is the majority of what they would be moderating. So you double your problems: not only do you have the central "neutral" criteria, but you have a moderator incapable of understanding why, say, a gory shot from Evil Dead 2 or Monty Python is funny but a real-life dismemberment is unacceptable -- and there are a lot more subtle disagreements than those.
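To make that concrete (the classifier score and the cutoffs below are invented, not any platform's real numbers): every automated approach boils down to a confidence score and a couple of thresholds, and the thresholds are where all the disagreement lives.

```python
# Toy illustration: automated moderation reduces to a score plus
# thresholds. Raise the lower cutoff and Evil Dead 2 gets blocked;
# lower the upper one and real violence stays up longer.
def triage(violation_score: float) -> str:
    """Route content by classifier confidence; humans get the ambiguous middle."""
    if violation_score >= 0.95:    # assumed: near-certain violation
        return "auto_block"
    if violation_score >= 0.40:    # assumed: ambiguous, context required
        return "human_review"
    return "allow"                 # low confidence: leave it up
```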
There's no easy answer, but I fear that people as dense as the AC above believe there is one, so people who understand reality will long be in conflict with people who believe in magic.
Re: Authoritarian Apologist?
Content Moderation is REQUIRED in a civil society, and those who oppose all forms of it are dangerous.
Wait, who's the authoritarian here?
Re: Re: Authoritarian Apologist?
Plus, that statement is literally self-contradictory and impossible, thus nonsensical. Quick! Wear a hat fully on your head and don't have a hat touching your head at the same time!