Content Moderation Case Study: YouTube's New Policy On Nazi Content Results In Removal Of Historical And Educational Videos (2019)
from the godwin-in-effect dept
Summary: On June 5, 2019, YouTube announced it would be stepping up its efforts to remove hateful content, focusing on the apparent increase in white nationalist and pro-Nazi content being created by users. The algorithm change would limit views of borderline content and push viewers toward content less likely to contain hateful views. The company's blog post specifically stated it would be removing videos that "glorified Nazi ideology."
Unfortunately, when the updated algorithm went to work removing this content, it also took down content that educated and informed people about Nazis and their ideology, but quite obviously did not "glorify" them.
Ford Fischer -- a journalist who tracks extremist and hate groups -- noticed his entire channel had been demonetized within "minutes" of the rollout. YouTube responded to Fischer's attempt to have his channel reinstated by stating multiple videos -- including interviews with white nationalists -- violated the updated policy on hateful content.
A similar thing happened to history teacher Scott Allsop, who was banned by YouTube for his uploads of archival footage of propaganda speeches by Nazi leaders, including Adolf Hitler. Allsop uploaded these for their historical value as well as for use in his history classes. The notice placed on his terminated account stated it had been taken down for "multiple or severe violations" of YouTube's hate speech policies.
Another YouTube user noticed his upload of a 1938 documentary about the rise of the Nazi party in Germany had been taken down for similar reasons, even though the documentary was decidedly anti-Nazi in its presentation and had obvious historical value.
Decisions to be made by YouTube:
- Should algorithm tweaks be tested in a sandboxed environment prior to rollout to see how often they flag content that doesn't actually violate policies? (A sketch of such a check follows this list.)
- Given that this sort of mis-targeting has happened in the past, does YouTube have a response plan in place to swiftly handle mistaken content removals?
- Should additional staffing be brought on board to handle the expected collateral damage of updated moderation policies?
- Should there be a waiting period on enforcement that would allow users with flagged content to make their case prior to being hit by enforcement methods like demonetization or bans?
- Should YouTube offer some sort of compensation to users whose channels are adversely affected by mistakes like these?
- Should users whose content hasn't previously been flagged for policy violations be given the benefit of the doubt when flagged by automated moderation efforts?
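On the first of these questions, the kind of sandbox test involved is easy to sketch: run the updated classifier over a labeled sample of content already known to comply with policy and measure how much of it gets flagged before anything ships. The sketch below is purely illustrative; the stand-in classifier, the sample, and the 5% threshold are hypothetical and have nothing to do with YouTube's actual pipeline.

```python
from typing import Callable, Iterable

def false_positive_rate(
    classifier: Callable[[str], bool],
    known_compliant: Iterable[str],
) -> float:
    """Fraction of known-compliant items an updated classifier would flag.

    `classifier` stands in for the updated moderation model; `known_compliant`
    is a labeled sample of content that does NOT violate policy (archival
    footage, classroom material, and so on). Both are placeholders.
    """
    items = list(known_compliant)
    if not items:
        return 0.0
    flagged = sum(1 for item in items if classifier(item))
    return flagged / len(items)

if __name__ == "__main__":
    # Stand-in "model": flags anything that mentions the word "nazi".
    def flags_as_hateful(description: str) -> bool:
        return "nazi" in description.lower()

    known_good_sample = [
        "1938 anti-Nazi documentary on the rise of the party",
        "Classroom archive: 1940 Nazi propaganda speech with historical commentary",
        "Cooking channel: sourdough basics",
    ]
    rate = false_positive_rate(flags_as_hateful, known_good_sample)
    print(f"False positive rate on known-good sample: {rate:.0%}")
    if rate > 0.05:  # hypothetical rollout gate
        print("Too much compliant content would be removed; hold the rollout.")
```

In this toy run the stand-in model flags two of the three known-good items, which is exactly the failure mode the case study describes: educational and archival material goes down alongside the content actually being targeted.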
Originally published to the Trust & Safety Foundation website.
Filed Under: content moderation, disinformation, education, educational videos, hate speech, history, nazis
Companies: youtube
Reader Comments
Makes sense.
I mean YouTube being incredibly useless of course, not what they're doing.
Ah YouTube, I hardly knew ya
The ineptness radiates from YouTube control like a beacon of hubris as they insist they can solve all the hard problems with their AI.
Which is not AI. Merely ML. They routinely generate bad press by using ML on data that a simple regex could give the thumbs up or down to.
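For what it's worth, the sort of check being described is just a pattern screen over a video's title and description, something like the toy below. The patterns are deliberate placeholders; nothing here reflects YouTube's actual tooling.

```python
import re

# Toy pattern screen of the kind alluded to above. The patterns are
# placeholders, not a real blocklist, and this is not how YouTube works.
BANNED_PATTERNS = [
    re.compile(r"\bplaceholder slogan\b", re.IGNORECASE),
    re.compile(r"\bplaceholder symbol\b", re.IGNORECASE),
]

def crude_screen(title: str, description: str) -> bool:
    """Return True if any banned pattern appears in the title or description."""
    text = f"{title}\n{description}"
    return any(pattern.search(text) for pattern in BANNED_PATTERNS)
```

A screen like this only handles the genuinely clear-cut cases, though; it can't tell an archival documentary from glorification, which is the same gap the article describes in the ML system.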
Re: Ah YouTube, I hardly knew ya
The historical documentaries were created because Nazis started WWII, and then those documentaries were mistakenly taken down as a result of Nazis behaving like Nazis.
So in the end, it's the Nazis' fault regardless.
Re: Re: Ah YouTube, I hardly knew ya
So with all that technology, they did Nazi that happening?
One alternative would be to simply have the community downvote material to hide it.
YouTube has many working content filters that anyone who prefers gatekeeping over outright deletion would find acceptable.
Be it age banners for violence or nudity, offensive-content banners for things known to trigger specific groups, etc.
Relying on machine learning and less-than-specific algorithms for content removal rarely works, be it for copyright, politics, or, in this case, Nazis.
Relying purely, or mostly, on people for content removal causes the Twitter impasse, where nearly half the country believes they ONLY moderate on political grounds and nearly half believes political takedowns NEVER happen.
YouTube's pre-playback screens work quite well. Adult content is blocked from underage accounts, as is any inappropriate/triggering material.
Adults are (generally) considered wise enough to read the screen and decide whether they wish to view something before clicking on it.
Sure, the downside is that it takes a number of views to flag material if the uploader hasn't tagged it, but for the majority of people it simply works.
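In rough pseudocode, the gate those screens implement looks something like the sketch below. All of the names are made up for illustration; this is not YouTube's code, just the gate-instead-of-delete idea.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    PLAY = auto()            # serve the video immediately
    WARN_THEN_PLAY = auto()  # show a pre-playback warning; the viewer decides
    BLOCK = auto()           # do not serve the video to this account

@dataclass
class VideoFlags:
    adult: bool = False                  # tagged by the uploader or flagged by viewers
    graphic_violence: bool = False
    potentially_triggering: bool = False

@dataclass
class Viewer:
    age: int
    signed_in: bool = True

def gate_playback(video: VideoFlags, viewer: Viewer) -> Decision:
    """Gate playback behind warnings or age checks instead of deleting the upload."""
    if video.adult and (not viewer.signed_in or viewer.age < 18):
        return Decision.BLOCK
    if video.adult or video.graphic_violence or video.potentially_triggering:
        return Decision.WARN_THEN_PLAY
    return Decision.PLAY
```

Under a gate like this, an educational upload flagged as violent gets a warning screen rather than a takedown, which is the whole contrast with outright removal.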