Content Moderation Case Study: US Army Bans Users For Asking About War Crimes On Twitch & Discord (July 2020)
from the the-1st-amendment-matters dept
Summary: Content moderation questions are not just about the rules that internet platforms create and enforce for themselves: sometimes they involve users enforcing some form of the site’s rules, or their own rules, within spaces they have created on those platforms. One interesting case study involves the US Army’s esports team and how it has dealt with hecklers.
The US Army has a variety of channels for marketing itself to potential recruits, and lately it has been using its own “professional esports team” as something of a recruiting tool. Like many esports teams, the US Army team set up a Discord server. After some people felt that the Army was trying too hard to be “cute” on Twitter -- by tweeting the internet slang “UwU” -- a number of users set out to see how quickly they could get banned from the Army’s Discord server. Many then started bragging about how quickly they were banned -- often after posting links or asking questions about war crimes, including accusations that the US Army was involved in specific war crimes.
This carried over to the US Army’s esports streaming channel on Twitch, where the Army appears to have set up a list of banned words and phrases, including “war crimes.” That led at least one user -- esports personality Rod “Slasher” Breslau -- to try to get around the filter by typing “w4r cr1me” instead. The message made it through, and a few seconds later Breslau was banned from the chat by Green Beret Joshua “Strotnium” David, a player on the Army’s esports team. David, who moments earlier had been mocking “internet keyboard monsters” for this kind of activity, said aloud during the stream, “have a nice time getting banned, my dude.”
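The evasion described above illustrates a well-known weakness of literal keyword filters: simple character substitutions slip past them unless the filter normalizes the text first. The sketch below is purely illustrative -- it is not Twitch's actual moderation code, and the blocklist and substitution table are assumptions for the example.

```python
# Illustrative sketch of a banned-phrase chat filter (NOT Twitch's real
# implementation): a naive literal matcher misses leetspeak substitutions
# like "w4r cr1me", while a normalizing matcher catches them.

BANNED_PHRASES = ["war crime"]  # hypothetical blocklist entry

# A few common leetspeak substitutions an evader might use (assumed set).
LEET_MAP = str.maketrans({"4": "a", "1": "i", "3": "e", "0": "o", "5": "s", "@": "a"})

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked (literal match only)."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in BANNED_PHRASES)

def normalized_filter(message: str) -> bool:
    """Return True if the message should be blocked after undoing substitutions."""
    normalized = message.lower().translate(LEET_MAP)
    return any(phrase in normalized for phrase in BANNED_PHRASES)

print(naive_filter("w4r cr1me"))       # False: the literal match misses it
print(normalized_filter("w4r cr1me"))  # True: normalization catches it
```

Real chat filters face an arms race well beyond this sketch (homoglyphs, spacing tricks, misspellings), which is one reason keyword blocking alone tends to invite exactly the workaround Breslau used.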
When asked about this, the Army told Vice News that it considered the questions a form of harassment in violation of Twitch’s stated rules -- even though it was the Army itself that set the specific moderation rules on the account and chose whom to ban:
"The U.S. Army eSports Team follows the guidelines and policies set by Twitch, and they did ban a user from their account," a representative of the U.S. Army esports team said in a statement. "Team members are very clear when talking with potential applicants that a game does not reflect a real Army experience. They discuss their career experiences in real terms with factual events. Team members ensure people understand what the Army offers through a realistic lens and not through the lens of a game meant for entertainment. This user's question was an attempt to shift the conversation to imply that Soldiers commit war crimes based on an optional weapon in a game, and we felt that violated Twitch's harassment policy. The U.S. Army offers youth more than 150 different careers, and ultimately the goal of the Army eSports Team is to accurately portray that range of opportunities to interested youth."
Decisions to be made by the US Army:
- Is it appropriate or reasonable to set up filters to block certain keywords that might be seen as harassing?
- Is it legal to do so under the 1st Amendment?
- How aggressive should the Army be in blocking words or phrases and banning accounts?
- How does the Army determine who is harassing the esports team vs. who is asking legitimate questions?
- Is it the Army’s decision as to who is violating Twitch’s rules?
- If the Army is banning accounts, should it take direct responsibility for that, or should it point to Twitch’s rules instead?
- Will banning accounts or blocking phrases lead to even greater unwanted attention?
- Are there alternative ways to deal with those who wish to highlight what they perceive to be war crimes by the US Army?
- Should government accounts be allowed to enforce Twitch’s rules any way they see fit?
- Given litigation establishing that US politicians blocking users on social media can violate the 1st Amendment, should US government accounts even have access to tools for blocking and banning?
Questions and policy implications to consider:
- Unlike private companies, the US government is bound by the limitations of the 1st Amendment when moderating content. How should its use of social media handle users who appear to be harassing government employees?
- In situations like this one, bans and blocks often attract more attention, leading even more people to deliberately try to get blocked. Are there policies in place to deal with this? Is this kind of behavior worth banning? Are there alternative approaches?
- How much of the moderation for US government accounts should be managed by the companies, rather than the users?
- If the companies are moderating on behalf of a US government account, do they also need to keep 1st Amendment limitations in mind?
Indeed, after this case study was originally published, the Knight First Amendment Institute -- which brought the legal challenge that resulted in Trump’s Twitter blocks being declared a violation of the 1st Amendment -- warned the Army about this practice, leading the Army to temporarily stop streaming on Twitch.
However, the Army has recently returned to Twitch after first unbanning the users it had banned. Separately, Twitch told the Army to stop sharing fake prize giveaways after that practice was called out by The Nation.
The Army's return to Twitch has not been without issues: upon its return, the chat for the Army's Twitch channel was flooded with references to war crimes -- the very topic over which it had initially banned users.
Filed Under: 1st amendment, content moderation, us army
Companies: twitch