Twitter's Attempt To Clean Up Spammers Meant That People Sarcastically Tweeting 'Kill Me' Were Suspended
from the not-helpful dept
Just recently, Senator Amy Klobuchar suggested that the government should start fining social media platforms that don't remove bots fast enough. We've pointed out how silly and counterproductive (not to mention unconstitutional) this likely would be. However, every time we see people demanding that these platforms better moderate their content, we end up with examples of why perhaps we really don't want those companies to be making these kinds of decisions.
You may have heard that, over the weekend, Twitter started its latest sweep of accounts to shut down. Much of the focus was on so-called Tweetdeckers, basically a network of teens using the TweetDeck app to retweet accounts for money. In particular, it was widely reported that many of the suspended accounts were known for copying (without attribution) the marginally funny tweets of others and then paying "Tweetdeckers" for mass promotion. These accounts were shut down en masse over the weekend.
Twitter noted that the sweep was about getting rid of spammers:
A spokesperson for Twitter told HuffPost on Saturday that the sweep was a part of a broader company effort to fight spam on the platform. Last month, Twitter announced it would be making changes to TweetDeck and restricted people from using the app to retweet the same tweet across multiple accounts.
“Keeping Twitter safe and free from spam is a top priority for us,” the company said in a February blog post. “One of the most common spam violations we see is the use of multiple accounts and the Twitter developer platform to attempt to artificially amplify or inflate the prominence of certain Tweets.”
Fair enough. But some people noticed that not everyone swept up in these mass suspensions was involved in such shady practices. The Twitter account @madblackthot2, whose main account (drop the "2") appears to have been temporarily suspended, put together a fascinating thread about how Twitter appeared to be suspending accounts based on keywords around self-harm, with a few different examples of people having their accounts suspended for old tweets in which they sarcastically said "kill me."
tw : s**c*de
I don't know what's funnier: that twitter suspended me for a tweet from August that said "that's how Maine I am, k*ll me", or that twitter's policy when they think someone is at risk of self-harm is to cut them off from social networks by suspending them. lmaoooo pic.twitter.com/kndBbNMz4J
— Coffee Spoonie (@coffeespoonie) March 6, 2018
I’m gonna continue to add examples of it happening to people to prove my point. If it’s happened to you, please feel free to reply to this with a screenshot of what Twitter sent you. pic.twitter.com/yW7nztigYh
— THE TEMPORARILY SUSPENDED ORACLE (@madblackthot2) March 11, 2018
Twitter is suspending accounts for using what they consider trigger words that incite violence or promote self-harm, no matter what the context is, or how old the tweets are. (The middle screenshot is from a verified artist so truly nobody is exempt)
Examples: pic.twitter.com/uvAlpTFD83
— backup account (@madblackthot2) March 10, 2018
There are more examples as well. Not everyone who tweets "kill me" is getting suspended, so at least the algorithm is slightly more sophisticated than that. One explanation given is that when a user is reported for certain reasons, the system then searches through past tweets for specific keywords. Perhaps that works in some contexts, but clearly not all of them.
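Twitter hasn't said exactly how any of this works, but the behavior people describe is consistent with a context-blind keyword scan of an account's history that kicks in once a report comes in. Here's a minimal sketch of that kind of logic, purely to illustrate the failure mode; the keyword list and the flag_on_report function are hypothetical, not anything Twitter has published:

```python
# Hypothetical sketch of the reported behavior: once an account is reported,
# scan its entire tweet history for self-harm keywords with no regard for
# context, sarcasm, or how old the tweet is. This is NOT Twitter's actual
# code -- just an illustration of why context-blind matching sweeps up jokes.

SELF_HARM_KEYWORDS = {"kill me", "kill myself"}  # assumed example list

def flag_on_report(tweet_history: list[str]) -> list[str]:
    """Return every past tweet containing a flagged keyword, regardless of
    context or age."""
    return [
        tweet for tweet in tweet_history
        if any(keyword in tweet.lower() for keyword in SELF_HARM_KEYWORDS)
    ]

# A sarcastic joke from months ago gets flagged just like a genuine cry for help:
history = [
    "that's how Maine I am, kill me",   # sarcastic tweet from August
    "another Monday meeting, kill me",  # sarcasm
    "loving this weather today",
]
print(flag_on_report(history))  # -> both "kill me" tweets, context ignored
```

Anything along these lines would explain the pattern in the thread above: the scan only fires on some trigger, which is why not every "kill me" tweeter gets hit, but once it fires it can't tell a joke from a crisis.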
And, again, we end up in a situation where demanding that a social media platform do "more moderation!" to kill off bad accounts leads to lots of collateral damage in the dumbest possible way. And yet, at the same time, people are quickly finding new election propaganda Twitter bots sprouting up like weeds.
This is not to say that Twitter shouldn't be doing anything. The company is clearly trying to figure out what to do and how to handle some of these issues. The issue is that companies are inevitably going to be bad at this. And, yet, the constant push from politicians is to make them more and more legally responsible for not fucking up such things -- which is basically an impossible task. If Twitter were legally mandated to remove certain types of accounts, it's likely that we'd end up seeing many, many more examples of bad takedowns a la the "kill me" suspensions.
Filed Under: algorithms, content moderation, kill me, jokes, suspensions
Companies: twitter