Twitter Suspends Reporter's Account... After He Gets Targeted By Russian Twitter Bots
from the nice-work,-geniuses dept
Over the last few weeks, we've written a number of times about how systematically bad internet platforms are at dealing with abuse online. This is not to mock the platforms, even though many of the decisions are comically bad, but to note that such failures are inevitable at the scale these platforms operate -- and to remind people that this is why it's dangerous to demand that these companies be legally liable for policing speech on their platforms. It won't end well. Just a few weeks ago, we wrote about how Twitter suspended Ken "Popehat" White for posting an email threat he'd received (Twitter argued he was violating the privacy of the guy threatening him). From there, we wrote about a bunch of stories of Facebook and Twitter punishing people for documenting abuse that they had received.
But this latest story is crazier still, as it appears that abusers were exploiting this weakness on purpose. In this case, the story involves Russian Twitter bots. First, the Atlantic Council wrote about Russian Twitter trolls trying to shape a narrative after the Nazi event in Charlottesville. In response, those very same Twitter bots and trolls started bombarding the Twitter feeds of the researchers. And here's where the story gets even weirder. When Joseph Cox, writing for The Daily Beast, wrote about this (at the link above), those same Twitter bots started targeting him too.
And... that caused Twitter to suspend his account. No, really.
“Caution: This account is temporarily restricted,” a message on my account read Tuesday. “You’re seeing this warning because there has been some unusual activity from this account,” it continued.
Again, it's not hard to see how this happened. Cox's Twitter account suddenly took on a bunch of bot followers, many of whom started retweeting him. From Twitter's perspective, it's easy to see how that looks like someone gaming the system -- possibly buying up fake followers and fake retweets. But here, it appears to have been done to target the user, rather than to falsely boost him. After all, it's completely understandable why Twitter would have a system that flags situations where a ton of fake followers suddenly start following someone and retweeting them. That would be a clear pattern indicating spam or something nefarious. And, in designing the system, you might think that such a thing would never be used to harm someone -- but by building in a mechanism that recognizes this pattern and suspends the account, you've created a weapon that will be gamed.
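The flaw described above is easy to see in miniature. Here's a minimal sketch of a *hypothetical* engagement-fraud heuristic of the kind the paragraph describes (this is not Twitter's actual system; the thresholds, field names, and `looks_like_engagement_fraud` function are all illustrative assumptions). The key problem is that the account being flagged is the *recipient* of the suspicious activity, so a hostile botnet can trigger the check against anyone:

```python
from dataclasses import dataclass, field

@dataclass
class AccountActivity:
    # Hypothetical per-account counters over a short window (e.g. 24 hours)
    new_followers: int = 0
    retweets_received: int = 0
    new_follower_account_ages_days: list = field(default_factory=list)

def looks_like_engagement_fraud(activity: AccountActivity,
                                follower_spike_threshold: int = 500,
                                retweet_spike_threshold: int = 200,
                                young_account_ratio: float = 0.8) -> bool:
    """Flag an account when a burst of engagement comes mostly from
    very new accounts -- a naive proxy for purchased followers/retweets.

    The weaponizable flaw: the flagged account is the *target* of the
    burst, so bots can make any victim match this pattern."""
    if activity.new_followers < follower_spike_threshold:
        return False
    if activity.retweets_received < retweet_spike_threshold:
        return False
    ages = activity.new_follower_account_ages_days
    young = sum(1 for a in ages if a < 30)  # followers under 30 days old
    return bool(ages) and young / len(ages) >= young_account_ratio

# A journalist swarmed by a botnet looks identical to someone buying boosts:
victim = AccountActivity(
    new_followers=1200,
    retweets_received=900,
    new_follower_account_ages_days=[3] * 1100 + [400] * 100,
)
print(looks_like_engagement_fraud(victim))  # True -- account gets restricted
```

Nothing in the signal distinguishes "bought fake engagement" from "was attacked by fake engagement," which is exactly why a detector designed against spammers ends up punishing their targets.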
Cox eventually got his account back and got an apology ("for the inconvenience") from Twitter. But, once again, for everyone out there demanding that these platforms be more forceful in removing users, or (worse) arguing that there should be legal liability on them if they fail to kick off people expeditiously, be careful what you wish for. You may get it... and not like the results.
Filed Under: fraud, joseph cox, moderation, russian bots, trust & safety, twitter bots
Companies: twitter