Twitter Admits It Messed Up In Suspending Accounts Under Its New Policy, But Policies Like This Will ALWAYS Lead To Overblocking
from the who-didn't-see-that-coming dept
Last week, we called out a questionable move by Twitter to put in place a new policy banning images or videos of someone shared without their permission. The company claimed this was to prevent harassment, and also (for reasons I still don't understand) buried the fact that the policy was not supposed to apply to public figures or newsworthy media. However, as we pointed out at the time, the standard here was incredibly subjective and wide open to abuse. Indeed, that article included examples of the policy clearly being abused. And even as Fox News talking heads insisted the policy would only be used against conservatives, in actuality a bunch of alt-right/white nationalist/white supremacist accounts immediately saw it as the perfect way to get back at activists who had been calling out their behavior, leading to a mass brigading effort to "report" those activists for taking pictures and videos at white nationalist rallies and events.
In other words, exactly what we and tons of others expected.
And, a few days later, Twitter admitted it messed up the implementation of the policy -- though it doesn't appear to be rolling back the policy itself.
Twitter said Friday that it had mistakenly suspended accounts under a new policy following a flood of “coordinated and malicious reports” targeting anti-extremism researchers and journalists first reported Thursday by The Washington Post.
The company said it had corrected the errors and launched an internal review to ensure that its new rule — which allows someone whose photo or video was tweeted without their consent to request its removal — was “used as intended.”
[....]
In a statement Friday, Twitter spokesman Trenton Kennedy said that the company had been overwhelmed with a “significant amount” of malicious reports and that its “enforcement teams made several errors” in the aftermath.
What perplexes me, however, is that I know for a fact that Twitter has a bunch of smart and thoughtful people working on trust and safety issues who clearly would have known how this policy would play out in practice, and yet, for whatever reason, it was still rolled out the way it was. I don't know if that's because people underestimated how quickly and wildly the policy would be abused, or because others at the company simply overruled concerns raised by experts, or if it was something else entirely. Either way... it's a weird and surprising misstep for Twitter.
That said, it's also illustrative of a really important point that we've been trying to raise for ages, going back to the DMCA takedown process and running through all sorts of other policy debates regarding content moderation: if you give people tools to take down content, those tools will be abused. Always. That's not to say you should never moderate or never take down content; never doing so is impossible. There is always going to be some content that sites need to take down, whether for legal reasons or because it harms the integrity of the site (things like spam or harassment).
But any such policy opens itself up to abuse and dishonest reporting. And any company that has such policies (i.e., every company hosting third-party content) needs a plan in place not just to deal with abusive or problematic content on the site, but also to deal with abuse of the moderation process itself to silence voices that shouldn't actually be silenced.
Again, from my interactions with Twitter trust & safety people, I know they know this. And this is part of why I find the rollout of this policy so perplexing.
However, it's also an important lesson for policymakers in various state legislatures, in DC, and around the globe. There is so much effort these days to pressure (or require!) internet companies to remove "bad" content, but almost none of those policy plans take into account the ways in which such rules will be abused to silence reporting, silence marginalized voices, and silence those calling out abuses of power. Twitter's rollout of this policy has been a disaster (and one that could have been prevented), but at the very least it should be a warning to policymakers who seem to think they can design requirements to moderate certain content without bothering to explore the likelihood that those mandates will be abused to silence important speech.
Filed Under: content moderation, overblocking, photos, private information, private media, videos
Companies: twitter