Twitter's New 'Private Information' Policy Takes Impossible Content Moderation Challenges To New, Ridiculous Levels
from the the-end-of-everything dept
I've been working on a post about a whole bunch of interesting (and good!) things that Twitter has been doing over the last few months. I'm not sure when that post will be ready, but I need to interrupt that process to note a truly head-scratching change in Twitter's moderation policy announced this week. Specifically, Twitter announced that its "private information policy" has been expanded to include "media": Twitter says it will now remove photos and videos that are posted "without the permission of the person(s) depicted." In other words, Twitter has turned the old, meme-ified "I'm in this photo and I don't like it" into official policy for taking down content.
Buried deeper in the rules is a very subjective conditional:
This policy is not applicable to media featuring public figures or individuals when media and accompanying text are shared in the public interest or add value to public discourse.
But that's going to lead to some very, very big judgment calls about (a) who is a "public figure" and (b) what is "in the public interest." And early examples suggest that Twitter's Trust & Safety team is failing this test.
I can understand the high-level, first-pass thinking that leads to this policy: photos or videos taken surreptitiously and then used to mock or harass someone raise real concerns, and there are perfectly reasonable policy choices to be made about how to handle those scenarios. But how do you distinguish those rare circumstances from a much wider set of cases where people may not have given permission to be in photos or videos, yet keeping that content online is clearly beneficial? These range from the obvious (incidental background shots of people walking by in a crowded place) to the much more concerning (individuals doing journalism, recording important events). Twitter's insistence that the policy won't apply to "public interest" content is hardly reassuring. We've seen those claims before, and they rarely hold up in practice.
The most obvious example of this is one of the biggest stories of 2020: the video recording of police officer Derek Chauvin kneeling on George Floyd's neck until he died. In theory, under the broadness of this policy, that video would be taken down from Twitter. There are lots of other examples as well, such as Amy Cooper, who was filmed in Central Park calling the police on Christian Cooper (no relation) while he was bird-watching. There are plenty of other cases where people are filmed in public, without their permission, precisely to reveal important things that have happened in the world. Law enforcement, for example, relied on social media to help identify the people who stormed the Capitol on January 6th. It seems that under this new policy, all those photos of January 6th insurrectionists could be removed from Twitter. Is sharing them in the public interest? Depends on who you ask, I imagine...
For years we've seen tons of people abusing other systems to take down content they didn't like. There was the part-owner of the Miami Heat who obtained the copyright to an unflattering photo just so he could sue over it. Or the revenge porn extortionist who tried to force stories about him offline with copyright notices. In Europe, we've seen similar abuses of the "right to be forgotten" used to memory-hole news stories.
And here, Twitter is setting itself up to take down any such photo or video upon request? That seems wide open to massive abuse. Indeed, there are already a number of reports of the policy being used to silence activists and researchers:
Predictably, @Twitter's new "private media" policy is being used to protect white nationalists from public scrutiny.
Twitter has locked Atlanta Antifascists out of their account, over a 2018 tweet about a White Student Union racist organizer. @afainatl are appealing. Disgusting. pic.twitter.com/eUS4P2bBHU
— Atlanta Anti-Racist News (@ATLantiracism) December 1, 2021
URGENT: As we feared, @TwitterSafety is already locking and suspending the accounts of extremism researchers under its new "Private Media" policy.
The video is from September (predating the policy) and shows two right-wing extremists IN PUBLIC, planning violent assaults. pic.twitter.com/dp7zlt1u4r
— Chad Loder (they/them) (@chadloder) November 30, 2021
.@TwitterSafety just forced photojournalist Kelly Stuart (@SkySpider_) to remove a video under their new "private media" policy.
The video shows two right-wing extremists (in public) planning a criminal assault on reporter @emilymolli, documented here: https://t.co/o5Zjj26zN0
— Chad Loder (they/them) (@chadloder) December 1, 2021
NEW: A Minneapolis activist has been targeted under @TwitterSafety's new Private Media policy for posting a screenshot of a public Facebook post by a prominent local landlord who runs a public, 25,000-member crime watch group.
The "private media" is a post linking to a GoFundMe. pic.twitter.com/ZuJ4KTthUg
— Chad Loder (they/them) (@chadloder) December 1, 2021
Yes, some of the examples above may be edge cases with more nuance than the people posting them let on, but as we've seen with copyright and the right to be forgotten: give people a tool to get information removed from social media, and it will be widely abused to hide bad behavior.
I'm honestly perplexed that Twitter implemented a policy this broad, this difficult to enforce, and this open to abuse. It seems entirely unlike the more thoughtful trust & safety moves the company has made over the past few years.
Filed Under: content moderation, media, newsworthy, photos, private information, public figure, trust & safety, videos
Companies: twitter