Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Removes 'Verified' Badge In Response To Policy Violations (2017)

from the verified-or-a-stamp-of-approval? dept

Summary: Many social networks have allowed users to adopt a pseudonym as their identity on the network. Since users could use whatever name they wanted, they could pretend to be someone else, creating challenges for those platforms. For example, how could a site that allowed pseudonyms determine which account belonged to the actual person and which was merely an impostor? Some companies, such as Facebook, went the route of requiring users to use their real names. Twitter went another way, allowing pseudonyms.

But what can a company do when there are multiple accounts of the same, often famous, person?

In 2009, Twitter began experimenting with a program to “verify” celebrities.

The initial intent of this program was to identify which Twitter account actually belonged to the person or organization it claimed to represent. Twitter’s announcement of the feature explains it in straightforward terms:

With this feature, you can easily see which accounts we know are 'real' and authentic. That means we've been in contact with the person or entity the account is representing and verified that it is approved. (This does not mean we have verified who, exactly, is writing the tweets.)

This also does not mean that accounts without the 'Verified Account' badge are fake. The vast majority of accounts on the system are not impersonators, and we don't have the ability to check 100% of them. For now, we've only verified a handful of accounts to help with cases of mistaken identity or impersonation.

From the start, Twitter denoted “verified” accounts with what is now the industry-standard “blue checkmark.” In the initial announcement, Twitter noted that the feature was experimental and that the company did not have time to verify everyone who wanted to be verified. It was not until 2016 that Twitter first opened up an application process for anyone to get verified.

A year later, in late 2017, the company closed the application process, noting that people were interpreting “verification” as a stamp of endorsement, which it had not intended. Recognizing this unintended perception, Twitter began removing verification checkmarks from accounts that violated certain policies, starting with high-profile white supremacists.

While this policy received some criticism for “blurring” the line between speakers and speech, it was a recognition of concerns that the checkmark was seen as an “endorsement” of someone whose views and actions (even those off of Twitter) were not ones Twitter wished to endorse. In that way, removing verification became a content moderation tool: a subtle form of negative endorsement.

Even though those users were “verified” as authentic, Twitter recognized that being verified was a privilege and that removing it was a tool in the content moderation toolbox. Rather than suspending or terminating accounts, the company said that it would also consider removing the verification on accounts that violated its new hateful conduct and abusive behavior policies.

Company Considerations:

  • What is the purpose of a verification system on social media? Should it just be to prove that a person is who they say they are, or should it also signal some kind of endorsement? How should the company develop a verification system to match that purpose? 
  • If the public views verification as a form of endorsement, how important is it for a company to reflect that in its verification program? Are there any realistic ways to have the program not be considered an endorsement?
  • Under what conditions does it make sense to use removal of verification as a content moderation tool? Is removing verification an effective content moderation tool? If not, are there ways to make it more effective?

Issue Considerations:

  • What are the consequences of using the verification (and de-verification) process as a content moderation tool to “punish” rule violators?
  • What are both the risks and benefits of embracing verification as a form of endorsement?
  • Are there other subtle forms of content moderation similar to the removal of privileges like the blue checkmark, and how effective can they be?

Resolution: It took many years for Twitter to reopen its verification system, and when it did, it did so only in a very limited manner. The system has already run into problems, as journalists discovered multiple fake accounts that were verified.

However, a larger concern over the new verification rules is that they allow for significant subjective decision-making by the company over how the rules are applied. Activist Albert Fox Cahn explained how the new program is making it “too easy” for journalists to get verified but “too difficult” for activists, showing the challenging nature of any such program.

“When Angela Lang, founder and executive director of the Milwaukee-based civic engagement group BLOC, decided to get a checkmark, she thought, ‘I've done enough. Let’s check out how to be verified.’ Despite Lang and BLOC’s nationally recognized work on Black civic engagement, she found herself shut out. When Detroit-based activist and Data 4 Black Lives national organizing director Tawana Petty applied, her request was promptly rejected. Posting on the platform that refused to verify her, Petty said, ‘Unbelievable that creating a popular hashtag would even be a requirement. This process totally misses the point of why so many of us want to be verified.’ Petty told me, ‘I still live with the anxiety that my page might be duplicated and my contacts will be spammed.’ Previously, she was forced to shut down pages on other social media platforms to protect loved ones from this sort of abuse.

“According to anti-racist economist Kim Crayton, verification is important because ‘that blue check automatically means that what you have to say is of value, and without it, particularly if you’re on the front lines, particularly if you’re a Black woman, you’re questioned.’ As Lang says, ‘Having that verification is another way of elevating those voices as trusted messengers.’ According to Virginia Eubanks, an associate professor of political science at the University at Albany, SUNY, and author of Automating Inequality, ‘The blue check isn't about social affirmation, it’s a safety issue. Someone cloning my account could leave my family or friends vulnerable and could leave potential sources open to manipulation.’” — Albert Fox Cahn

Originally published to the Trust & Safety Foundation website.


Filed Under: case study, content moderation, endorsement, verification badges
Companies: twitter


Reader Comments


  1. Anonymous Coward, 15 Sep 2021 @ 3:47pm

    In other words, Twitter recognized that being "verified" was being interpreted as being "endorsed" and sought to only "verify" those accounts whose content it actually did endorse.

  2. That Anonymous Coward (profile), 15 Sep 2021 @ 4:03pm

    "creating a popular hashtag would even be a requirement."

    Da Faq is this shit?
    You can be an important person but since you never managed to make a hashtag trend we have no time to bother with you.

    They missed the bus on checkmarks, they let people take them the wrong way for far too long & then kept playing keep away from some people while they approved fake accounts.

    I mean I managed to get Tiff v Twitter to trend one day, does that mean I'm worthy of review to be verified?
    How in the hell would they manage to verify I am the one true TAC anyways?
    Is it like a secret society where other people with blue checkmarks can vouch for a person being the pseudonym they claim to be?

  3. Koby (profile), 15 Sep 2021 @ 4:31pm (flagged by the community)

    Re:

    How in the hell would they manage to verify I am the one true TAC anyways? Is it like a secret society where other people with blue checkmarks can vouch for a person being the pseudonym they claim to be?

    For twitter, authenticity doesn't matter as much as agreement does.

  4. That One Guy (profile), 15 Sep 2021 @ 5:21pm

    Re: Re:

    Which content that they don't agree with would that be again, and as always be specific.

  5. Anonymous Coward, 15 Sep 2021 @ 7:25pm

    Bad implementation by Twitter leads to this:

    • “According to anti-racist economist Kim Crayton, verification is important because ‘that blue check automatically means that what you have to say is of value, and without it, particularly if you’re on the front lines, particularly if you’re a Black woman, you’re questioned.’ As Lang says, ‘Having that verification is another way of elevating those voices as trusted messengers.’ According to Virginia Eubanks, an associate professor of political science at the University at Albany, SUNY, and author of Automating Inequality, ‘The blue check isn't about social affirmation, it’s a safety issue. Someone cloning my account could leave my family or friends vulnerable and could leave potential sources open to manipulation.’” — Albert Fox Cahn *

    Look, these things are important, but the Blue Check Mark™ addresses none of them, nor should it. Especially a safety issue, wow. That's what you're counting on?

    Seriously, if they weren't into the whole brevity thing, they'd have a check mark or whatever with the text, "This person is who they say they is," or something unambiguously to that effect, and only that effect. That should be pretty darn clear.

  6. Toom1275 (profile), 16 Sep 2021 @ 1:38am

    Re: Re:

    [Projects facts not in evidence]

  7. PaulT (profile), 16 Sep 2021 @ 4:35am

    Re: Re:

    You're free to go to their many competitors any time you want. If you and your klan buddies aren't welcome anywhere the rest of us hang out, that's on you.

  8. Anonymous Coward, 16 Sep 2021 @ 5:29am

    One of the greatest satirists of the 20th century, Jaroslav Hašek, wrote about taking back blue checkmarks back in the 1920s:

    '...he was the first in his regiment to have his leg torn off by a shell. He got an artificial leg and began to boast about his medal everywhere and to say he was the first and very first war cripple in the regiment. Once he came to the Apollo at Vinohrady and had a row with butchers from the slaughterhouse. In the end they tore off his artificial leg and clouted him over the head with it. The man who tore it off him didn't know it was an artificial one and fainted with fright. At the police station they put Mlicko's leg back again, but from then on he was furious with his Great Silver Medal for valour and went and pawned it in the pawnshop, where they seized him and the medal too. He had some unpleasantness as a result.

    There was a kind of special court of honour for disabled soldiers and it sentenced him to be deprived of his Silver Medal and later of his leg as well . . .'

    'How do you mean?'

    'Awfully simple. One day a commission came to him and informed him that he was not worthy of having an artificial leg. Then they unscrewed it, took it off and carried it away.'
