Content Moderation Case Study: Twitter Removes 'Verified' Badge In Response To Policy Violations (2017)
from the verified-or-a-stamp-of-approval? dept
Summary: Many social networks have allowed users to adopt a pseudonym as their identity on the network. Since users could pick whatever name they wanted, they could pretend to be someone else, creating certain challenges for those platforms. For example, on sites that allowed pseudonyms, how could the platform determine who was the actual person and who was merely an impostor? Some companies, such as Facebook, went the route of requiring users to use their real names. Twitter went another way, allowing pseudonyms.
But what can a company do when there are multiple accounts claiming to be the same, often famous, person?
In 2009, Twitter began experimenting with a program to “verify” celebrities.
The initial intent of this program was to identify which Twitter account actually belongs to the person or organization of that Twitter handle (or name). Twitter’s announcement of this feature explains it in straightforward terms:
With this feature, you can easily see which accounts we know are 'real' and authentic. That means we've been in contact with the person or entity the account is representing and verified that it is approved. (This does not mean we have verified who, exactly, is writing the tweets.)
This also does not mean that accounts without the 'Verified Account' badge are fake. The vast majority of accounts on the system are not impersonators, and we don't have the ability to check 100% of them. For now, we've only verified a handful of accounts to help with cases of mistaken identity or impersonation.
From the start, Twitter denoted “verified” accounts with a now-industry-standard “blue checkmark.” In the initial announcement, Twitter noted that this was experimental, and the company did not have time to verify everyone who wanted to be verified. It was not until 2016 that Twitter first opened up an application process for anyone to get verified.
A year later, in late 2017, the company closed the application process, noting that people were interpreting “verification” as a stamp of endorsement, which it had not intended. Recognizing this unintended perception, Twitter began removing verification checkmarks from accounts that violated certain policies, starting with high-profile white supremacists.
While this policy received some criticism for “blurring” the line between speakers and speech, it was a recognition of concerns that the checkmark was seen as an “endorsement” of people whose views and actions (even those off of Twitter) Twitter did not wish to endorse. In that way, removing verification became a content moderation tool: a kind of subtle negative endorsement.
Even though those users were “verified” as authentic, Twitter recognized that being verified was a privilege and that removing it was a tool in the content moderation toolbox. Rather than only suspending or terminating accounts, the company said it would also consider removing verification from accounts that violated its new hateful conduct and abusive behavior policies.
Company Considerations:
- What is the purpose of a verification system on social media? Should it just be to prove that a person is who they say they are, or should it also signal some kind of endorsement? How should the company develop a verification system to match that purpose?
- If the public views verification as a form of endorsement, how important is it for a company to reflect that in its verification program? Are there any realistic ways to have the program not be considered an endorsement?
- Under what conditions does it make sense to use removal of verification as a content moderation tool? Is removing verification an effective content moderation tool? If not, are there ways to make it more effective?
Issue Considerations:
- What are the consequences of using the verification (and de-verification) process as a content moderation tool to “punish” rule violators?
- What are both the risks and benefits of embracing verification as a form of endorsement?
- Are there other subtle forms of content moderation similar to the removal of privileges like the blue checkmark, and how effective can they be?
Resolution: It took many years for Twitter to reopen its verification system, and even then it did so only in a very limited manner. The system quickly ran into problems, as journalists discovered multiple fake accounts that had been verified.
However, a larger concern with the new verification rules is that they allow for significant subjective decision-making by the company in how the rules are applied. Activist Albert Fox Cahn explained how the new program makes it “too easy” for journalists to get verified but “too difficult” for activists, showing the challenging nature of any such program.
“When Angela Lang, founder and executive director of the Milwaukee-based civic engagement group BLOC, decided to get a checkmark, she thought, ‘I've done enough. Let’s check out how to be verified.’ Despite Lang and BLOC’s nationally recognized work on Black civic engagement, she found herself shut out. When Detroit-based activist and Data 4 Black Lives national organizing director Tawana Petty applied, her request was promptly rejected. Posting on the platform that refused to verify her, Petty said, ‘Unbelievable that creating a popular hashtag would even be a requirement. This process totally misses the point of why so many of us want to be verified.’ Petty told me, ‘I still live with the anxiety that my page might be duplicated and my contacts will be spammed.’ Previously, she was forced to shut down pages on other social media platforms to protect loved ones from this sort of abuse.
“According to anti-racist economist Kim Crayton, verification is important because ‘that blue check automatically means that what you have to say is of value, and without it, particularly if you’re on the front lines, particularly if you’re a Black woman, you’re questioned.’ As Lang says, ‘Having that verification is another way of elevating those voices as trusted messengers.’ According to Virginia Eubanks, an associate professor of political science at the University at Albany, SUNY, and author of Automating Inequality, ‘The blue check isn't about social affirmation, it’s a safety issue. Someone cloning my account could leave my family or friends vulnerable and could leave potential sources open to manipulation.’” — Albert Fox Cahn
Originally published to the Trust & Safety Foundation website.
Filed Under: case study, content moderation, endorsement, verification badges
Companies: twitter