Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved in their decisions. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Briefly Bans Russian Parody Accounts (2016)

from the parody-in-action dept

Summary: Twitter allows parody accounts to remain live (often over the protests of those parodied), provided they follow a narrow set of rules -- rules apparently intended to make sure everyone's in on the joke.

Here's what Twitter requires of users creating a parody account:

  • Bio: The bio should clearly indicate that the user is not affiliated with the subject of the account. Non-affiliation can be indicated by incorporating, for example, words such as (but not limited to) "parody," "fake," "fan," or "commentary." Non-affiliation should be stated in a way that can be understood by the intended audience.

  • Account name: The account name (note: this is separate from the username, or @handle) should clearly indicate that the user is not affiliated with the subject of the account. Non-affiliation can be indicated by incorporating, for example, words such as (but not limited to) "parody," "fake," "fan," or "commentary." Non-affiliation should be stated in a way that can be understood by the intended audience.

Unfortunately for the very popular Vladimir Putin parody account (@DarthPutinKGB), Twitter's moderators decided the account didn't strictly adhere to the "make it obvious" policies covering accounts like these.

In May 2016, Twitter suspended the account for its alleged violations.

This ban immediately resulted in backlash from other Twitter users who were fans of the account -- one that made it clear (albeit without all the specifics demanded by Twitter) that it was a parody. Disappointed fans included Estonian president Toomas Hendrik Ilves and Radio Free Europe, which published a collection of the account's best tweets.

While the ban was technically justified by the letter of Twitter's rules, the end result was a lot of Twitter users wondering whether Twitter's moderators were capable of recognizing obvious parody without an account's bio copying the platform's parody guidelines word-for-word.

Decisions to be made by Twitter:

  • Is the banning of harmless parody accounts an acceptable tradeoff for protecting users from impersonation?

  • Should the parody guidelines be altered to make it easier to identify parody accounts?

  • Should moderators be allowed to make judgment calls if an account is clearly a parody but does not strictly adhere to the parody account guidelines?

Questions and policy implications to consider:

  • Should Twitter use more caution when moderating parody accounts whose parodic nature isn't immediately clear?

  • Is impersonation too much of a problem on the platform to ever relax the standards governing this kind of humor?

Resolution: Twitter swiftly reinstated the account following the backlash. The account remains active, despite its new bio not explicitly following the Twitter Rules for parody accounts.

But this wasn't the first time Twitter had moderated accounts parodying Russian government officials. A similar thing happened roughly a year earlier, when Twitter blocked an account parodying powerful Russian oil executive Igor Sechin, apparently in response to a Russian government complaint that the satirical account "violated privacy laws." This happened despite the fact that the user's handle was @IgorSechinEvilTwin, making it clear it was a parody rather than an attempt to impersonate the real Igor Sechin.

Originally published on the Trust & Safety Foundation website.


Filed Under: content moderation, darthputin, parody, russia
Companies: twitter


Reader Comments



  • That Anonymous Coward (profile), 2 Apr 2021 @ 4:28pm

    "Non-affiliation should be stated in a way that can be understood by the intended audience."

    And therein lies the problem.

    If moderators were not the intended audience, they misread things.

    It's like that one creepy guy at the office party who doesn't laugh at the joke everyone else does, because he lacks the required frame of reference -- he's never watched the Simpsons.

    It is impossible for humans to look at 1 tweet removed from all context & divine intent.

    More than once, back when I was allowed to use the platform (who, me, pissed off?), I rooted for an ELE just so I could see if the next species nature lifted up would be more interesting to watch.

    ELE - Extinction Level Event

    A strict reading has me calling for genocide of the entire human race... that seems bad.

    Add the piece of the puzzle that I am 'an immortal sociopath tired of watching you hairless apes making the same mistakes since time started'.

    Add the piece that these comments are often made in response to media stories of humans being human.

    Man carries baby into elephant pen, nearly killed, drops the kid, all to get a fscking selfie.

    Man kills 9 yr old son when the sled HE TIED TO THE BACK OF THE CAR slammed into a parked car.

    Mitch McConnell reelected after he rips out a 105 yr old woman's heart and eats it to seal a dark pact for another 500 yrs of power.

    Something horrific happens, people tut & fret, people demand change, 3 weeks later it happens again & people are shocked, just shocked.

    I'm a go-getter sort of immortal, be the ELE you wish to see in the world.

    Responses vary based on how much of the puzzle of TAC you are aware of.
    We follow each other... oh, it's a day that ends in Y.
    Someone you know follows & RTs me... oh, it's that weird guy again.
    Someone clueless sees it & I am become death, destroyer of humanity. I must be reported and stopped.

    Call for an ELE, no problemo
    Call yourself faggot, GTFO

    Twitter: where the rules are made up & aren't followed anyway.


  • Anonymous Coward, 3 Apr 2021 @ 7:18am

    This is ... much more stream-of-consciousness than usual for you. Have you been reading Twitter again?


  • cynoclast (profile), 3 Apr 2021 @ 10:21am [comment flagged by the community]

    "Content Moderation" is newspeak for censorship.

    NT


    • Stephen T. Stone (profile), 3 Apr 2021 @ 10:27am

      Moderation is a platform/service owner or operator saying “we don’t do that here”. Personal discretion is an individual telling themselves “I won’t do that here”. Editorial discretion is an editor saying “we won’t print that here”, either to themselves or to a writer. Censorship is someone saying “you won’t do that anywhere” alongside threats or actions meant to suppress speech.

      Now, which one of these applies to the incident in question?


    • Toom1275 (profile), 3 Apr 2021 @ 10:42am

      Re: "Content Moderation" is newspeak for censorship.

      [Asserts acts not in evidence]


    • That Anonymous Coward (profile), 3 Apr 2021 @ 3:58pm

      Re: "Content Moderation" is newspeak for censorship.

      LMFTFY

      "Content Moderation" is newspeak for censorship, but only when it happens to me, when it happens to people who hold views I disagree with its perfectly fine because they shouldn't be allowed to say those things ever.


    • That One Guy (profile), 3 Apr 2021 @ 6:11pm

      The conservative that cried 'censorship!'

      By all means, keep pushing that dishonest definition. All you're doing is watering the word down, so that on the off chance you actually are censored at some point and seek sympathy from those around you, all you'll get is an indifferent shrug or support for the one who silenced you. You'll have trained people to associate any claim of 'censorship' with 'suffered a penalty for breaking the rules and/or acting like an ass on private property', and you'll have only yourself and your fellow 'victims' to blame for that response.


  • Anonymous Coward, 5 Apr 2021 @ 8:15am

    "Moron in a hurry" test would solve this

    Unless a given moderator is a moron in a hurry.


