Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs those decisions involve. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Twitter Briefly Restricts Account Of Writer Reporting From The West Bank (2021)

from the mistakes-were-made dept

Summary: In early May 2021, writer and researcher Mariam Barghouti was reporting from the West Bank on escalating conflicts between Israeli forces and Palestinian protestors, and making frequent social media posts about her experiences and the events she witnessed. Amidst a series of tweets from the scene of a protest, shortly after one in which she stated “I feel like I’m in a war zone,” Barghouti’s account was temporarily restricted by Twitter. She was unable to post new tweets, and her bio and several of her recent tweets were replaced with a notice stating that the account was “temporarily unavailable because it violates the Twitter Media Policy”.

The incident was highlighted by other writers, some of whom noted that the nature of the restriction seemed unusual, and the incident quickly gained widespread attention. Fellow writer and researcher Joey Ayoub tweeted that Barghouti had told him the restriction would last for 12 hours according to Twitter, and expressed concern for her safety without access to a primary communication channel in a dangerous situation.

The restriction was lifted roughly an hour later. Twitter told Barghouti (and later re-stated to VICE’s Motherboard) that the enforcement action was a “mistake” and that there was “no violation” of the social media platform’s policies. Motherboard also asked Twitter to clarify which specific policies were initially believed to have been violated, but says the company “repeatedly refused”.

Company Considerations:

  • In cases where enforcement actions are taken involving sensitive news reporting content, how can the reasons for enforcement be better communicated to both the public and the reporters themselves?
  • How can the platform identify cases like these and apply additional scrutiny to prevent erroneous enforcement actions?
  • What alternatives to account suspensions and the removal of content could be employed to reduce the impact of errors?
  • How can enforcement actions be applied with consideration for journalists’ safety in situations involving the live reporting of dangerous events?

Issue Considerations:

  • With so much important news content, especially live reporting, flowing through social media platforms, what can be done to prevent policy enforcement (erroneous or otherwise) from unduly impacting the flow of vital information?
  • Since high-profile enforcement and reversal decisions by platforms are often influenced by widespread public attention and pressure, how can less prominent reporters and other content creators protect themselves?

Resolution: Though the account restriction was quickly reversed by Twitter, many observers did not accept the company’s explanation that it was an error, instead saying the incident was part of a broader pattern of social media platforms censoring Palestinians. Barghouti said:

"I think if I was not someone with visibility on social media, that this would not have garnered the attention it did. The issue isn’t the suspension of my account, rather the consideration that Palestinian accounts have been censored generally but especially these past few weeks as we try to document Israeli aggressions on the ground."



Filed Under: content moderation, israel, journalism, mariam barghouti, palestine, reporting, warzones
Companies: twitter


Reader Comments



  • That Anonymous Coward (profile), 8 Dec 2021 @ 4:18pm

    " asked Twitter to clarify which specific policies were initially believed to have been violated, but says the company “repeatedly refused”. "

    Because never explaining how the screw-up came into being will make sure the bad guys can't learn from it...
    Or they had no justification beyond 'well, maybe this is bad' & it's better to nuke them from orbit & 'apologize' later.

    One does wonder how many of these decisions could pass the 'teddy bear test'.
    You make them explain the situation to a teddy bear like it was another person so they can hear what they are saying & thinking. Often they correct their issue on their own without need to involve others.
    Instead, wide swaths get screwed till someone makes the right noise that gets attention & suddenly it was an oopsie & we have nothing further to say on the subject.


    • Anonymous Coward, 9 Dec 2021 @ 10:25am

      Re:

      Ah, yes. The "bad guys" who want to report from war zones. Can't have that. Cause they're bad.

      My vote goes for they had no justification.


  • migi (profile), 9 Dec 2021 @ 5:41am

    You'd think if it was an algorithm error they'd just blame the algorithm, but instead they say that the enforcement action was a mistake. So by implication the original enforcement was done by a human moderator.

    So did the human click the wrong account to suspend, did they make a bad judgement call, or was it a pattern of behaviour targeting certain types of people?
    If the community believes there is a pattern of censoring Palestinians, is it a cultural problem within one segment of the moderation team, or across the whole team?

    "How can the platform identify cases like these and apply additional scrutiny to prevent erroneous enforcement actions?"

    This depends on whether it was an automated moderation error or a human enforcement error.
    A very simple dividing line would be to have any automated moderation action reviewed by a human if it applies to a verified account.
    If the issue is human error or malice, then you could have more than one person review each action. However, that would double the number of humans you need, which would be hard to justify if the scale of the problem is small.


