Content Moderation At Scale Is Impossible: That Time Twitter Nazis Got A Reporter Barred From Twitter Over Some Jokes

from the free-speech? dept

Reporter Jon Schwarz, over at The Intercept, has yet another story of content moderation at scale gone wrong, focusing this time on Twitter and his own account. It seems that a bunch of white supremacists on Twitter got mad at him, found an old joke, took it out of context, reported it en masse, and Twitter blocked him over it. Schwarz's story is worth reviewing in detail, but I think he takes the wrong message away from it. His take is, more or less, that Twitter doesn't much care about lowly users and can't be bothered to understand the context of things (we'll get to the details of the spat in a moment):

It would be easy to interpret this as active contempt by Twitter for its users. But it’s more likely to be passive indifference. Like any huge corporation, Twitter is focused on the needs of its customers, which are its advertisers. By contrast, Twitter’s users are not its customers. They’re its product. Grocery stores don’t care if a can of soup complains about being taken off the shelf.

Similarly, contrary to speculation by some, I don’t think CEO Jack Dorsey secretly sympathizes with his Nazi user base. He probably just enjoys being a billionaire. As he’s said, “from a simple business perspective … Twitter is incentivized to keep all voices on the platform.” Whatever else you want to say about Nazis, they definitely drive engagement, which in turn lets Twitter charge higher prices for diaper ads.

I even sympathize a little bit with Twitter’s conundrum. They aspired to be a globe-straddling highly profitable monopoly that had no responsibility for what their users did. This was a circle that couldn’t be squared. Proctor & Gamble doesn’t want its promoted tweets to appear right above hypothetical user @hh1488 livestreaming himself massacring 17 congregants at an Albuquerque mosque.

I was simply caught in the natural dynamics that flow from this contradiction. The structure of multinational publicly-traded corporations inevitably puts them somewhere politically from the center-right to the hard-right.

While it's an interesting take, I'd argue that it gets nearly every important point confused. Indeed, I'd argue that Schwarz is making the very same mistake that conservatives who blame Twitter for supposedly anti-conservative bias are making: looking just at their own situations and the content moderation choices they're aware of, and imparting on the company some sort of natural political motive. Twitter is neither "liberal" nor is it, as Schwarz says, "center-right to the hard-right." It's a company. It doesn't have political beliefs like that. (Relatedly, in the past, I've made it quite clear how misleading and unhelpful the whole "if you're not paying, you're the product" line is.)

So, now let's dig into the specifics of what happened to Schwarz, and why, rather than this being some sort of political bias at play or (as Schwarz hints at in his opening) Twitter bending over to appease white supremacists, it's yet another manifestation of Masnick's Impossibility Theorem... that it's impossible to do content moderation well at scale.

What happened here was that Schwarz first made a joke about Fox News host Tucker Carlson, who appeared to be doing some dog whistling.

As Schwarz notes, he was referencing a joke from the sitcom "30 Rock":

In fact, I was referring to a famous “30 Rock” joke, which had now assumed human form in Carlson. When NBC executive Jack Donaghy decides that TGS, the TV-show-within-the-show, doesn’t have wide enough appeal, he complains to its head writer Liz Lemon:

JACK: The television audience doesn’t want your elitist, East Coast, alternative, intellectual, left-wing —

LIZ: Jack, just say Jewish, this is taking forever.

That's not the joke he got blocked over, though. Instead, former KKK leader and all-around awful person David Duke took that joke and paired it with another out-of-context Schwarz joke from a few years earlier to mock him. I'm not linking to Duke's tweet, but the tweet below is the joke Duke paired with the one above to say "These are not good people, folks." Which, truly, is some world-class projection.

In case you're unable to load the image, Schwarz's 2015 tweet had said:

you know, it actually would make much more sense if jews and muslims joined forces to kill christians.

As Schwarz explained, in context, this is actually the kind of snarky reply that Duke would have historically agreed with, because it was part of a longer thread criticizing Israel (something Duke does frequently, though perhaps with other motivations in mind):

But Duke is such a cretin that it never occurred to him that my 2015 joke was exactly what he adores: criticism of Israel. That’s hopefully clear even out of context. But thanks to Twitter’s advanced search function, you can see that I was talking specifically in the context of two events — the publication of photographs of Gaza taken after Israel’s bombing campaign in Operation Protective Edge, and the murder of three Muslim students in Chapel Hill, North Carolina. That week I also wistfully suggested, “how about nobody kill anybody and then we go from there.”

Either way, lots and lots of Carlson and Duke fans reported that particular tweet to Twitter (and, of course, bombarded Schwarz with the kind of Twitter barrage that seems all too common these days), and Twitter took action over it.

I have no great insight into how this particular decision went down, but having spent a lot of time over the past few years talking with content moderation folks at Twitter (and other social media platforms), what's a lot more likely than Schwarz's theory is simply this: Twitter has constantly had to tweak its rules over time, and because there is a decently large number of people on the "trust and safety" team, those rules need to be easy to understand and to apply consistently -- which means that weighing context is generally not possible. Instead, there will be bright-line rules -- things like "no references to killing or violence directed at specific protected classes of people." That's the kind of rule you could easily see included in just about any set of content moderation policies.

And, when looked at through that lens, Schwarz's tweet, even in jest, would trip that line. It's a statement about killing people of a particular religion.
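
To make that concrete, here is a deliberately oversimplified sketch -- hypothetical code, not anything Twitter actually runs -- of what such a bright-line rule looks like once it's reduced to something a large team (or a machine) can apply the same way every time:

# Hypothetical bright-line rule: flag any reference to killing aimed at a
# protected class. Note that nothing here knows about jokes, sarcasm, or
# the rest of the thread -- the rule only ever sees the words in the tweet.
VIOLENCE_TERMS = {"kill", "killing", "massacre"}
PROTECTED_CLASSES = {"jews", "muslims", "christians"}

def violates_bright_line_rule(tweet_text):
    words = set(tweet_text.lower().replace(",", " ").replace(".", " ").split())
    return bool(words & VIOLENCE_TERMS) and bool(words & PROTECTED_CLASSES)

# Schwarz's 2015 joke trips the rule, because the rule can only see the words:
print(violates_bright_line_rule(
    "you know, it actually would make much more sense if jews and muslims "
    "joined forces to kill christians."
))  # True

The point isn't that Twitter's rules are fifteen lines of Python; it's that any rule simple enough for thousands of reviewers to apply consistently necessarily throws away the context that made the tweet a joke.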

As for why it only caused trouble four years after the tweet, again, the reason is pretty simple. It's not because Twitter Nazis were reporting it so often, but because anyone reported it at all. Twitter doesn't review each and every tweet. They review tweets that come to their trust & safety team's attention. And I've heard firsthand from people at Twitter that if they come across older tweets -- even ones that have been up for many years -- that violate current rules, those tweets will be subject to action.

Again, from the position of thinking about how to run a content moderation/trust & safety team at scale, you can totally see how these rules would get put in place, and how they'd actually be quite sensible. I'm guessing just about every internet platform that has any kind of content policy has just such a rule. And it's easy to sit here and say, "but in context, it's clear that he's making a joke" or "it's clear he's trying to make a very different point and not literally advocating for Jews and Muslims to kill Christians."

But how do you write those exceptions into the rules such that an entire diverse team of employees on a trust & safety team can understand that?

You can try to put in an "except for jokes" clause, but that would require everyone to be able to recognize a joke. It would also invite gaming of the system: people would advocate for such killings... and then claim "just joking!" And it would require a team that is culturally sensitive and able to recognize humor, context, and joking for nearly every cultural group around the globe.

That's literally impossible.

And that's why this is just yet another example of why content moderation at scale is impossible to do well. It has nothing to do with politics. It has nothing to do with left-right. It has nothing to do with Twitter appeasing neo-Nazis. It has everything to do with the impossibility of moderating speech at scale.

Filed Under: content moderation, content moderation at scale, david duke, jokes, jon schwarz, masnick's impossibility theorem, tucker carlson


Reader Comments

  • Stephen T. Stone (profile), 4 Dec 2019 @ 9:45am

    So…what you’re saying is that Twitter loves Nazis~?

  • Anonymous Coward, 4 Dec 2019 @ 10:15am

    No argument against content moderation at scale being impossible to do well. However, in this case at least, Mr. Schwarz should have self-moderated and not tweeted what he did simply because it's far too easy to take out of context. I feel he shot himself in the foot here and should not be too alarmed at the end result.

    Context is always important, but in a medium that only allows small snippets and quotes, context can be hard to make clear. Therefore the message itself must be clear on its own, lest it be taken out of context and the consequences manifest.

    How this all happened is abhorrent and Schwarz is without a doubt the victim here. Still, he does share a small part of the blame and should have been more careful about what he posted. He could have avoided this whole situation.

    • Stephen T. Stone (profile), 4 Dec 2019 @ 10:36am

      Context is always important, but in a medium that only allows small snippets and quotes, context can be hard to make clear.

      Context can be hard to make clear in any medium. That’s why moderation at scale is hard even on sites like Facebook. If I were to copy a paragraph from a Breitbart article and post it on Facebook, could you tell by the quote alone whether I posted it because I sincerely agree with the quote or whether I posted it as an ironic jab at Breitbart, conservatives, or something or someone else?

      • Anonymous Coward, 4 Dec 2019 @ 11:24am

        Re:

        Facebook allows longer posts, which gives you, the poster, the opportunity to make the context more clear. If you fail to do so, that differs little, if at all, from the situation described in the article above.

    • Wendy Cockcroft (profile), 6 Dec 2019 @ 3:21am

      Re: Think before you tweet

      How this all happened is abhorrent and Schwarz is without a doubt the victim here. Still, he does share a small part of the blame and should have been more careful about what he posted. He could have avoided this whole situation.

      Agreed in full. Anyone who takes a side in politics (I'm a moderate conservative who loathes right-wingers and extremist authoritarians of every stripe, whichever side of the aisle they're on, and don't get me started on liars) is going to get hammered by the people on the other side if they challenge a particular cult figure or principle. As a moderate, I get bashed by extremists on both sides of the aisle for not blindly accepting their BS. As a Christian I get bashed by atheists. As a fairly liberal person I get bashed by authoritarians. To be fair, I rarely get bashed as a woman. The point is, you will get bashed. If you're prominent, anyone who takes a dislike to you will try to pull you down. For this reason, be careful when you post, even if you're joking. I skate pretty close to the line when I advocate hunting Tories for sport but that's as far as I'll go. I know that, if I ever crossed the line, anyone who took a dislike to me would go trawling through my tweets to find something to report and pull my account down. It's not a bad idea to bear that in mind and think before you tweet.

      Whether it is reasonable or fair is not at issue. It's wise.

      • Anonymous Coward, 31 Mar 2020 @ 2:58pm

        Re: Re: Think before you tweet

        'Loathes right-wingers'? Nope, you're not conservative, or moderate for that matter. Stop sitting on the fence and admit you're an anti-American leftist like everyone else on Techdirt.

    • Wyrm (profile), 6 Dec 2019 @ 2:56pm

      Re:

      Mr. Schwarz should have self-moderated and not tweeted what he did...

      And sadly, that's exactly what hecklers want you to do.
      They want you to avoid saying things, moderate yourself far more than is legally required and avoid "offending sensibilities".

      Note that those hecklers are the very same who will in turn complain that others are too sensitive and should quit complaining about the hecklers' protected speech.

  • Christenson, 4 Dec 2019 @ 10:55am

    Context and context-free mediums

    And, with this discussion of context, these world-wide platforms seem to work hard to make themselves context-free.

    Example:
    Isolated Tweets. Confusing threading. Text only.
    Absolutely anyone can see an isolated tweet. I can't limit it to just the citizens of Techdirt!

    Now where were we??? lol

    Moderation at scale will be impossible unless you create a situation where the context is universal, in the sense that every place online has a non-trivial context.

  • Comboman, 4 Dec 2019 @ 10:57am (flagged by the community)

    Once again...NOT AT SCALE!!

    This tweet was not removed by an AI scanning billions of tweets with an opaque algorithm. Human beings made a complaint which caused a single tweet to be reviewed by a single human reviewer. Nothing about this was "at scale". Was the ruling arbitrary? Sure (like pretty much all rulings). Trying to make every news story support Mike's new "law" of "Content Moderation is Impossible at Scale" just makes you look silly. You already got your name in the dictionary for "Streisand Effect", don't push it.

    • Stephen T. Stone (profile), 4 Dec 2019 @ 11:02am

      Human beings made a complaint which caused a single tweet to be reviewed by a single human reviewer.

      How many complaints did it take? Were all the complaints made in good faith? Did the tweet have surrounding context that made the complaints a bad faith effort? On a smaller platform, all three questions would be easier to answer, and moderation efforts would be easier as a result. But on a platform the size of Twitter, that isn't feasible, no matter how many people you hire and how many algorithms you put in place.

    • Anonymous Coward, 4 Dec 2019 @ 11:26am

      Re: Once again...NOT AT SCALE!!

      ---===### The Point
      Your Head

      The "scale" here is that the review of such reported tweets is necessarily done by humans who cannot possibly keep up with demand while also doing a thorough and accurate job of it.

      Good try though.

    • James Burkhardt (profile), 4 Dec 2019 @ 11:53am

      Re: Once again...NOT AT SCALE!!

      Not at scale? Certainly not this one tweet in isolation. But the commentary on moderation at scale describes moderation efforts for an entire ecosystem: the entirety of a forum, of Facebook, of Twitter. The whole point is that, at the scale of a major website, issues emerge overall. For instance, in the case of 100 reported tweets a day, 9-10 like-minded people could take a half hour each to research, contextualize, and make a call based on complex rules on every tweet that is reported (with each person only pulling 40 hours a week). At a 99.9% moderation success rate, you have maybe 1 bad call a week.

      Twitter has 500 million tweets per day, and if we assume only 0.1% of tweets are reported and need to be viewed by a moderator, that is 500,000 tweets that need to be addressed every day. Under the same half-hour analysis, at 40 hours a week, you would need 44,000 moderators, who will be a diverse group without like-minded values (which introduces error due to differing values). But even if somehow we still have moderation that is 99.9% accurate to the rules, we would still see 500 errors EVERY DAY. That level of perfection in adhering to the rules is near impossible, and we could see 5,000 errors a day (99% accuracy) without any stretch of the imagination.
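
      A rough sketch of that back-of-the-envelope math, using the figures above (half an hour per review, 40-hour weeks, and the stated report and accuracy rates), written as simple, illustrative Python:

      def moderation_math(reported_per_day, hours_per_review=0.5,
                          hours_per_week=40, accuracy=0.999):
          # Returns (moderators needed, expected bad calls per day)
          review_hours_per_week = reported_per_day * 7 * hours_per_review
          moderators = review_hours_per_week / hours_per_week
          bad_calls_per_day = reported_per_day * (1 - accuracy)
          return moderators, bad_calls_per_day

      # Small site: ~100 reported posts/day -> ~9 moderators, ~1 bad call a week
      print(moderation_math(100))
      # Twitter scale: 500M tweets/day with 0.1% reported -> ~44,000 moderators, 500 bad calls/day
      print(moderation_math(500_000_000 * 0.001))
      # Same scale at a more realistic 99% accuracy -> 5,000 bad calls/day
      print(moderation_math(500_000, accuracy=0.99))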

      If you look at the individual tweets, every mistake is obvious. But you aren't looking at the moderation efforts at scale. The individual rulings based on content, rules, and value judgments are not made in isolation. Discussions about moderation at scale are all about the aggregated effect of millions of tweets, and millions of different points of view, and the impossibility of making everyone happy. It's not hard to understand, but given you consider systemic abuse of copyright simply a series of isolated anomalies that are unrelated to the failing of copyright, your confusion is incredibly on brand.

      • Christenson, 4 Dec 2019 @ 3:47pm

        Reducing Scale

        All of these platforms seem to want to compute away all the people in service of ad dollars, which is natural.

        I Claim, in sympathy with Mike, that the solution involves making things feel a bit smaller and pulling in effort from folks that are paying attention anyway.

        If people want spaces that are free of their favorite bad thing, that needs to happen within a smaller universe than “the whole world”. Someone, not Twitter directly, will have to keep the philistines out, and we need room (CDA 230) for them to make mistakes without too much consequence.

        • Anonymous Coward, 5 Dec 2019 @ 12:51am

          Re: Reducing Scale

          If people want spaces that are free of their favorite bad thing,

          The real problem is not those people, but rather the people who wish to destroy everything that they disagree with, like Jhon. That latter group will not stay away from sites they dislike, but actively try to drive the audience away by trolling.

  • Koby (profile), 4 Dec 2019 @ 11:17am

    Difficulty of expression

    One of the big problems of text, be it on Twitter, email, forums, or anything else written, is that writing in a sarcastic or joking tone is practically impossible. Perhaps we need some kind of style markup, akin to italics or boldface, which would denote seriousness or joking. The written meaning could be less subject to interpretation, thereby allowing people to express their speech, with less risk of an outrage mob deliberately mis-interpreting what was written.

  • Anonymous Coward, 4 Dec 2019 @ 11:33am (flagged by the community)

    "imparting on the company some sort of natural political motive" - M. Masnick

    Pot. Kettle. Etc.

    • Stephen T. Stone (profile), 4 Dec 2019 @ 11:58am

      If’n you have something to say, coward, say it.

    • Mike Masnick (profile), 4 Dec 2019 @ 12:40pm

      Re:

      I just wasted a good 30 seconds trying to figure out what you could possibly mean by this and I'm stumped. When did I impart a partisan political motive to any company?

      • Anonymous Coward, 4 Dec 2019 @ 5:06pm

        Re: Re:

        I think that's antidirtese for "I hate you, Masnick. I hate you so much I want you behind the toolshed. Bring the railroad spike and don't forget to swap out the lube for caustic soda due to my allergies."

        Or it could be John Smithian, I'm not well-versed in Techdirtrology.

  • Nathan F (profile), 4 Dec 2019 @ 12:13pm

    Applying the current rules to tweets that were made 4 years ago sounds like madness to me.

  • btr1701 (profile), 4 Dec 2019 @ 3:06pm

    Limitations

    But how do you write those exceptions into the rules such that an entire diverse team of employees on a trust & safety team can understand that?

    It would, however, be relatively simple to put a statute of limitations into the rules, so that people don't suddenly find their account suspended over something they posted a decade ago, especially when the post didn't violate any rules at the time it was posted. That's something that should trigger a deeper dive into both the context and whether the post violated the rules that existed at the time.

  • seedeevee (profile), 4 Dec 2019 @ 4:05pm (flagged by the community)

    Fucking Coward

    • That One Guy (profile), 4 Dec 2019 @ 5:22pm

      Coward for refusing to give attention to a loser? Okay then

      Why am I not surprised you actually went out of your way to track down the loser in question, and then went through whatever garbage passes for their posts to find the tweet involved?

      • Anonymous Coward, 4 Dec 2019 @ 8:30pm

        Re: Coward for refusing to give attention to a loser? Okay then

        Presumably because he already follows David Duke.

    • Mike Masnick (profile), 4 Dec 2019 @ 9:20pm

      Re: Fucking Coward

      It is not cowardice, but rather a desire not to send traffic to an out-and-out bigoted white supremacist, former KKK leader who has shown no change of heart.

      But, noted for the record that you have no issue in doing so.

      So given your commenting history is (a) defending Russian propaganda (b) attacking any evidence about Russian propaganda and now (c) supporting white supremacists, you sure are making it quite clear what kind of person you are. And it's not a good picture seedeevee. I feel sorry for you.

    • Anonymous Coward, 31 Mar 2020 @ 3:02pm

      Re: Fucking Coward

      Here's a TL;DR of Masnik's article and comments:

      • Duke does wrongthink and must be purged
      • Schwarz does goodthink and must be re-platformed

  • Anonymous Coward, 5 Dec 2019 @ 2:06am (flagged by the community)

    Mr Masnick, did you even read the comments on your "if you're not paying, you're the product" article? You were wrong then and you're still wrong now.

    • Anonymous Coward, 5 Dec 2019 @ 3:03am (flagged by the community)

      +1

  • R/O/G/S, 6 Dec 2019 @ 6:50am (flagged by the community)

    re: Jon "my grampa,was,Jewishy” Swartz

    When Lew Rockwell picks up your journalism, maybe, pause.

    Then breathe, and hope he doesnt do it again, Jon.

    Then ask yourself about nepotism and tribal-religious bias in media.

    Swartz was dead silent when guys like me got DOXXD by both the altR and Israelis/ADL zionists, and he and Micah Lee at TI routinely gaslight real human interest stories for years, only to exploit them later to cry wolf.

    It's hard to cry for the guy now.

    He's one of the major problems in journalism, holding out his Jewish heritage, rather than the old fashioned "everyman" approach to journalism, and getting paid to do it.

    Jon Swartz and Lew Rockwell:
    https://www.lewrockwell.com/2016/12/no_author/someone-officially-called-cias-bluff/
