Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Using Fact Checkers To Create A Misogynist Meme (2019)

from the content-moderation-inception dept

Summary: Many social media sites use fact checkers to block, or at least flag, information that has been determined to be false or misleading. In some cases, however, the fact checking itself can create a content moderation challenge.

Alan Kyle, a privacy and policy analyst, noticed this in late 2019 when he came across a picture on Instagram, posted by a meme account called “memealpyro,” showing what appeared to be a great white shark leaping majestically out of the ocean. The image had been blurred, with a notice that it had been deemed “false information” after being “reviewed by independent fact checkers.” When he clicked through to unblur it, he found a small line of text next to the image saying “women are funny,” and beneath that the fact checking flag: “See why fact checkers say this is false.”

Anyone coming across this image with that fact check would assume the check applied to the statement, leading to the ridiculous, misogynistic conclusion that women are not funny, and that an independent fact checking organization had felt the need to flag a meme image suggesting otherwise.

As Kyle discusses, however, this appeared to be a deliberate attempt to exploit a fact check aimed at one part of the content in order to create the misogynistic meme. Others had been using the same image -- which was computer generated, not an actual photo -- and claiming that it was National Geographic’s “Picture of the Year.” The belief was so widespread that National Geographic had to debunk the claim (though it did so by releasing other, quite real, images of sharks to appease those looking for cool shark pictures).

The issue, then, was that fact checkers had been trained to debunk the use of the photo on the assumption it was being posted with the false claim that it was National Geographic’s “Picture of the Year,” and Instagram’s system didn’t seem to anticipate that other, different claims might be attached to the same image. When Kyle clicked through to see the explanation, it addressed only the “Picture of the Year” claim (which this post never made), and (obviously) not the statement about women.

Kyle’s hypothesis is that Instagram’s algorithms were trained to flag the picture itself as false, then possibly send the flagged image to a human reviewer -- who may simply have missed that the text attached to this image was unrelated to the claim the fact check actually addressed.
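
To make Kyle’s hypothesis concrete, here is a minimal sketch, in Python, of how such a pipeline could behave. It is purely illustrative: Instagram has not published how its system actually works, and every name in the snippet (fingerprint, DEBUNKED_IMAGES, SHARK_IMAGE, label_post) is hypothetical. The point is simply that if the “false information” label is keyed to a fingerprint of the image, any caption later paired with that image inherits the label, whether or not the caption has anything to do with the claim that was actually checked.

    # Illustrative sketch only; not Instagram's actual code. All names are hypothetical.
    # It models Kyle's hypothesis: the label is attached to the *image*, so the caption
    # is never compared against the claim the fact checkers evaluated.
    import hashlib

    def fingerprint(image_bytes: bytes) -> str:
        """Stand-in for a perceptual hash; a real matcher would tolerate crops and re-encodes."""
        return hashlib.sha256(image_bytes).hexdigest()

    # Placeholder bytes for the viral computer-generated shark picture.
    SHARK_IMAGE = b"<bytes of the CGI shark image>"

    # Images fact checkers have already debunked, mapped to the claim they actually checked.
    DEBUNKED_IMAGES = {
        fingerprint(SHARK_IMAGE): "Not National Geographic's 'Picture of the Year'; "
                                  "the image is computer generated.",
    }

    def label_post(image_bytes: bytes, caption: str) -> str | None:
        """Return a 'false information' notice if the image matches a debunked one.

        Note what is missing: the caption never factors into the decision, so a post
        pairing the shark image with the text "women are funny" still gets the overlay,
        and the overlay then appears to debunk that caption.
        """
        claim_checked = DEBUNKED_IMAGES.get(fingerprint(image_bytes))
        if claim_checked is None:
            return None
        return "False information. See why fact checkers say this is false: " + claim_checked

    # The label attaches even though the caption has nothing to do with the checked claim.
    print(label_post(SHARK_IMAGE, "women are funny"))

A human reviewer working through a queue of already-matched images could easily miss that the caption had changed, which is exactly the gap Kyle describes.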

Decisions to be made by Instagram:

  • If a caption and a picture have to be combined to count as false information, how should Instagram’s fact checkers handle cases where the two are separated?
  • How should fact checkers handle mixed media content, in which text and graphics or video may be deliberately unrelated?
  • Should automated tools be used to flag viral false information if those tools can be gamed?
  • How much human review should be used for algorithmically flagged “false” information?

Questions and policy implications to consider:

  • When an automated algorithm applies fact check flags, how will users with malicious intent try to game the system, as in the example above?
  • Is fact checking the right approach to “meme’d” information that is misleading, but not in a meaningful way?
  • Would requiring fact checking across social media lead to more “gaming” of the system as in the case above?

Resolution: As Kyle himself concludes, situations like this are somewhat inevitable, since the way content moderation is set up works against moderators trying to handle content like this accurately:

There are many factors working against the moderator making the right decision. Facebook (Instagram’s parent company) outsources several thousand workers to sift through flagged content, much of it horrific. Workers, who moderate hundreds of posts per day, have little time to decide a post’s fate in light of frequently changing internal policies. On top of that, much of these outsourced workers are based in places like the Philippines and India, where they are less aware of the cultural context of what they are moderating.

The Instagram moderator may not have understood that it’s the image of the shark in connection to the claim that it won a NatGeo award that deserves the false information label.

The challenges of content moderation at scale are well documented, and this shark tale joins countless others in a sea of content moderation mishaps. Indeed, this case study reflects Instagram’s own challenged content moderation model: to move fast and moderate things. Even if it means moderating the wrong things.


Filed Under: content moderation, fact checking, memes
Companies: instagram


Reader Comments



  • OldMugwump (profile), 11 Nov 2020 @ 4:26pm

    Fact-checker baiting

    This is a great example of somebody successfully baiting "fact-checkers". I'm tempted to sympathize with the troll who posted it.

    The larger problem is that "fact checkers" can't really check most facts in any meaningful, objective way.

    Just to use this image as an example, how is "women are funny" even a "fact" that can be true or false? Obviously some women are more "funny" than others (even for the many different meanings of the word "funny"). I don't think it's possible to rule such a statement as clearly true or false in the first place, nor should anyone try.

    Are women, as a class, "funny"? Whatever answer you give, it's an opinion, not a fact. And an opinion on an awfully vague and ill-defined question.

    I've little patience with "fact checking" in general, except perhaps in the original context of internal checking within a publication, before printing a story. Most statements aren't even clearly "facts" in the first place.


    • Thad (profile), 11 Nov 2020 @ 4:36pm

      Re: Fact-checker baiting

      ...you...get that the point of this story is that the fact-checkers were not intentionally evaluating whether or not the statement "women are funny" was factually true or false, right?

      They were evaluating whether or not the image of the shark was National Geographic's Picture of the Year. Which, notwithstanding your handwringing, is a statement which can be objectively evaluated as true or false. It's false. That's not an opinion, it's a fact. The fact-checkers checked it; they evaluated it as false. The reason they evaluated it as false is that it is false.

      The only person in this story trying to evaluate "women are funny" as a factual statement is the troll who submitted the altered image. Who *checks notes* you just said you sympathize with.


      • Lobelia 'Lobster' Sterling, 11 Nov 2020 @ 5:11pm [flagged by the community]

        Re: Re: Fact-checker baiting -- NO, it's Techdirt pretense!

        ...you...get that the point of this story is that

        The actual purpose of the story HERE is to defecate all over the place so that "moderation" looks tough, while in practice, Masnick's position is a flat and unqualified RIGHT to arbitrarily censor:

        "And, I think it's fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."

        https://www.techdirt.com/articles/20170825/01300738081/nazis-internet-policing-content-free-speech.shtml

        You're short-sighted and WRONG as usual, "Thad, the Ant-Slayer".


        • Anonymous Coward, 11 Nov 2020 @ 6:50pm

          Re: Re: Re:

          How's that Trump re-election campaign coming along, bro?


        • Toom1275 (profile), 11 Nov 2020 @ 10:55pm

          Re: Re: Re: Fact-checker baiting -- NO, it's Techdirt pretense!

          The actual purpose of the story HERE is to defecate all over the place

          [Projects facts not in evidence]


        • Scary Devil Monastery (profile), 12 Nov 2020 @ 5:28am

          Re: Re: Re: Fact-checker baiting -- NO, it's Techdirt pretense!

          "The actual purpose of the story HERE is to defecate all over the place..."

          I don't think the Copia Institute needs your help to shit all over a forum. God knows, you've been seriously incontinent around here for many years now.

          So...did you have anything actually relevant to the OP you wanted to whine about, or is it just more venting your spleen because Geigner called you a nasty name nine years ago and you've spent every waking moment since trying to demonstrate the truth of his statement?


  • Lobelia 'Lobster' Sterling, 11 Nov 2020 @ 5:06pm [flagged by the community]

    Techdirt examples abstruse edge cases to build cred, while...

    ... Masnick's actual bottom line position is that corporations have total arbitrary control:

    "And, I think it's fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."

    https://www.techdirt.com/articles/20170825/01300738081/nazis-internet-policing-content-free-speech.shtml


    • Lobelia 'Lobster' Sterling, 11 Nov 2020 @ 5:07pm [flagged by the community]

      Re: Techdirt examples abstruse edge cases to build cred, while..

      New readers, if any: don't be fooled by Masnick's "it's so hard to do the right thing" diversions such as this (paid for by Silicon Valley corporate "support" of his laughable "think tank").

      The block quote is one of the few times that he's been honest. Masnick believes corporations should have TOTAL ARBITRARY CONTROL without regard to The Public interest.


      • AC Unknown, 11 Nov 2020 @ 5:42pm

        Re: Re: Techdirt examples abstruse edge cases to build cred, whi

        Calling utter bullshit on that assertion. Where's your proof?


      • Scary Devil Monastery (profile), 12 Nov 2020 @ 5:36am

        Re: Re: Techdirt examples abstruse edge cases to build cred, whi

        "Masnick believes corporations should have TOTAL ARBITRARY CONTROL without regard to The Public interest."

        Obviously corporations should have control over who they allow on their own property. That you personally believe the legal concept of property should be abolished in favor of "The People" is, rather, the more deranged idea.

        But hey, if you actually want to make that happen then here's how:

        1) Assemble a political party.
        2) Sell 51% of the voters on the communist manifesto you keep taking your ideas from.
        3) Win all the elections and rewrite the constitutional amendments preventing the government from nationalizing any sufficiently popular property.

        Because the only thing you accomplish here is to make people occasionally laugh at your incomprehensible hysterics. The sum of your thousands of hours of labor remains that people flag your comment after beating you over the head with the latest sack of garbage you spilled over the forum.

        That's fucking sad, Baghdad Bob, and if you weren't such a malicious mentally disabled person we'd all be inclined to show you some sympathy.


    • Stephen T. Stone (profile), 11 Nov 2020 @ 5:22pm

      For what reason should the government have the right to make any interactive web service host all legally protected speech, even if the owners/operators of that service don’t want to host certain kinds of speech?


      • Scary Devil Monastery (profile), 12 Nov 2020 @ 5:39am

        Re:

        "For what reason should the government have the right to make any interactive web service host all legally protected speech, even if the owners/operators of that service don’t want to host certain kinds of speech?"

        If Baghdad Bob - or Koby, for that matter - had any honesty at all they'd just provide the answer. The argument is clearly outlined in both The Communist Manifesto and Mao's little red book.

        Admitting they're quoting outright communist philosophy doesn't fit their narrative, of course.


  • Anonymous Coward, 11 Nov 2020 @ 5:47pm

    Because... don't actually click through or anything.


  • NoName, 11 Nov 2020 @ 7:28pm

    Misogynistic, really??

    Branding this meme as misogynistic is an exaggeration. Sure, it's fashionable to interpret any situation in a positive light for women, and a negative light for men, whenever possible (hence the only groans on last Saturday's SNL were when Dave Chappelle dared to make a joke at the expense of women).

    If the person who posted the meme had misogynistic intent, it would have read, "women are smart" or "women are strong" or something of that nature. But "women are funny" can itself be interpreted either as a positive or a negative statement about women. For instance, it could be perceived as meaning "women are strange," or "women can be good comedians."

    And even if it was intended to be interpreted (in light of the fact-checking) as, "It is false that women can be good comedians," that hardly rises to the level of misogyny, which is defined as 'hatred or mistrust of women.'

    If you disagree with me on that, I presume, in that case, that you'd agree that the majority of gender-related articles in the media these days are misandristic.


    • Anonymous Coward, 11 Nov 2020 @ 8:17pm

      Re: Misogynistic, really??

      That's a whole lot of words to say "not all men".


      • Anonymous Coward, 11 Nov 2020 @ 10:16pm

        Re: Re: Misogynistic, really??

        That's why it's important to campaign for futanari rights.


  • Anonymous Coward, 11 Nov 2020 @ 10:41pm

    I thought the point of fact checking was to, you know, actually check facts. Not blindly tag stuff based on assumptions.


  • Glenn, 12 Nov 2020 @ 4:17am

    Mods are so funny. AI mods even more so.


  • John Pettitt (profile), 12 Nov 2020 @ 8:12am

    It's the label that's wrong

    The image is not false, it's manipulated. The issue here is that the "false" fact checking label is appropriate for fact checks on text, e.g. "Trump won the election," but not for images. If they had labeled it "This image is not a real photograph" then the problem of implying that the text on the image is false would not apply.

    Can't stay to chat longer, I have to go and photoshop some political comments onto shark images ...


  • ECA (profile), 12 Nov 2020 @ 1:56pm

    Everyone here, understands

    That this is going to get WORSE.
    That every Pic/video/audio file is going to need a registry.

    This is going to be as bad as comparing music and saying it's LIKE this song, and suing everyone around it.


  • Anonymous Coward, 13 Nov 2020 @ 10:43am

    Ha ha, this is awesome.

    These types of trolls are doing the yeoman's work of triggering Leftist sissies. Since Techdirt-type anti-Americans have decided to go all in on doing 'fact checks' to ensure no degenerate gets their fee-fees hurt, I'm glad to see trolls like these keep making Thought Police continue, over and over again, to publicly and embarrassingly step on their dicks.

    See also: 'Islam is Right about Women'; the OK hand signal; clovergender; etc. It's damn beautiful. Leftists just can't help themselves.

    Techdirt-type Leftists, please keep humiliating yourselves! It brings Americans such joy.


    • Anonymous Coward, 13 Nov 2020 @ 4:29pm

      Re: Ha ha, this is awesome.

      Mhm. Not sure who sounds "triggered" here. But hey, have fun on Fantasy Island.


  • @b, 16 Nov 2020 @ 1:30pm

    Case Study: Dr Phil is not a real doctor

    At least a social network had some chance of eventually correcting its Fact Check label, or lack thereof.

    Step 1. User flags image
    Step 2. AI triages image
    Step 3. Professional makes a ruling on image
    Step 4. User unflags image
    Step 5. Go to Step 2.

    Meanwhile, Dr Phil is not a real doctor. Pass it on.


