Facebook's Post-Insurrection Purge Catches A Bunch Of Left Wing Accounts In Its AI Net

from the fine-deleted-accounts-on-both-sides dept

Facebook has often been accused of having an anti-conservative bias. But its efforts to clean up its platform following the January 6th attack on the Capitol building indicate it just has an ongoing (and probably unsolvable) moderation problem.

Shortly after the DC riot, it announced it would be removing content containing certain slogans (like "stop the steal") as an "emergency measure" to stop the spread of misinformation that could encourage similar election-related attacks on other government buildings.

It's not clear what other switches were flipped during this post-riot moderation effort, but it appears groups diametrically opposed to Trump and his views were swept up in the purge.

Facebook said it had mistakenly removed a number of far-left political accounts, citing an “automation error”, triggering uproar from socialists who accused the social media platform of censorship.

Last week, the social media company took down a cluster of groups, pages and individuals involved in socialist politics without explanation. These included the Socialist Workers party and its student wing, the Socialist Worker Student Society in the UK, as well as the International Youth and Students for Social Equality chapter at the University of Michigan and the page of Genevieve Leigh, national secretary of the IYSSE in the US.

Moderation is tough even when it's only hundreds of posts. Facebook is struggling to stay on top of billions of daily posts while also answering to dozens of governments and thousands of competing concerns about content. At that scale, moderating without automation simply isn't an option. And that's how things like this happen.

Granted, it's a less-than-satisfying explanation for what went wrong. It doesn't give anyone any assurance it won't happen again. And it's pretty much guaranteed to happen again, because it's already happened before. Activists associated with the Socialist Workers Party saw their accounts suspended and content deleted following another Facebook moderation effort in early December 2020.

Facebook has disabled the accounts of over 45 left wing activists and 15 Facebook pages in Britain. The individuals and pages are all socialist and left wing political activists and organisations who campaign against racism and climate change, and in solidarity with Palestine.

Facebook has given no reason for disabling the accounts, and has not given any genuine way of appealing what has happened.

The SWP was left to guess why these accounts and pages were targeted. One theory is that Facebook's moderation was purging the site of pro-Palestinian content, which is sometimes linked to bigotry or terrorist activity. Or it could be that the new AI was wary of any political posts dealing with sensitive subjects and began nuking content somewhat indiscriminately.

Or it could be part of a purge that began last August, when Facebook expanded its internal list of "Dangerous Individuals and Organizations." Anything viewed by AI as "promoting violence" was fair game, even if context might have shown some targeted posts and groups were actually decrying violence and outing "dangerous" individuals/organizations. During that enforcement effort, Facebook took down left-wing pages, including some attempting to out white supremacists and neo-Nazis.

This probably was an automation error. And the automation will continue to improve. But if the automation isn't backstopped by human moderators and usable options to challenge content removals, things like this will continue to happen, and on an ever-larger scale.


Filed Under: anti-conservative bias, content moderation, masnick's impossibility theorem
Companies: facebook


Reader Comments


  • Ninja (profile), 27 Jan 2021 @ 9:40am

    If moderation is basically impossible without collateral damage, the absence of it has shown its own huge problems for the last 4 years (and longer, for those who were paying attention). Damned if you moderate, damned if you don't. And I'm not even talking about Facebook here; it's society as a whole that is facing this dilemma. I'm inclined to say it's been worse without it. At least it's clear to me that lies and attacks on public health (anti-vax, for instance) need to be contained as much as possible. What isn't clear is how you do that without heavy casualties.

    One option would be to err on the side of caution, hitting quite a bit of legitimate speech, while providing reliable means of requesting human review. Thousands of bots won't be able to request human review or interact with human reviewers, at least not now. This could also serve as some sort of "karma" system: an account that is punished by mistake becomes less likely to be flagged in future incidents, and more and more trusted over time.
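
    A very rough sketch of that karma idea, purely for illustration (the account names, scores, and thresholds are invented, and this is not how Facebook's systems actually work): an account whose takedowns keep getting overturned on appeal earns a higher bar for future automated flags.

    from collections import defaultdict

    trust = defaultdict(lambda: 1.0)   # per-account trust multiplier

    def should_flag(account: str, model_score: float, base_threshold: float = 0.8) -> bool:
        # A more trusted account needs a higher model score before it gets flagged.
        return model_score >= base_threshold * trust[account]

    def record_appeal(account: str, overturned: bool) -> None:
        # Overturned takedown -> the automation was wrong -> trust goes up.
        # Upheld takedown -> the flag was right -> trust goes down.
        trust[account] *= 1.1 if overturned else 0.9

    print(should_flag("some_page", model_score=0.85))   # True: 0.85 >= 0.8
    record_appeal("some_page", overturned=True)
    record_appeal("some_page", overturned=True)
    print(should_flag("some_page", model_score=0.85))   # False: 0.85 < 0.8 * 1.21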

    This idea has its flaws and I'm sure it could be improved. But my point is: yes, moderation at scale is impossible, but it needs to be done and will incur collateral damage. What can we do to lessen such problems?


    • Upstream (profile), 27 Jan 2021 @ 10:03am

      Re:

      I think the best solution is to basically avoid the "moderation at scale" problem entirely and go with the "protocols, not platforms" idea that Mike Masnick has been promoting for quite some time. The issue then becomes: how is this best accomplished? It would be a difficult prospect in the best of situations, but even more so in the face of entrenched interests like Facebook, Twitter, et al., who have made it quite clear that they will stop at nothing to maintain their market positions.


      • Ninja (profile), 27 Jan 2021 @ 10:07am

        Re: Re:

        Ah yes, I also think protocols but even then, how do you moderate? With protocols it might be even more challenging.


        • Ninja (profile), 27 Jan 2021 @ 10:15am

          Re: Re: Re:

          I mean, could the protocol approach withstand coordinated efforts like the ones we've seen with Trump, and effectively stop a fake-news botnet before it contaminates enough users that the protocol thinks it's good speech?


          • Anonymous Coward, 27 Jan 2021 @ 11:01am

            Re: Re: Re: Re:

            The point would be that the protocol doesn't moderate - the user does. You can look at email to see an example: there are a ton of companies that offer anti-spam services (Proofpoint, for example), and of course Outlook and Gmail use their own built-in filters as well.

            Layers of moderation that the user chooses and controls built on top of an open protocol.
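
            A minimal sketch of those user-chosen layers, assuming a purely hypothetical open-protocol client (the filter services and function names here are invented for illustration):

            from dataclasses import dataclass
            from typing import Callable, List

            @dataclass
            class Post:
                author: str
                text: str

            # A moderation layer is just a predicate: return True to keep the post.
            FilterLayer = Callable[[Post], bool]

            def blocklist_filter(blocked_authors: set) -> FilterLayer:
                # e.g. a third-party blocklist subscription the user opts into
                return lambda post: post.author not in blocked_authors

            def keyword_filter(muted_words: set) -> FilterLayer:
                # e.g. the user's personal mute list
                return lambda post: not any(w in post.text.lower() for w in muted_words)

            def apply_layers(feed: List[Post], layers: List[FilterLayer]) -> List[Post]:
                # The protocol delivers everything; the user's chosen layers
                # decide what actually gets displayed.
                return [p for p in feed if all(layer(p) for layer in layers)]

            feed = [Post("alice", "protocol news"), Post("spammer", "BUY PILLS NOW")]
            my_layers = [blocklist_filter({"spammer"}), keyword_filter({"buy pills"})]
            print(apply_layers(feed, my_layers))   # only alice's post survives

            The services and filters are placeholders; the point is only that each layer is something the user, not the protocol, chooses to stack.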


            • Anonymous Coward, 27 Jan 2021 @ 1:53pm

              Re: Re: Re: Re: Re:

              I sent an email to you, but you didn't reply.

              Did it get caught in your spam filter? Did it get automatically deleted?

              Did your spam filter trigger because I used the word "Palestine", or "Hate Speech", because I included a link to a 5G COVID conspiracy page, or perhaps because of the one to the insurrection news aggregation site?

              ... The user can choose layers of (automated) moderation, and be the "human backup" to the moderation instead of, e.g., Twitter or Facebook. But you've merely taken on all the problems that the service had, without any of the benefits. ... and you may well have arranged things so that you never see the moderation errors, and thus cannot correct them.


        • Stephen T. Stone (profile), 27 Jan 2021 @ 11:07am

          I also think protocols but even then, how do you moderate?

          Mastodon manages it well enough. In addition to moderation controls on a given instance, said instance can also choose whether it will federate with any other instances. (And those other instances can, in turn, choose to federate with the initial instance.) In this way, an instance known for what the broader Fediverse considers “bad behavior” (e.g., supporting fascists, not moderating racist speech) can be “defederated” from the “behaved” instances. Those defederated instances will still exist, sure. But nobody will have to interact with them unless they choose to. (Assuming the instance you’re on didn’t fully silence the “bad” instance, anyway.) And that’s without getting into the per-post privacy and “sensitive content” settings.

          No moderation is perfect. Even the “behaved” Masto instances get it wrong every now and again. That said: Masto moderation works well enough that it can be a starting point for discussion of new ideas.
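
          To make the federation idea concrete, here is a toy sketch of instance-level policy (an illustration of the concept only, not Mastodon's actual code or API; the domain names are made up):

          # Each instance keeps its own lists of fully defederated ("suspended")
          # and partially limited ("silenced") domains.
          SUSPENDED = {"fascists.example"}   # nothing from these domains comes through
          SILENCED = {"spammy.example"}      # kept off public timelines only

          def handle_incoming(origin_domain: str) -> str:
              if origin_domain in SUSPENDED:
                  return "reject"
              if origin_domain in SILENCED:
                  return "deliver to followers only, hide from public timelines"
              return "accept"

          print(handle_incoming("fascists.example"))   # reject
          print(handle_incoming("friendly.example"))   # accept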


          • Ninja (profile), 3 Feb 2021 @ 2:23am

            Re:

            The offending instance and its content will still exist and be accessible, no? How does Mastodon deal with criminal content?


      • Anonymous Coward, 28 Jan 2021 @ 8:49am

        Re: Re:

        Isn't that essentially the same sort of echo chamber issue being complained about? Back when it was just forums, there were (and are) all kinds of horrifying niches, from Stormfront to pro-anorexia boards.


    • Anonymous Coward, 27 Jan 2021 @ 5:33pm

      Re:

      One idea: Provide a way to ask for review. No avenue for appeal is frequently the bigger problem.

      On the other end, people need to get used to things being taken down, at least until they appeal. It would help if providers were more open about "hey, automation (and people) can screw up," and provided a method for appeal other than "wow, we are being hammered about this in enough places on the internet (and maybe the news) that we actually noticed".


  • Bloof (profile), 27 Jan 2021 @ 10:22am

    Given their treatment of left wing news sources, I struggle to believe it's accidental.

    https://www.motherjones.com/media/2020/10/facebook-mother-jones/

    'Oh hey, this change to the algorithm impacts right wing groups because they're doing bad things, better change it to lessen that impact so it impacts the left more, for balance!'


  • Anonymous Coward, 27 Jan 2021 @ 3:12pm

    Mindless algorithms are reasonably good at keeping the blue pill ads and Nigerian princes out of our email inboxes. They can just barely keep teenagers from typing the F-word in videogame chat...sometimes. For anything requiring consideration of context and nuance, they're garbage.

    It's simply not possible to filter out all the bad speech from the world (even if people would agree on what speech is bad to begin with, which they won't) without indiscriminately nuking every mention of a controversial topic.


    • Anonymous Coward, 28 Jan 2021 @ 11:21pm

      Re:

      All programming problems depend on what you are trying to define. It is easy to define 'bad speech' as 'what we don't want to see,' as with spam filters - go to any startup board and you will see salty spammers complaining about Google blocking their 'business emails'. I suspect Facebook and Twitter would be more pleasant to use if you could train them like a spam filter, flagging a post when you see it and decide you don't like it. That would by definition lead to a massive echo chamber that only tells you what you want to hear.

      That nobody can define 'bad speech' even remotely clearly, and yet there is so much agreement that there should be some universal standard, should fill you with suspicion - it implies the whole thing is bullshit, telling everyone whatever flatters them.


      • Stephen T. Stone (profile), 30 Jan 2021 @ 8:23am

        I’m pretty sure the opposite of what you’re talking about there is “a massive bullhorn for people to shout at you things you don’t want and never wanted to hear”.

        I could’ve gone my entire life without hearing about QAnon and been all the better for it. Would that have made me part of an “echo chamber”?


