Militias Still Recruiting On Facebook Demonstrates The Impossibility Of Content Moderation At Scale

from the people-will-always-find-a-way dept

Yesterday, in a (deliberately, I assume) well-timed release, the Tech Transparency Project released a report titled Facebook's Militia Mess, detailing how there are tons of "militia groups" organizing on the platform (first found via a report on BuzzFeed). You may recall that, just days after the insurrection at the Capitol, Facebook's COO Sheryl Sandberg made the extremely disingenuous claim that only Facebook had the smarts to stop these groups, and that most of the organizing of the Capitol insurrection must have happened elsewhere. Multiple reports debunked that claim, and this new one takes it even further, showing that (1) these groups are still organizing on Facebook, and (2) Facebook's recommendation algorithm is still pushing people to them:

TTP identified 201 Facebook militia pages and 13 groups that were active on the platform as of March 18. These included “DFW Beacon Unit” in Dallas-Fort Worth, Texas, which describes itself as a “legitimate militia” and posted March 21 about a training session; “Central Kentucky Freedom Fighters,” whose Facebook page posts near-daily content about government infringing on people’s rights; and the "New River Militia" in North Carolina, which posted about the need to “wake up the other lions” two days after the Capitol riot.

Strikingly, about 70% (140) of the Facebook pages identified by TTP had “militia” in their name. That’s a hard-to-miss affiliation, especially for a company that says its artificial intelligence systems are successfully detecting and removing policy violations like hate speech and terrorist content.

In addition, the TTP investigation found 31 militia-related profiles, which display their militia sympathies through their names, logos, patches, posts, or recruiting efforts. In more than half the cases (20), the profiles had the word “militia” in their name.

And, this stuff certainly doesn't look great:

Facebook is not just missing militia content. It’s also, in some cases, creating it.

About 17 percent of the militia pages identified by TTP (34) were actually auto-generated by Facebook, most of them with the word “militia” in their names. This has been a recurring issue with Facebook. A TTP investigation in May 2020 found that Facebook had auto-generated business pages for white supremacist groups.

Auto-generated pages are not managed by an administrator, but they can still play a role in amplifying extremist views. For example, if a Facebook user “likes” one of these pages, the page gets added to the “about” section of the user’s profile, giving it more visibility. This can also serve as a signal to potential recruiters about pro-militia sympathies.

Meanwhile, Facebook’s recommendation algorithm is pushing users who “like” militia pages toward other militia content.

When TTP “liked” the page for “Wo Co Militia,” Facebook recommended a page called “Arkansas Intelligent citizen,” which features a large Three Percenter logo as the page header. (The “history” section in the page transparency shows that it was previously named “3%ERS – Arkansas.”)

Of course, this stands in stark contrast with what Facebook itself is claiming. In Mark Zuckerberg's testimony today before Congress on dealing with disinformation, he again suggests that Facebook has an "industry-leading" approach to dealing with this kind of content:

We remove Groups that represent QAnon, even if they contain no violent content. And we do not allow militarized social movements—such as militias or groups that support and organize violent acts amid protests—to have a presence on our platform. In addition, last year we temporarily stopped recommending US civic or political Groups, and earlier this year we announced that policy would be kept in place and expanded globally. We’ve instituted a recommendation waiting period for new Groups so that our systems can monitor the quality of the content in the Group before determining whether the Group should be recommended to people. And we limit the number of Group invites a person can send in a single day, which can help reduce the spread of harmful content from violating Groups.

We also take action to prevent people who repeatedly violate our Community Standards from creating new Groups. Our recidivism policy stops the administrators of a previously removed Group from creating another Group similar to the one removed, and an administrator or moderator who has had Groups taken down for policy violations cannot create any new Groups for a period of time. Posts from members who have violated any Community Standards in a Group must be approved by an administrator or moderator for 30 days following the violation. If administrators or moderators repeatedly approve posts that violate our Community Standards, we’ll remove the Group.

Our enforcement effort in Groups demonstrates our commitment to keeping content that violates these policies off the platform. In September, we shared that over the previous year we removed about 1.5 million pieces of content in Groups for violating our policies on organized hate, 91 percent of which we found proactively. We also removed about 12 million pieces of content in Groups for violating our policies on hate speech, 87 percent of which we found proactively. When it comes to Groups themselves, we will remove an entire Group if it repeatedly breaks our rules or if it was set up with the intent to violate our standards. We took down more than one million Groups for violating our policies in that same time period.

So, on the one hand, you have a report finding these kinds of groups still on the site, despite apparently being banned. And, on the other hand, you have Facebook talking about all of the proactive measures it's taken to deal with these groups. Both of them are telling the truth, but this highlights the impossibility of doing content moderation well at scale.

First, note the scale of the issue. Zuckerberg notes that Facebook has removed more than one million groups. The TTP found 13 militia groups and 201 militia pages. At Facebook's scale, some things that should be removed are always going to slip through. Some might argue that if the TTP could find these pages, then clearly Facebook could as well. But that raises two separate issues. First, what exactly are they looking for? There are so many things that could violate policies that I'm sure Facebook trust & safety folks are constantly doing searches like these -- but just because they don't run the exact same search as the TTP, it doesn't mean that they're not looking for this stuff. Indeed, one could argue that finding just 13 such groups is pretty good.
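
To make the scale point concrete, here's a minimal back-of-the-envelope sketch in Python. The volume and detection-rate numbers are illustrative assumptions, not Facebook's actual figures; the point is simply that even a very high catch rate leaves a meaningful number of misses when the base is measured in the millions.

```python
# Back-of-the-envelope sketch of moderation at scale.
# All numbers are illustrative assumptions, not Facebook's actual figures.

violating_groups = 1_000_000   # hypothetical policy-violating groups created in a year
detection_rate = 0.999         # hypothetical share caught by automated + human review

caught = int(violating_groups * detection_rate)
missed = violating_groups - caught

print(f"Caught: {caught:,}")   # 999,000
print(f"Missed: {missed:,}")   # 1,000 -- still plenty for an outside researcher to find
```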

Beyond that, what exactly is the policy violation? Facebook says that it bans militia groups "that support and organize violent acts amid protests." But that doesn't mean every group that refers to itself as a "militia" is going to violate those policies; you can easily see how many might not. And, assuming that these groups recognize how Facebook has been cracking down, it's quite likely that many will simply try to "hide" behind other language to make it more difficult for Facebook to find them. Indeed, the TTP report points to one example of a "militia" group saying it needs to change the name of the group, and in that example it was local law enforcement that suggested changing the name.

So, there's always going to be some element of cat-and-mouse on these kinds of things, and some level of subjectivity in determining whether a group is actually violating Facebook's policies or not. It's easy to play a "gotcha" game and find groups like this, but that's because at scale it's always going to be impossible to be correct 100% of the time. Indeed, it's also quite likely that these efforts over-blocked in some cases and took down groups that they should not have. Any effort at content moderation, especially at scale, is going to run into both Type I (false positive) and Type II (false negative) mistakes. Finding and highlighting just a few of those mistakes doesn't mean that the company is failing overall -- though it may provide some suggestions on how and where the company can improve.
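
To illustrate that tradeoff, here is a small, hypothetical Python sketch: a moderation classifier has to pick a confidence threshold, and moving that threshold only trades one type of error for the other. The group names, scores, and thresholds below are invented for illustration and don't reflect any real system.

```python
# Hypothetical sketch: a single confidence threshold trades false positives
# for false negatives; it cannot eliminate both. All values are made up.

groups = [
    # (name, classifier_score, actually_violates_policy)
    ("county historical reenactors", 0.62, False),
    ("neighborhood watch page",      0.35, False),
    ("armed 'patriot' recruiting",   0.71, True),
    ("coded-language militia cell",  0.48, True),
]

def evaluate(threshold):
    fp = sum(1 for _, s, bad in groups if s >= threshold and not bad)  # Type I
    fn = sum(1 for _, s, bad in groups if s < threshold and bad)       # Type II
    return fp, fn

for threshold in (0.4, 0.6, 0.8):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold}: {fp} wrongly removed, {fn} wrongly left up")
```

Raising the threshold cuts wrongful takedowns but lets more violating groups slip through; lowering it does the reverse. Any single operating point produces both kinds of mistakes, which is why a handful of misses (or over-removals) is not, by itself, evidence of bad faith.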


Filed Under: content moderation, error rates, mark zuckerberg, militias
Companies: facebook


Reader Comments

  • That One Guy (profile), 25 Mar 2021 @ 12:04pm

    Hold them to their own standards

    On the one hand, it's perfectly reasonable that at Facebook's size there will be some things that slip through (though the recommendations are less defensible), so on its own that wouldn't really be a reason to chastise them if they seemed to be working to address the issue. At the same time, if they're trying to present themselves as this amazing social media site that should be treated as the ideal, claiming that only Facebook has the resources to deal with the problems facing social media and therefore that they should be able to direct how those efforts go, it's entirely reasonable to call them out on failures like this, pointing out that even they can screw up and might not be as perfect as they'd like people to think.

    If they want to make the argument that only they are capable of handling a social media platform correctly and responsibly then it's entirely fair to hold up examples of their failure to do so, to show that even they can botch things up and maybe shouldn't be allowed to set or help craft the rules for everyone else.

  • Koby (profile), 25 Mar 2021 @ 12:06pm (flagged by the community)

    Hidden In Plain Sight

    it's quite likely that many will simply try to "hide" behind other language to make it more difficult for Facebook to find

    For a while now, many creators on other platforms have been producing content, unrelated to militias or race, that uses code words and lingo. It usually works to avoid demonetization or censorship. I bet it's actually helping these communities to grow, much to the chagrin of those who want to limit their reach. They get to be an edgy underground rebel, instead of a conformist.

  • That Anonymous Coward (profile), 25 Mar 2021 @ 12:16pm

    Is this like the FBI thinking that the bad guys are required to wear black hats, so they only look at people wearing black hats?

    The 3 percenter logo should be easy for a computer to recognize.
    Having the word militia in the name should be easy to recognize.

    One would think that having the computer output a list of groups to take a peek at wouldn't take very long.

    Am I alone in getting that feeling that the claims of we removed 12 million groups is on par with the FBI parading one of their entrapped mentally challenged radicals in front of the media?

    After 911 people accepted a lot of stupid things, and around the time anyone decided to try and push back there were always reports about how they stopped a major terrorism event but couldn't reveal any details.

    One wonders how many of the 12 million groups were created by single persons just to be cool but never really attracted a following.

    Cause we've NEVER seen people online do stupid shit for the lulz..

    Or someone deciding they are going to be Nancy Drew & the Hardy Boys creating a honeypot to lure in crackpots to turn in.

    Couch warriors with delusions of grandeur LARPing online.

    Zucks has a deficit in trust when it comes to the things he says vs. what's actually happening on his platform.

    Despite what people believe, until we all have the Elon chip in our brains, detecting badthink won't be 100% & I am confused once again by people who think that anything can be made 100%.

    We are not 100% safe from terrorists, despite all the stuff.
    We are not 100% safe from drunk drivers, despite all the stuff.
    We are not 100% safe from contaminated food, etc etc etc...

    At some point people need to return to reality.

    • Anonymous Coward, 25 Mar 2021 @ 12:32pm

      Re:

      Having the word militia in the name should be easy to recognize.

      But the word does not distinguish between anti-government types, historical re-enactors, or even the “DFW Beacon Unit,” who from their web site appear to be a pro-government self defence group.

      • Anonymous Coward, 25 Mar 2021 @ 6:20pm

        Re: Re:

        Yes, but one might review a hit based on the search results for a keyword even Captain Obvious might think of.

        AI: Eventually machines and code will be smart enough to be as stupid as humans.

    • PaulT (profile), 26 Mar 2021 @ 12:07am

      Re:

      "Having the word militia in the name should be easy to recognize."

      If that's your only criteria, it would also generate a LOT of false positives.

      "After 911 people accepted a lot of stupid things"

      Yes they did - the PATRIOT act, 2 unjust wars that killed hundreds of thousands of civilians, restrictions on rights never seen before on US soil, the invention of a new "security" force that never catches any actual criminals or explosive devices but makes life hell for travellers daily up to and including actions that would be considered rape if done by anyone else...

      I'm not sure if that's what you meant, but I also don't see how doing that stuff online as well would make you any safer. Especially when you've already admitted that a lot of innocent people would get burned along the way by your standards.

      • That Anonymous Coward (profile), 26 Mar 2021 @ 5:13am

        Re: Re:

        "I also don't see how doing that stuff online as well would make you any safer."

        I was attacking the insane premise that anything can be 100%.
        People expect that a child will never see a boob online & demand the platforms make it so then scream when their kid sees the boob they searched for.

        If you want 100% the only options are the Elon brain chip or disabling user content online and even then... a tit might slide by.

        People cheer on the idea that it can be 100% because it's just so easy, when they have no concept of how hard it actually is to do unless you hire half the planet to monitor the other half. Leaders pretend it is possible, demand it be done, & then unleash hell when the impossible isn't delivered to them by the end of the week. The masses then get a soundbite about how the platform supports the bad thing, rather than that they've spent millions & have entire departments devoted to trying to reduce this because there is no way to stop it without stopping the entire world.

        • Rocky, 26 Mar 2021 @ 9:28am

          Re: Re: Re:

          Covering 85-95% is quite easy (relatively speaking), but anything above that increases the difficulty exponentially. Considering the amount of content being generated online, even if 95% of "less desirable" content is filtered out that still means we are talking about millions of posts slipping by which means it's quite easy to find edge-cases proving that social media X "does a poor job" while ignoring everything you didn't see.

  • Anonymous Coward, 25 Mar 2021 @ 2:38pm

    Content moderation in the digital world is impossible because computers can only accurately display reality. Humans, however, cannot accept reality and keep trying to make the computer lie to them in the unique way they want to be lied to.

  • Anonymous Coward, 25 Mar 2021 @ 5:48pm

    Impossible for Mike Masnick not to use the word impossible on topic of content moderation

    Human life is not a mathematical theorem. We don't care if we can't achieve 100% success rate. Sometimes >50% is enough.

    We want to achieve the greatest good with the least harm. When it comes to online speech, harm relates to real world consequences: bad things happening to people, with physical violence being the worst case. With these criteria, we can look at engagement, and the audience for that speech.

    If some guy is ranting against vaccination but only engaging with a few, then the degree of harm is low. But it's a completely different story if there is a coordinated group, say a self-declared vaccine safety organization, with tens of thousands of followers, that is spreading lies about vaccines and telling people not to get vaccinations. I'd say bring down the ban hammer.

    Impossible is not even the right choice of word when talking about content moderation. Get that arrow out of your butt.

    Impossible does not mean ineffective or futile.

    Sure, it's a cat-and-mouse, whack-a-mole game; so what? It's a constant struggle for truth to win out over lies. A lie makes it halfway around the world while truth is still lacing up its shoes.

    Anyways, I'd say delete Facebook. I'm for an open internet (what content moderation would be like in that context).

    • Stephen T. Stone (profile), 25 Mar 2021 @ 6:05pm

      Impossible does not mean ineffective or futile.

      But it does mean “impossible”. In the discussion around moderation, lawmakers and political pundits seem to think there is a one-size-fits-all, always-solves-everything solution — that one specific approach can automagically sort “bad” content from “good” and such. That solution doesn’t exist, nor can it exist.

      And besides, moderation approaches that work well in smaller communities don’t scale well to larger communities. What might work for, say, a Discord server with a hundred people or so won’t work for Twitter — at least not in the sense that Twitter’s algorithms and bot-driven moderation can understand contexts and nuance that would be easy to grasp in a smaller community. (For example: Twitter repeatedly suspended a bot dedicated to reposting Donald Trump’s tweets verbatim while giving Trump a free pass on those same tweets.)

      Moderating small communities is a pain in the ass; moderating larger ones, even more so. So what makes you think Twitter can do a far better job at moderation than someone running, say, an imageboard with a couple hundred regular users at most?

      • Anonymous Coward, 25 Mar 2021 @ 7:31pm

        Re:

        We can at least start at a baseline of “bad” content that’s “Holocaust denial”, “white supremacy”, and “nazism” and asking for a baseline level of moderation that consists of “use the basic Goddamn search bar and enter in keywords or the names of hate groups”. The TTP did basic shit and found groups and pages that had evaded Facebook moderation for fucking years.

        • Stephen T. Stone (profile), 25 Mar 2021 @ 8:12pm

          We can at least start at a baseline of “bad” content

          Facebook deciding what makes for “bad content” would inevitably piss off at least some group of powerful people — at which point Facebook would start carving out “exceptions” like the ones they kept giving Donald Trump until a couple of months ago.

          • Anonymous Coward, 25 Mar 2021 @ 8:43pm

            Re:

            Facebook already moderates and decides what makes for “bad content” on its platform. “They’ll just piss some powerful people off and stop” is a really shitty excuse for leaving the status quo as is, for letting Facebook keep fucking lying about how much “better” it does and only getting off its ass for real when controversy strikes (and then lying about their platform’s involvement).

            In what feels like ages ago, Mike asked “Would you like to see a better Facebook or a dead Facebook?” and the opinions came back overwhelmingly for Facebook to die. If Facebook is such a pathetic gaggle of cowards who bend over backwards to appease fascists and other pricks in ways that make us all worse off, then that just helps prove that a “better” Facebook is impossible and that the corporation needs to do us all a favor and fucking die.

        • Mike Masnick (profile), 25 Mar 2021 @ 11:52pm

          Re: Re:

          We can at least start at a baseline of “bad” content that’s “Holocaust denial”, “white supremacy”, and “nazism” and asking for a baseline level of moderation that consists of “use the basic Goddamn search bar and enter in keywords or the names of hate groups”. The TTP did basic shit and found groups and pages that had evaded Facebook moderation for fucking years.

          And it appears that Facebook did catch way over 99% of that. And tons of other stuff as well. You say why can't they look for those words like TTP did? Well, I'm sure the list of stuff that FB moderators ARE looking for is already MASSIVE and the results then need to be reviewed. TTP looked for just one thing and found a few sites and didn't bother to do a full analysis of them. FB doesn't have that luxury. It needs to be looking for EVERYTHING ALL THE TIME and reviewing to make sure it ACTUALLY violates its policies.

          Anyone who says "why didn't they just do this search" is an idiot who has no clue what they're talking about and shouldn't be taken seriously because you have not even the first clue about how much is actually happening.

          • Anonymous Coward, 26 Mar 2021 @ 8:14am

            Re: Re: Re:

            Anybody who thinks that Facebook is doing the best that they can on this front is either a dupe or a corporate shill. 45-year-old white cishet Gen X-er tech bros who take the words of corps like FB at face value fit that bill pretty damn well.

            • Rocky, 26 Mar 2021 @ 9:35am

              Re: Re: Re: Re:

              Anyone who thinks this is easy is a frigging genius who could stand to make millions in the IT-industry making it better.

              Or just perhaps, you proved Mike's point for him.

            • PaulT (profile), 26 Mar 2021 @ 10:01am

              Re: Re: Re: Re:

              FB can absolutely do better. The people expecting them to work literal magic are still deluded

        • PaulT (profile), 26 Mar 2021 @ 12:13am

          Re: Re:

          "We can at least start at a baseline of “bad” content that’s “Holocaust denial”, “white supremacy”, and “nazism” "

          OK. Now, Facebook have a history of overmoderating such things and have mistakenly flagged anti-Nazi, anti-white supremacist accounts. This is what's meant by impossible - it's not possible to moderate something as huge as Facebook and neither miss something nor get false positives. That's the point.

        • nasch (profile), 26 Mar 2021 @ 9:12pm

          Re: Re:

          We can at least start at a baseline of “bad” content

          Are you the same AC who just said you're not looking for 100% accuracy, and would be fine with >50%, or a different one?

    • Anonymous Coward, 25 Mar 2021 @ 6:21pm

      Re:

      You have completely missed Mike Masnick's long running point about content moderation. Good job, you.

    • PaulT (profile), 26 Mar 2021 @ 12:09am

      Re:

      "Impossible for Mike Masnick not to use the word impossible on topic of content moderation"

      the problem with stating the truth is that there's only so many words you can use to describe it.

      "We want to achieve the greatest good with the least harm."

      So... there will be harm. In other words, even by your own admission, it's impossible not to harm.

      "Anyways, I'd say delete Facebook"

      Good for you. Now, how does that magically make moderation at scale on the many thousands of competing sites that people would go to instead not impossible?

  • Anonymous Coward, 25 Mar 2021 @ 5:51pm

    What is "correct" in content moderation is ultimately a matter of opinion and everyone has a different one. Before anyone could make 100% correct decisions, people would first have to agree on what the correct decisions are. This will never happen.

    Facebook can, in their own opinion, be doing a perfect job. Others will inevitably disagree.

  • Ken Shear, 25 Mar 2021 @ 5:55pm

    Oh please! To the extent that this is an argument, there will be mistakes (false positives and negatives) in moderation, well, excuse me. It's not just with moderation there will be mistakes -- there are always mistakes in every human activity. And it's not only "at scale" that there will be mistakes. Mistakes may be more obvious at scale, especially where (like on Facebook) much more effort is put into sharing information than into moderation. But, moderation at small scale also requires judgment calls, and mistakes go with that territory. Anyone who's ever tried to moderate comments on even a very small website knows how hard moderation can be, even not at scale.

    Also, please! That moderation is imperfect does not make it impossible. You say, moderation at scale is impossible, but you mean, moderation at scale can't be perfect. Well, obviously. Then, you hold up Facebook's clearly inadequate efforts at moderation as the best that anyone can do. And, yes, Facebook has gradually been dragged into putting some more serious resources into moderation by the intense bad publicity they've received for failing to address hate speech and incitement to violence on their platform. But no, Facebook has not put the kind of resources they could on this problem. This is a company that collects tens of billions in profits every year, and has some of the best technical talent in the world at its disposal. How much has been devoted to preventing use of the platform to incite violence? They say they're doing all they can, but oh, please, they can't do better than a simple word search when they've built the most sophisticated pattern matching systems to promote user engagement and to help advertisers find targets for products, services, and yes, hate speech and political disinformation?

    Voltaire said long ago, "the perfect is the enemy of the better" (well it was French of course, but it translates quite straightforwardly into English). That's what social media should be held accountable for. Better meaning, in this context, effort commensurate with the deep harm social media platforms are permitting, such as incitement of violence, hate speech, rampant falsehoods that undermine public health.

    Better moderation is possible. It's not impossible, even if there are gonna be mistakes.

    • Stephen T. Stone (profile), 25 Mar 2021 @ 6:11pm

      You say, moderation at scale is impossible, but you mean, moderation at scale can't be perfect.

      Tell that to lawmakers and political pundits who believe otherwise. When they get the message, the message won’t need repeating.

      you hold up Facebook's clearly inadequate efforts at moderation as the best that anyone can do

      It is the best that Facebook can do, given its size. Other similarly large services can’t/don’t fare much better.

      Better moderation is possible.

      No one has ever said otherwise. But that would require throwing far more money and man-hours at the problem. At some point, that will be costlier than the problem such an approach means to solve…even for a company like Facebook.

      • Anonymous Coward, 25 Mar 2021 @ 7:13pm

        Re:

        How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”, then follow those results to pages that have people spewing white supremacist fascist bullshit, and then nuking those pages? Is this really the best that Facebook can do? Because it looks like Facebook delivers below the bare minimum except when a controversy strikes and they move fast to ban or delete whatever, cover their asses, and say “we need to do better” for the googolplexth time.

        • Stephen T. Stone (profile), 25 Mar 2021 @ 7:40pm

          How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”, then follow those results to pages that have people spewing white supremacist fascist bullshit, and then nuking those pages?

          Less than Facebook might want you to believe, but more than you might think.

          And assume for a moment that “militia” brings up groups/people that aren’t spouting bigotry and fascist propaganda and anti-government sentiments (on Facebook, at any rate). Should Facebook ban such accounts only because they have the word “militia” in the display name?

          • Anonymous Coward, 25 Mar 2021 @ 7:52pm

            Re:

            And assume for a moment that “militia” brings up groups/people that aren’t spouting bigotry and fascist propaganda and anti-government sentiments (on Facebook, at any rate). Should Facebook ban such accounts only because they have the word “militia” in the display name?

            No, because like I said, they’d be going to the actual pages themselves, and then nuking the pages that “have people spewing white supremacist fascist bullshit”. False positives would eventually happen where bans/deletions have to get appealed, yes, but that wouldn’t invalidate the progress that’d be made by Facebook doing the bare fucking minimum that it should be doing and should’ve been doing for ages now.

            • Stephen T. Stone (profile), 25 Mar 2021 @ 8:09pm

              False positives would eventually happen

              And that would run counter to the “perfect moderation” that lawmakers want from Facebook, so…yeah…

              (That also wouldn’t begin to address the right-wing media firestorm of Facebook going after “militia” pages, which would inevitably make Facebook bend over backwards even further to please conservatives.)

              • Anonymous Coward, 25 Mar 2021 @ 8:31pm

                Re:

                Which U.S. lawmakers have said they want “perfect moderation” and how many of them are there? No seriously, which members of the U.S. Congress say they want “perfect moderation”; can you give examples, or are these lawmakers just a rhetorical fiction constructed to support your arguments?

                • Mike Masnick (profile), 25 Mar 2021 @ 11:55pm

                  Re: Re:

                  Every lawmaker today who called out a single moderation decision and accused the companies of ill-intent. Any time anyone calls out a single item it shows they have no clue.

            • Anonymous Coward, 26 Mar 2021 @ 3:04am

              Re: Re:

              Arrange to moderate every conversation in every pub, club and cafe in your town, and you will be moderating at a millionth of the scale of Facebook. Also note there is nothing said on Facebook that is not said somewhere in such public places in almost every town.

        • Mike Masnick (profile), 25 Mar 2021 @ 11:57pm

          Re: Re:

          How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”, then follow those results to pages that have people spewing white supremacist fascist bullshit, and then nuking those pages?

          You say that as if that's the only thing they need to do. And that's why you have no fucking clue the scale of what's happening. There are probably 10,000 different searches they need to do in 125 different languages, and then they have to examine each result to make sure it actually violates a policy. And they have to do that every fucking day.

          And if they miss a few, idiots like you will step in and say "how many man-hours would it take to do this search" because you're an ignorant fool who has no clue about the scale of how this works.

          I'm no fan of Facebook, but these attacks demonstrate pure ignorance and stupidity from people who have no clue.

        • PaulT (profile), 26 Mar 2021 @ 12:18am

          Re: Re:

          "How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”"

          Many thousands of man hours per week. Which is why they use some level of automation, and mistakes are made by bots that cannot understand things like context and nuance, something which is also impossible for humans at that sort of scale.

        • Scary Devil Monastery (profile), 26 Mar 2021 @ 2:40am

          Re: Re:

          "How much money and man-hours would it cost to have moderators just go to Facebook’s front page, search “Militia” or “nazi” or “aryan”, then follow those results to pages that have people spewing white supremacist fascist bullshit, and then nuking those pages?"

          Assume 5 minutes per page, just for those three keywords, to locate and peruse a page for moderation just to see that it isn't sarcasm or some citizen watchdog journalist quoting the latest out of the white supremacy bunker. Assume literal millions of results you have to go through. Facebook isn't going to hire full-time moderators who outnumber the rest of their staff by a few orders of magnitude.

          Now add to that child abuse, religious extremism in various flavors - from US evangelical doomsday cults quoting the text of Revelation and advocating armageddon, to fundamentalist Saudi nationals screaming about the necessity of the hijab, to crackpot suicide cults - etc, etc... Moderation at scale is an unending hydra always sprouting ten times as many heads as you can find moderators.

          And those moderators all have to be on the same page, so add vast educational programs on teaching them what they can allow or not, so you don't have to rely on some guy just not seeing anything wrong with phenomenon X and leaving those pages up while busting the accounts of people who, say, might be pro or con whistleblowers, BLM, LGBTQ, specific religions, etc...

          Anyone who claims moderation is possible at scale doesn't understand the concept of scale, and should be shown a beach and told to tally every individual grain of sand given very simple rules so they can get a grasp of how numbers work.

          • PaulT (profile), 26 Mar 2021 @ 2:58am

            Re: Re: Re:

            "Facebook isn't going to hire full-time moderators which outnumber the rest of their staff by a few orders of magnitude"

            Also, they do already hire entire companies whose job it is to moderate content. Every time you read a story about something that was missed, or a story about something that was moderated by mistake, that's after hundreds or even thousands of people were already hired to do that job. Throwing tens of thousands more people at the problem isn't going to do much else other than generate more false positives and make the stuff that's missed even more newsworthy.

            Then, of course, as you rightly note, human beings don't all tend to be on the same page. There's stories of ex-employees who deliberately overmoderated left-leaning content and let right-wing propaganda fly through. Even without considering the obvious problems of cultural and social differences between moderators in different parts of the world, or even within individual US communities, you can't ignore the fact that some people just won't be doing the job properly either by omission or deliberate sabotage. All so that they can be told that they're not hiring enough people the next time someone finds an obscure page that got missed.

            • Scary Devil Monastery (profile), 26 Mar 2021 @ 8:53am

              Re: Re: Re: Re:

              "Every time you read a story about something that was missed, or a story about something that was moderated by mistake, that's after hundreds or even thousands of people were already hired to do that job."

              Yeah, and then some Very Stable Genius toddles along and states, in full confidence, that it's "not a big thing".

              I used to assume, as a young and idealistic DBA, that people not understanding data was not an issue; I mean, that was my job.

              A few years down that road I was instead leaning toward the idea that as soon as someone cracked their mouth open and showed they didn't understand the concept of numbers, the expedient way to go about it would simply be to beat that person to death and save everyone involved the trouble his Dunning-Kruger would bring.

              Everyone with knowledge on how data is processed will say that moderation at scale is impossible. And yet we keep seeing village idiots and yokels claim the contrary with nothing but their dick in hand to back that assertion up.

              It makes me tired. And note that so far I haven't even mentioned that morality being relative, any ten moderators will be moderating ten different ways...

              "...or even within individual US communities, you can't ignore the fact that some people just won't be doing the job properly either by omission or deliberate sabotage."

              Yeah, and what really bugs me here is that if you were to ask any of the "of course moderation is possible" brigade if they'd trust any random ten people in their own community to moderate their media flow they'd scream in panic at the idea of the damyankee liberal a block down being the one judging their posts...

      • kshear (profile), 25 Mar 2021 @ 11:00pm

        Re:

        It is the best that Facebook can do, given its size. Other similarly large services can’t/don’t fare much better.

        Just excuses for FB. What they're doing now is totally not the best they could do, given their tech capabilities and financial resources. Are they really using their best tech resources on this problem? Hardly - those resources go to increasing user engagement (including users who spread hate), and improving ad effectiveness. Growth (aka user engagement) and profitability have been the no. 1 and no. 2 goals of FB since it started. Moderation, content standards and legal compliance lag far behind, though of course enough resources are devoted to those things so people can say, well, they're doing the best they can.

        Take the white supremacist groups that FB claims it can't identify, even while it's matching these groups to users who are ready to engage with the supremacist content. It does this matching by using the data in a very sophisticated way that demonstrates it can indeed identify white supremacist content for purposes of user engagement. The problem isn't whether FB could moderate this more effectively, rather, moderation's just a lower priority for FB.

        But yes, let's agree the problem is making moderation better not making it perfect, so we should expect FB to make good progress improving moderation constantly, not just when it gets bad publicity for its failures in this area. How about, FB provide regular audits of its moderation efforts, and report how much it's spending and what tech resources it's applying to this problem. We're talking about hate speech advocating violence, and that stuff can cost people's lives.

        • Anonymous Coward, 26 Mar 2021 @ 8:24am (flagged by the community)

          Re: Re:

          But yes, let's agree the problem is making moderation better not making it perfect, so we should expect FB to make good progress improving moderation constantly, not just when it gets bad publicity for its failures in this area. How about, FB provide regular audits of its moderation efforts, and report how much it's spending and what tech resources it's applying to this problem. We're talking about hate speech advocating violence, and that stuff can cost people's lives.

          The Techdirt regulars don't actually care. It's an endless circlejerk about why content moderation at scale is "impossible," and if you actually come up with a good point Mike will call you an "idiot" or "ignorant" or "clueless" and say "he's no fan of Facebook" while simultaneously taking whatever Facebook says about how they're doing the best they can at face value, as well as advocating for "Protocols Not Platforms," which is the only solution he agrees with because he's the one who created it. White cishet Gen X-er tech bros in their 40s leaping to the defense of Facebook and every other habitually lying tech corps are fucking pathetic.

          • Scary Devil Monastery (profile), 26 Mar 2021 @ 9:00am

            Re: Re: Re:

            " It's an endless circlejerk about why content moderation at scale is "impossible" and if you actually come up with a good point..."

            That's a novel way of describing factual reality and validated assertion.

            Of course Mike calls you an idiot when your "idea" has been disproven fifty times on this forum alone and been proven impractical or impossible a few thousand times in real life.

            "White cishet Gen X-er tech bros in their 40s.."

            Yeah. The experts. The ones who actually know what they're talking about.

            But go ahead. Prove your assertions. Better yet, make a single claim which isn't either impractical, impossible, or outright infantile. Or go count the grains on the beach to judge their merits based on any ruleset you like.

            The rest of us, who in many cases have hands-on experience with moderation and mass data processing, will eagerly await your Nobel Prize-winning new math algorithm which - since it'll be a genuine AI - will open all kinds of new vistas for everyone.

            • Anonymous Coward, 26 Mar 2021 @ 10:22am

              Re: Re: Re: Re:

              Yeah. The experts. The ones who actually know what they're talking about.

              I'd wager that people who've never had to face death threats for the color of their skin, their sexual orientation and/or gender identity, or what country they came from while being heavily embedded in Silicon Valley and Valley-centric academia are shitty experts to have at the forefront in the face of many of the issues that've been coming down the pipe which span not just the country but the globe as well.

              If you disagree, feel free to start up a project where you ask people to fill out a survey about what they think about my comment, then promptly ignore that survey and do what you actually wanted to do: play a scenario-building card game with rules that y'all made up with your friends about why I'm wrong and write a book based on the scenarios that y'all come up with. It's what the "experts" like Mike & Co. did when they made Working Futures, so why not use it here?

              • Rocky, 26 Mar 2021 @ 11:06am

                Re: Re: Re: Re: Re:

                Seems you forgot to refute his argument and built a flaming strawman instead.

          • Anonymous Coward, 26 Mar 2021 @ 9:57pm

            Re: Re: Re:

            "White cishet tech-bros"? On Techdirt?

            And that's how we can tell you didn't read the article...

    • Anonymous Coward, 25 Mar 2021 @ 6:24pm

      Re:

      Is this a parade of people thinking they disagree with Mike, but who actually make many of the same points he does? Or is this just a long-winded semantics troll?

    • Mike Masnick (profile), 25 Mar 2021 @ 11:54pm

      Re:

      Oh please! To the extent that this is an argument, there will be mistakes (false positives and negatives) in moderation, well, excuse me. It's not just with moderation there will be mistakes -- there are always mistakes in every human activity. And it's not only "at scale" that there will be mistakes. Mistakes may be more obvious at scale, especially where (like on Facebook) much more effort is put into sharing information than into moderation. But, moderation at small scale also requires judgment calls, and mistakes go with that territory. Anyone who's ever tried to moderate comments on even a very small website knows how hard moderation can be, even not at scale.

      You are repeating my point, so not sure why the "oh please"

      Also, please! That moderation is imperfect does not make it impossible. You say, moderation at scale is impossible, but you mean, moderation at scale can't be perfect. Well, obviously. Then, you hold up Facebook's clearly inadequate efforts at moderation as the best that anyone can do.

      I most certainly did not say that it's the "best" that anyone can do and have yelled for years about how they can do it better.

      But I'm saying that policy makers, the media, and random idiots in comments keep insisting they have to be perfect.

      What I'm saying is not that it's impossible to be perfect, but that it's impossible to do well. Because it is for exactly the reasons you stated. So you're agreeing with me while thinking you're disagreeing.

      Better moderation is possible. It's not impossible, even if there are gonna be mistakes.

      I never said that better moderation was impossible. The point I'm making is that even as they can get better, expecting it to ever be good is a mistake.

    • PaulT (profile), 26 Mar 2021 @ 12:17am

      Re:

      "You say, moderation at scale is impossible, but you mean, moderation at scale can't be perfect"

      That's the entire point - even if Facebook moderate at 99.9999% perfection, something will be missed. They will also get false positives and ban something that should not be banned. Therefore, the expectation on the part of politicians and the media that they can do 100% complete moderation is impossible.

      "Voltaire said long ago, "the perfect is the enemy of the better" (well it was French of course, but it translates quite straightforwardly into English). That's what social media should be held accountable for."

      Nobody's saying "it's impossible to do perfectly, so why bother?" They're simply saying that it's impossible to do perfectly, so stop trying to demand perfection.

  • Ben (profile), 26 Mar 2021 @ 7:36am

    a furriner's curiosity

    Just out of curiosity, is this militia thing limited to the right wing in the USA, or are there also hordes of left-wing militias too?

    • PaulT (profile), 26 Mar 2021 @ 8:37am

      Re: a furriner's curiosity

      Also not an American, but this is my understanding - militias have a long history of being associated with secessionist groups and racism in the US, along with other things. There are such things as left-wing and other militias (for example, the New Black Panthers), but they seem to concentrate more on the right wing.

    • Stephen T. Stone (profile), 26 Mar 2021 @ 11:25am

      While left-wing militias may exist in the U.S., they would be so few in number that their existence would be insignificant. Militia groups are closely tied to right-wing causes because right-wing/conservative ideologies in the U.S. treat unfettered gun ownership, “might makes right” thinking, and the ideas expressed in the sentence “the tree of liberty must be refreshed from time to time with the blood of patriots and tyrants” as absolute moral virtues. Left-wing groups tend to be far more non-violent in their approaches — including antifascist groups, which rarely engage in violence to further political goals or intimidate people (including lawmakers).

      Point is, left-wing groups don’t go around carrying “long guns” into public places on a regular basis as a “message” because they’re not violent dickbags.

  • Anonymous Coward, 26 Mar 2021 @ 8:56am

    I'm so glad that Ars Technica wrote an article about what the TTP found. The discussion thread over there is full of people who actually treat FB and other tech corps like the habitual liars they are.

    I'm very excited for Techdirt's future article about why Section 230 should protect Amazon for letting fly-by-night sellers get away with selling defective goods that cause people harm.

    • Rocky, 26 Mar 2021 @ 9:48am

      Re:

      I see you don't understand the difference between product liability and 3rd party content liability.

      You care to elaborate how the two are connected?

      • Anonymous Coward, 26 Mar 2021 @ 11:05am

        Re: Re:

        Amazon is a shop. They sell stuff in their shop. If they can’t vouch for what they sell, don’t sell it. It’s entirely possible that they can vouch for the stuff they sell by having clear contact details for the manufacturer/seller, which can then point to where the stuff was vetted by an official third party; that’s fine by me. But Amazon started as a little book shop. They deliberately and with careful planning and intent made themselves the size they are today - and every step of that growth process they should have grown their vouching-for-the-stuff-we-sell process as well. Amazon's Marketplace and the infrastructure (both physical warehouses and digital listings and info about sellers) that they have 100% control over for ensuring that the Marketplace exists and that Amazon is a key part of and gets a cut of the transactions that happen there, are far different from them being a facilitator of third parties.

        If I buy a pint of milk in the supermarket, and it gets discovered that there is toxic stuff in the milk, it is the supermarket’s job to do something. The supermarket didn’t milk the cow, nor am I expecting the supermarket to sample taste every carton of milk. But I expect them to take responsibility for what they sell. It’s not hard. It may take a few million dollars off of Amazon’s yearly profits, but it’s not hard. I don't expect Amazon to test all the products that come through their warehouses themselves. But I do expect them to be able to properly vet the people on their Marketplace. This is a company specializing in tools to assist deployment at scale for business and has already figured out how to control and monitor supply chains for their own labels.

        I would argue that at the very least, with regards to Amazon as a digital storefront, a critical responsibility they should have is for keeping a line of communication open between the buyer and the seller, and if the storefront cannot do that because the 'seller' skipped town, that's on the storefront for not vetting the 'seller', and they should bear liability. This creates incentive for Amazon to deal with reputable companies and to avoid selling dangerous or defective products.

        • Stephen T. Stone (profile), 26 Mar 2021 @ 11:26am

          I see you don't understand the difference between product liability and third party content liability.

          You care to elaborate how the two are connected?

          • Anonymous Coward, 26 Mar 2021 @ 11:45am

            Re:

            Amazon's Marketplace and the infrastructure (both physical warehouses and digital listings and info about sellers) that they have 100% control over for ensuring that the Marketplace exists and that Amazon is a key part of and gets a cut of the transactions that happen there, are far different from them being a facilitator of third parties.

            Physical products you buy where the main corporation that owns the site where the products are listed gets a cut of that sale, the corporation owns the warehouses where the products are stored, and the corporation employs the people who drive up to your house and plop it on your doorstep, this all means the corporation has a key hand in getting it to you all throughout the process. This isn't third-party content.

            You care to elaborate on why you think Amazon should be able to get away with letting shady nameless Chinese vendors shove defective products into the US market and then disappear without a trace?

            • Rocky, 26 Mar 2021 @ 1:38pm

              Re: Re:

              You care to elaborate on why you think Amazon should be able to get away with letting shady nameless Chinese vendors shove defective products into the US market and then disappear without a trace?

              Now you are just silly. Nobody has said that; we were wondering why you thought this was related to Section 230, hence the question if you understand the difference between product liability and third party content liability. One pertains to consumer safety and the other to user generated content online; to conflate the two shows a severe lack of understanding of the issue at hand.

              • Anonymous Coward, 26 Mar 2021 @ 2:07pm

                Re: Re: Re:

                Techdirt sure loves conflating the two. In Techdirt’s articles about Oberdorf v. Amazon, they’ve gone to great lengths to try and equate product liability to third party content liability.

                • Rocky, 26 Mar 2021 @ 6:28pm

                  Re: Re: Re: Re:

                  Oh, do point out where in those articles TD conflates the two. It's quite clear from what Cathy Gellis wrote that her stance was that they were separate issues, and it was 2 of the 3 judges presiding over the case who went to some lengths to stretch their reasoning in an effort to come to the foregone conclusion that Amazon was liable regardless of who the seller was.

            • PaulT (profile), 27 Mar 2021 @ 3:19am

              Re: Re:

              "This isn't third-party content."

              You appear to be whining about Amazon Marketplace, where it's not required that Amazon ever have any physical involvement with the product and may provide nothing more than a glorified classified ad. How is that not third party content?

          • Tanner Andrews (profile), 29 Mar 2021 @ 4:13am

            Re:

            You care to elaborate how the two are connected?

            Sure. They were both mentioned in the same opinion, Oberdorf v. Amazon, Inc, 930 F.3d 136 (US 3rd Cir. 2019). There, the court carefully distinguished the two, finding that Amazon was a ``seller'' under Pennsylvania law and Restatement S:402a. It also held that, as to failure to regulate information posted by the underlying vendor including particularly the failure to warn, Amazon was not liable due to S:230 immunity.

            It is as the connection between frozen custard and lawn care equipment. Both are mentioned in this reply and one may be consumed after use of the other, and so they are connected. Some may view the connection as tenuous.

        • Anonymous Coward, 26 Mar 2021 @ 2:27pm

          Re: Re: Re:

          Amazon is a shop.

          Amazon is also a market place. You can buy goods sold by Amazon, sold via Amazon and using their logistics, and sold via Amazon, where Amazon only provides ordering and payment services, and the seller deals with the logistics of delivery. Which of those groups should Amazon be held liable for?

        • PaulT (profile), 27 Mar 2021 @ 3:17am

          Re: Re: Re:

          "Amazon is a shop."

          Amazon's largest and most profitable business unit is AWS, which provides cloud hosting services.
