The Scale Of Moderating Facebook: It Turns Off 1 Million Accounts Every Single Day

from the not-an-easy-issue dept

For years now, we've discussed why it's problematic that people keep demanding internet platforms moderate more and more speech. We should be quite wary of internet platforms taking on the role of the internet's police. First, they're really bad at it. As we noted in a recent post, platforms are horrendously bad at distinguishing abusive content from content that documents abuse, and that creates all sorts of unfortunate and bizarre results, with the targets of harassment often having their own accounts shut down. On top of that, the only way to moderate content at scale is with a set of rules, and any such set of rules, as applied, will create hysterically bad results. And that's because the scale of the problem is so massive. It's difficult for most people to even begin to comprehend the scale involved here. As a former Facebook employee who worked on this stuff once told me, "Facebook needs to make one million decisions each day -- one million today, one million tomorrow, one million the next day." The idea that it won't make errors (both Type I and Type II) is laughable.
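The arithmetic alone makes the point. As a rough sketch (the one-million-decisions figure comes from the quote above; the error rates and the share of actual violations are hypothetical round numbers, not Facebook's real performance), even a very accurate moderation system makes thousands of mistakes every day:

```python
# Rough illustration: expected moderation errors per day at this scale.
# Only the 1,000,000 decisions/day figure comes from the article; the
# error rates and violating share below are hypothetical assumptions.

decisions_per_day = 1_000_000

false_positive_rate = 0.01  # Type I: legitimate content wrongly removed
false_negative_rate = 0.01  # Type II: violating content wrongly left up
violating_share = 0.5       # assume half the reviewed items truly violate rules

false_positives = decisions_per_day * (1 - violating_share) * false_positive_rate
false_negatives = decisions_per_day * violating_share * false_negative_rate

print(int(false_positives))  # 5000 wrongful removals per day
print(int(false_negatives))  # 5000 violations wrongly left up per day
```

Even at 99% accuracy on both kinds of decisions, that's thousands of visible mistakes daily, each one a potential news story.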

And it appears that the scale is only growing. Facebook has now admitted that it shuts off 1 million accounts every single day -- which means that earlier number I heard is way low. If it's killing one million accounts every day, that means it's making decisions on way more accounts than that. And, the company knows that it gets things wrong:

Still, the sheer number of interactions among its 2 billion global users means it can't catch all "threat actors," and it sometimes removes text posts and videos that it later finds didn't break Facebook rules, says Alex Stamos.

"When you're dealing with millions and millions of interactions, you can't create these rules and enforce them without (getting some) false positives," Stamos said during an onstage discussion at an event in San Francisco on Wednesday evening.

That should be obvious, but too many people think the answer is to put even more pressure on Facebook -- often through laws requiring it to moderate content, take down content and kill accounts. And when you do that, you actually make the false positive problem that much worse. Assume, for the sake of argument, that Facebook ends up killing 10% of all the accounts it reviews; at 1 million removals a day, that means it's reviewing 10 million accounts every day. If the punishment for taking down content that should have been left up is public shame and ridicule, that acts as at least some check, pushing Facebook to be somewhat careful about not taking down stuff it shouldn't. But, on the flip side, if you add a law (such as the new one in Germany) that puts criminal penalties on social media companies for leaving up content the government wants taken down, you've changed the equation.

Now the choice isn't between "public ridicule vs. bad person on our platform"; it's "public ridicule vs. criminal charges and massive fines." So the incentive for Facebook and other platforms changes: they're now encouraged to kill a hell of a lot more accounts, just in case. Suddenly the number of false positives is going to skyrocket. That's not a very good outcome -- especially if you want platforms to support free speech. Again, platforms have every right to moderate content on their platforms, but we should be greatly concerned when governments force them to moderate in ways that may have widespread consequences for how people speak, and where those policies can tilt the scales in often dangerous ways.
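The incentive shift can be sketched as a toy expected-cost calculation (all numbers hypothetical). A platform removes a post when the expected cost of leaving it up exceeds the cost of a wrongful takedown, so raising the penalty for leaving bad content up lowers the removal threshold and sweeps in far more legitimate speech:

```python
# Toy model of how legal penalties shift a platform's takedown threshold.
# All costs and probabilities here are hypothetical illustrations.

def removal_threshold(cost_wrongful_takedown: float,
                      cost_leaving_bad_up: float) -> float:
    """Minimum probability-of-violation at which removal is the cheaper choice.

    Remove when p * cost_leaving_bad_up > (1 - p) * cost_wrongful_takedown,
    i.e. when p > cost_wrongful_takedown /
                  (cost_wrongful_takedown + cost_leaving_bad_up).
    """
    return cost_wrongful_takedown / (cost_wrongful_takedown + cost_leaving_bad_up)

# Penalty is just public ridicule: both kinds of mistake cost about the same.
print(removal_threshold(1.0, 1.0))   # 0.5 -- remove only if probably violating

# Add a law with massive fines for leaving bad content up.
print(removal_threshold(1.0, 50.0))  # ~0.02 -- remove on the faintest suspicion
```

Under the lopsided penalty, anything with even a 2% chance of being a violation gets removed -- which is exactly the false-positive explosion described above.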


Filed Under: alex stamos, choices, intermediary liability, moderation, scale
Companies: facebook


Reader Comments



  • Anonymous Anonymous Coward (profile), 30 Aug 2017 @ 10:53am

    1,000,000,000 per day?

    Then 2000 days or 5.479452055 years till Facebook is gone? I can't wait.

    Actually, I can, but will enjoy watching their demise greatly.


    • Anonymous Coward, 30 Aug 2017 @ 11:13am

      Re: 1,000,000,000 per day?

      It's only 1,000,000/day, and not necessarily accounts of actual people (maybe they're closing the million accounts per day created by spammers).


    • Bamboo Harvester (profile), 30 Aug 2017 @ 12:00pm

      Re: 1,000,000,000 per day?

      If we all open 10 new accounts per day, will that kill it faster? Please?


  • mhajicek (profile), 30 Aug 2017 @ 11:08am

    Indeed. I would prefer that they kill all accounts as quickly as possible.


  • SirWired, 30 Aug 2017 @ 11:08am

    Per usual, more than a bit of context left out

    Most (in fact, nearly all) of those 1M accounts per day are not about abuse, censoring, hate speech, or what-have-you; most are just spammers and other robo-accounts, and, just like their creation, their deletion requires no human interaction.

    Figuring out what social media platforms should do about unpleasant content is an important question to ask, but neither the question nor the answer has anything to do with that 1M number.


    • Designerfx (profile), 30 Aug 2017 @ 11:16am

      Re: Per usual, more than a bit of context left out

      This is no one's fault but Facebook's. There are lots of anti-bot measures they could be taking and naturally have no interest in taking (because Facebook's total user count would plummet).

      I wish I could find the posts analyzing how much of FB is bots, but it is a significant amount.

      I'd hope people would realize that lack of action on preventing bots goes hand in hand with lack of quality in understanding things like filtering/free speech.


      • Anonymous Coward, 30 Aug 2017 @ 11:57am

        Re: Re: Per usual, more than a bit of context left out

        Bot detection methods have been known for about 15 years and VERY well understood for 10. There's no excuse for Facebook allowing those accounts to exist in the first place -- except that (a) they're ignorant newbies who have no idea how to run a service even 1% the size of Facebook, (b) they're lazy and cheap, or (c) the user count inflates Facebook's stock price.

        I have no sympathy for them and explicitly reject the argument that gosh, it's soooo hard. They should have never built something beyond their capabilities -- but they chose to, because they're greedy assholes who only care about profit and don't give a damn about the impact on the Internet, its users, and the real world.

        The most ethical course of action for them right now would be to shut the whole thing off and apologize for their hubris. They won't, of course: sociopathic monster Mark Zuckerberg will see to that.


        • Scote, 30 Aug 2017 @ 12:56pm

          Re: Re: Re: Per usual, more than a bit of context left out

          "Bot detection methods have been known for about 15 years and VERY well understood for 10. There's no excuse for Facebook allowing those accounts to exist in the first place"

          They **are** using bot detection. How do you think they delete 1,000,000 accounts per day?

          FB is likely the biggest bot account target on the internet, so bot detection isn't going to be perfect, especially when many of the fake accounts may have human farms doing some of the sign up.


          • Anonymous Coward, 30 Aug 2017 @ 1:41pm

            Re: Re: Re: Re: Per usual, more than a bit of context left out

            My point is that they shouldn't have to delete 1M accounts/day: they should never have allowed them to be created. Those of us who've been paying attention learned a long time ago that reactive measures are too little, too late, and the proactive measures actually have a fighting chance.


            • Wendy Cockcroft, 1 Sep 2017 @ 2:34am

              Re: Re: Re: Re: Re: Per usual, more than a bit of context left out

              What could they reasonably be expected to do to prevent accounts they don't want from being created?


        • Cdaragorn (profile), 30 Aug 2017 @ 1:39pm

          Re: Re: Re: Per usual, more than a bit of context left out

          You act like bot detection is a solved problem. That is sooooo far from fact.

          In fact, one of the most damning facts that proves that is simply not true is that most bot detection today REQUIRES HUMAN INTERACTION. Good luck automating that so called "solved problem".

          The fact that you want to slap ridiculous, unrealistic expectations on them does not mean they built something "beyond their capabilities". It means you're being unrealistic and ridiculous.


          • Anonymous Coward, 30 Aug 2017 @ 1:44pm

            Re: Re: Re: Re: Per usual, more than a bit of context left out

            Bot detection IS, for the most part, a solved problem. If you don't know this, then you're out-of-touch with the contemporary security environment and should probably avoid commenting on things beyond your inadequate understanding.

            Yes, there are edge cases that are tough: we're working on those. But the overwhelming majority are not only identifiable, they're EASILY identifiable.

            And here's the kicker: the bigger the operation you run, the easier this gets. (Why? Because small operations only have visibility into sparse data sets. Large operations can see enormous ones and exploit that to identify bots more accurately and faster.) So this is a case where FB's scale works highly in their favor -- if only they weren't too pathetically stupid and too lazy and too cheap to exploit it.


            • Anonymous Coward, 30 Aug 2017 @ 4:41pm

              Re: Re: Re: Re: Re: Per usual, more than a bit of context left out

              Bot detection IS, for the most part, a solved problem.

              The solution being what?

              Once the account has existed for a while, they can see whether it matches "normal" patterns. During creation there's not a lot of obvious difference between real and fake users, especially because many "fake" ones aren't entirely fake (CAPTCHAs can be farmed out to actual people).

              But the overwhelming majority are not only identifiable, they're EASILY identifiable.

              How do you know they're not blocking 99 million creation attempts per day?


              • Anonymous Anonymous Coward (profile), 30 Aug 2017 @ 4:48pm

                Re: Re: Re: Re: Re: Re: Per usual, more than a bit of context left out

                I am not a technologist, per se, but I think some required human interaction on account creation would be a good thing. I understand they have those, having run into a few myself (not on Facebook as they are blocked in my hosts file).


      • Anonymous Coward, 30 Aug 2017 @ 12:11pm

        Re: Re: Per usual, more than a bit of context left out

        I'd hope people would realize that lack of action on preventing bots goes hand in hand with lack of quality in understanding things like filtering/free speech.

        Really? So all we need to do is get all of our judges/lawyers training on bot-prevention technology and suddenly they would all agree on free speech? Wow, think of how much court time we could save with this new method. Not that bot prevention technology means much, seeing as there is almost nowhere on the internet that's actually free of bots.

        Or maybe the fact is that if we can't even get widespread agreement on free speech within the US court system, then a company which operates in substantially every country in the world might have a wee bit of difficulty with the problem. After all, free speech in Germany and free speech in the US are vastly different animals.


    • Anonymous Coward, 30 Aug 2017 @ 11:55am

      Re: Per usual, more than a bit of context left out

      Figuring out what social media platforms should do about unpleasant content is an important question to ask, but neither the question nor the answer has anything to do with that 1M number.

      The number that should be considered is how many posts are made on Facebook every day, as some of those are what trigger shutdowns. It is guaranteed that those are well beyond Facebook's ability to examine individually.


      • JEDIDIAH, 30 Aug 2017 @ 12:01pm

        Re: Re: Per usual, more than a bit of context left out

        They also have a very low bar for banning users and no clear guidelines governing this.


    • PaulT (profile), 30 Aug 2017 @ 1:52pm

      Re: Per usual, more than a bit of context left out

      I'm sure you have a citation for your assertion about the nature of these accounts?

      I agree with the last point, but the point is really to counter the idea that FB literally do nothing, not to say that the number itself is really important.


      • Anonymous Coward, 30 Aug 2017 @ 2:18pm

        Re: Re: Per usual, more than a bit of context left out

        Errr... yes, I have a citation. Literally the very first sentence in the linked article to CNBC. "Facebook closes more than 1 million accounts every day, with most of those created by spammers and fraudsters, security chief Alex Stamos says."


        • PaulT (profile), 30 Aug 2017 @ 2:52pm

          Re: Re: Re: Per usual, more than a bit of context left out

          "Literally the very first sentence in the linked article to CNBC."

          Ah, my apologies, I did miss that for some reason.

          But, I still don't see actual citations for the claim they're mostly spammers. I do see a caveat that it's impossible to stop kicking off legit users, and complaints that it's both too strict and too lax.

          Given the actual visible evidence, I don't see why the assertions in the article are incorrect.


    • Anonymous Coward, 30 Aug 2017 @ 5:48pm

      Re: Per usual, more than a bit of context left out

      Provide proof to back your claims. Otherwise it's just another biased opinion.


    • Anonymous Coward, 1 Sep 2017 @ 12:03pm

      Re: Per usual, more than a bit of context left out

      Who defines unpleasant content? How do we prevent this from becoming a race to the bottom, where anything that offends anyone is removed? Personally I am offended by cats, people who take selfies, and pictures of people's food ...


  • dave blevins (profile), 30 Aug 2017 @ 1:06pm

    Removing Accounts ...

    ... should have the same answer as Google's problem of removing "forget mes": if a court decree or govt law "requires" such, the court or govt should provide not just parameters but the EXACT account BY NAME or the EXACT POST by URL, in writing, signed by THE JUDGE or a govt official.


  • Bruce C., 30 Aug 2017 @ 1:28pm

    Do what NPR did...

    Maybe Facebook should shut down its comments section. After all, multiple organizations claim it improves their interaction with their users.


  • OrwellsPast, 30 Aug 2017 @ 4:12pm

    The future

    Well think about it this way, censoring the internet is providing jobs.

    The way things are going with net neutrality and pulling information off the internet, it won't be long until each of us is locked in a walled garden...

    In that garden only the garden tenders can push things over the fence to feed the masses...

    The garden tenders will be AI and won't know how to differentiate between what's healthy for consumption and what will cause shock to the garden's roots...

    Eventually the garden tenders will logically decide that we can only grow within our own garden to prevent infestation of ideas.


  • Anonymous Coward, 30 Aug 2017 @ 5:23pm

    Screw this company.. let it shut itself down.


  • Shane (profile), 30 Aug 2017 @ 5:26pm

    Self Regulation (Twitter Sucks Worse)

    The technology is already there to simply let us filter who we want. Why can't I selectively block whoever I want, selectively remove people's posts from MY threads, and so forth?

    What's really odd to me is, why has no one DONE this already?

    P.S. I know several people who still manage to keep their FB pages that have been banned from Twitter. And I have seen full blown porn on Twitter, so what exactly offends these people?


  • Anonymous Coward, 30 Aug 2017 @ 6:04pm

    If Mark Zuckerberg and any other billionaire really wanted to contribute to society and help people in general they would simply donate their billions.

    Yes, they want to do a lot of stuff, help a lot, "improve people's lives," blah blah... so long as they keep the money.


  • Anonymous Coward, 30 Aug 2017 @ 11:25pm

    According to facebook I was a bot.
    My account was closed literally the second day after registering.
    Facebook wanted a copy of my government-issued ID.
    I told them to go pound sand instead.


  • Narcissus (profile), 31 Aug 2017 @ 12:03am

    Selfgovernance

    I'm kind of confused why Facebook would be responsible for what appears on my feed, except for the ads obviously.

    I am the one that agrees to follow people and I can always unfriend or unfollow them. If somebody is posting stuff I find offensive I unfollow them, it's 2 seconds work.

    I guess they could make a more obvious way to flag stuff, and for me it would be enough if they just hid flagged stuff behind a link (like TD does). I also think some kind of algorithm could be made such that if something racks up a certain number of flags in a short time it gets put in review, since it seems something serious is going on.

    For the rest I can manage just fine on my own thank you.


  • Anonymous Coward, 31 Aug 2017 @ 6:49am

    facebook policies are stupid

    My brother posted a "Titty Sprinkles" image of a woman with candy sprinkles all over her bared breasts. (to the point that you can't even see skin) Someone complained, and he got banned for a week for that. He re-shared the same picture a year later because it popped up in his "what you were doing a year ago" automatic facebook popup. He got banned for a month this time.

    The picture is fairly innocuous, too. If you want to see it, here it is on another site. https://cdn1.lockerdomecdn.com/uploads/cfb6620dfbb1a11cc26d4c21a352b86d3d49404740190907000cee55f43ee202_large

    Yet they allow all kinds of worse things to remain.


