The Scale Of Moderating Facebook: It Turns Off 1 Million Accounts Every Single Day
from the not-an-easy-issue dept
For years now, we've discussed why it's problematic that people are demanding internet platforms moderate more and more speech. We should be quite wary of internet platforms taking on the role of the internet's police. First, they're really bad at it. As we noted in a recent post, platforms are horrendously bad at distinguishing abusive content from content that documents abuse, and that creates all sorts of unfortunate and bizarre results, with the targets of harassment often having their own accounts shut down. On top of that, the only way to actually moderate content at scale is with a set of rules, and any such set of rules, as applied, will create hysterically bad results. And that's because the scale of the problem is so massive. It's difficult for most people to even begin to comprehend the scale involved here. As a former Facebook employee who worked on this stuff once told me, "Facebook needs to make one million decisions each day -- one million today, one million tomorrow, one million the next day." The idea that they won't make errors (both Type I and Type II) is laughable.
And it appears that the scale is only growing. Facebook has now admitted that it shuts off 1 million accounts every single day -- which means the earlier number I heard was way low. If it's killing one million accounts every day, it's making decisions on far more accounts than that. And the company knows that it gets things wrong:
Still, the sheer number of interactions among its 2 billion global users means it can't catch all "threat actors," and it sometimes removes text posts and videos that it later finds didn't break Facebook rules, says Alex Stamos, Facebook's chief security officer.
"When you're dealing with millions and millions of interactions, you can't create these rules and enforce them without (getting some) false positives," Stamos said during an onstage discussion at an event in San Francisco on Wednesday evening.
That should be obvious, but too many people think the answer is to put even more pressure on Facebook -- often through laws requiring it to moderate content, take down content and kill accounts. And when you do that, you actually make the false positive problem that much worse. Assume, for the sake of argument, that the 1 million accounts Facebook kills each day represent 10% of the accounts it reviews -- that means it's reviewing 10 million accounts every day. If the punishment for taking down content that should have been left up is public shame and ridicule, that acts as at least some check, pushing Facebook to be somewhat careful about not taking down stuff it shouldn't. But, on the flip side, if you add a law (such as the new one in Germany) that threatens social media companies with massive penalties for leaving up content the government wants taken down, you've changed the equation.
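To make the arithmetic concrete, here's a back-of-the-envelope sketch in Python. The 10% removal rate and the 1% error rate are assumptions for illustration only, not Facebook's actual figures:

```python
# Back-of-the-envelope: what a 1-million-removals-per-day operation implies.
# The removal rate and error rate below are assumptions, not Facebook's real numbers.

removals_per_day = 1_000_000        # the figure Facebook has admitted to
assumed_removal_rate = 0.10         # assume only 10% of reviewed accounts get killed
assumed_error_rate = 0.01           # assume the rules/reviewers are wrong just 1% of the time

reviews_per_day = removals_per_day / assumed_removal_rate
wrong_decisions_per_day = reviews_per_day * assumed_error_rate

print(f"Accounts reviewed per day: {reviews_per_day:,.0f}")          # 10,000,000
print(f"Wrong decisions per day:   {wrong_decisions_per_day:,.0f}")  # 100,000
```

Even an error rate most systems would envy still means tens of thousands of accounts handled wrongly every single day.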
Now the choice isn't between "public ridicule vs. bad person on our platform," it's "public ridicule vs. criminal charges and massive fines." So the incentive for Facebook and other platforms changes, such that they're now encouraged to kill a hell of a lot more accounts, just in case. Suddenly the number of "false positives" is going to skyrocket. That's not a very good solution -- especially if you want platforms to support free speech. Again, platforms have every right to moderate content on their platforms, but we should be greatly concerned when governments force them to moderate in ways that may have widespread consequences for how people speak, and where those policies can tilt the scales in often dangerous ways.
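One way to see why that incentive shift matters: any moderation system behaves like a classifier with a threshold, and pushing the threshold down to avoid fines trades missed bad accounts for wrongly killed good ones. Here's a minimal sketch, with entirely made-up score distributions:

```python
import random

random.seed(0)

# Hypothetical "abuse scores" -- higher means the system is more convinced the
# account is bad. Both distributions are invented purely for illustration.
good_accounts = [random.gauss(0.2, 0.15) for _ in range(1_000_000)]
bad_accounts = [random.gauss(0.7, 0.15) for _ in range(50_000)]

def takedown_outcomes(threshold):
    """Kill everything at or above the threshold; count the two kinds of mistakes."""
    false_positives = sum(score >= threshold for score in good_accounts)  # good accounts killed
    false_negatives = sum(score < threshold for score in bad_accounts)    # bad accounts missed
    return false_positives, false_negatives

# "Public ridicule" regime: the platform can afford a cautious threshold.
print("cautious threshold:", takedown_outcomes(0.6))
# "Massive fines for leaving bad stuff up" regime: the threshold gets pushed way
# down, and wrongly killed accounts pile up by the tens of thousands.
print("fearful threshold: ", takedown_outcomes(0.35))
```

The exact numbers don't matter; the direction does. Lower the threshold to avoid legal risk and the false positives explode.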
Filed Under: alex stamos, choices, intermediary liability, moderation, scale
Companies: facebook
Reader Comments
1,000,000,000 per day?
Actually, I can, but will enjoy watching their demise greatly.
Per usual, more than a bit of context left out
Figuring out what social media platforms should do about unpleasant content is an important question to ask, but neither the question nor the answer has anything to do with that 1M number.
Re: Per usual, more than a bit of context left out
I wish I could find the posts analyzing how much of FB is bots, but it is a significant amount.
I'd hope people would realize that lack of action on preventing bots goes hand in hand with lack of quality in understanding things like filtering/free speech.
Re: Re: Per usual, more than a bit of context left out
I have no sympathy for them and explicitly reject the argument that gosh, it's soooo hard. They should have never built something beyond their capabilities -- but they chose to, because they're greedy assholes who only care about profit and don't give a damn about the impact on the Internet, its users, and the real world.
The most ethical course of action for them right now would be to shut the whole thing off and apologize for their hubris. They won't, of course: sociopathic monster Mark Zuckerberg will see to that.
Re: Re: Re: Per usual, more than a bit of context left out
They **are** using bot detection. How do you think they delete 1,000,000 accounts per day?
FB is likely the biggest bot account target on the internet, so bot detection isn't going to be perfect, especially when many of the fake accounts may have human farms doing some of the sign up.
Re: Re: Re: Per usual, more than a bit of context left out
In fact, one of the most damning facts showing that it is simply not true is that most bot detection today REQUIRES HUMAN INTERACTION. Good luck automating that so-called "solved problem".
The fact that you want to slap ridiculous, unrealistic expectations on them does not mean they built something "beyond their capabilities". It means you're being unrealistic and ridiculous.
Re: Re: Re: Re: Per usual, more than a bit of context left out
Yes, there are edge cases that are tough: we're working on those. But the overwhelming majority are not only identifiable, they're EASILY identifiable.
And here's the kicker: the bigger the operation you run, the easier this gets. (Why? Because small operations only have visibility into sparse data sets. Large operations can see enormous ones and exploit that to identify bots more accurately and faster.) So this is a case where FB's scale works highly in their favor -- if only they weren't too pathetically stupid and too lazy and too cheap to exploit it.
Re: Re: Re: Re: Re: Per usual, more than a bit of context left out
The solution being what?
Once the account has existed for a while, they can see whether it matches "normal" patterns. During creation there's not a lot of obvious difference between real and fake users, especially because many "fake" ones aren't entirely fake (CAPTCHAs can be farmed out to actual people).
How do you know they're not blocking 99 million creation attempts per day?
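A toy version of the "does this account match normal patterns" check described above might look like the following; every field and threshold here is invented for illustration and has nothing to do with Facebook's actual signals:

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Hypothetical per-account counters; a real platform tracks far richer signals.
    age_days: int
    posts: int
    friend_requests_sent: int
    friend_requests_accepted: int

def looks_like_a_bot(a: AccountActivity) -> bool:
    """Crude behavioral heuristic -- all thresholds are made up for illustration."""
    if a.age_days < 2 and a.posts > 200:
        return True   # brand-new account posting at firehose volume
    if a.friend_requests_sent > 500 and a.friend_requests_accepted < 10:
        return True   # mass friend-request spam that nobody accepts
    return False

# A day-old account that has already posted 500 times gets flagged.
print(looks_like_a_bot(AccountActivity(age_days=1, posts=500,
                                       friend_requests_sent=50,
                                       friend_requests_accepted=5)))  # True
```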
Re: Re: Per usual, more than a bit of context left out
I'd hope people would realize that lack of action on preventing bots goes hand in hand with lack of quality in understanding things like filtering/free speech.
Really? So all we need to do is get all of our judges and lawyers trained on bot-prevention technology and suddenly they would all agree on free speech? Wow, think of how much court time we could save with this new method. Not that bot-prevention technology means much, seeing as there is almost nowhere on the internet that's actually free of bots.
Or maybe the fact is that if we can't even get widespread agreement on free speech within the US court system, then a company which operates in substantially every country in the world might have a wee bit of difficulty with the problem. After all, free speech in Germany and free speech in the US are vastly different animals.
Re: Per usual, more than a bit of context left out
The number that should be considered is how many posts are made on Facebook every day, as some of those are what trigger shutdowns. It is guaranteed that those are well beyond Facebook's ability to examine individually.
Re: Per usual, more than a bit of context left out
I agree with the last point, but the point is really to counter the idea that FB literally does nothing, not to say that the number itself is all that important.
Re: Re: Re: Per usual, more than a bit of context left out
Ah, my apologies, I did miss that for some reason.
But, I still don't see actual citations for the claim they're mostly spammers. I do see a caveat that it's impossible to stop kicking off legit users, and complaints that it's both too strict and too lax.
Given the actual visible evidence, I don't see why the assertions in the article are incorrect.
Removing Accounts ...
And if a court decree or govt law "requires" such removal, the court or govt should provide not just parameters but the EXACT account BY NAME or the EXACT POST by URL, in writing, signed by THE JUDGE or a govt official.
The future
The way things are going with net neutrality and pulling information off the internet, it won't be long until each of us is locked in a walled garden...
In that garden only the garden tenders can push things over the fence to feed the masses...
The garden tenders will be AI and won't know how to differentiate between what's healthy for consumption and what will cause shock to the garden's roots...
Eventually the garden tenders will logically decide that we can only grow within our own garden to prevent infestation of ideas.
Self Regulation (Twitter Sucks Worse)
What's really odd to me is, why has no one DONE this already?
P.S. I know several people who have been banned from Twitter but still manage to keep their FB pages. And I have seen full-blown porn on Twitter, so what exactly offends these people?
Yes, they want to do a lot of stuff, help a lot, "improve people's lives," blah blah... so long as they keep the money.
My account was closed literally the second day after registering.
Facebook wanted a copy of my government-issued ID.
I told them to go pound sand instead.
Self-governance
I am the one who agrees to follow people, and I can always unfriend or unfollow them. If somebody is posting stuff I find offensive, I unfollow them; it's two seconds' work.
I guess they could make a more obvious way to flag stuff, and for me it would be enough if they just hid flagged stuff behind a link (like TD does). I also think some kind of algorithm could be made so that if something collects a certain number of flags in a short time, it gets put in review, since it seems something serious is going on (see the sketch below).
For the rest I can manage just fine on my own thank you.
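A minimal sketch of that flag-threshold idea, assuming an arbitrary five-flags-within-an-hour trigger; none of these numbers come from any real platform:

```python
from collections import defaultdict

FLAGS_BEFORE_REVIEW = 5      # assumed threshold
WINDOW_SECONDS = 3600        # assumed "short time": one hour

flag_times = defaultdict(list)   # post_id -> timestamps of recent flags

def record_flag(post_id, timestamp):
    """Hide flagged content behind a link; queue it for human review if flags pile up fast."""
    recent = [t for t in flag_times[post_id] if timestamp - t <= WINDOW_SECONDS]
    recent.append(timestamp)
    flag_times[post_id] = recent
    if len(recent) >= FLAGS_BEFORE_REVIEW:
        return "send to human review"
    return "hide behind a 'flagged content' link"

# Six flags on the same post within ten minutes trip the review threshold.
for t in range(0, 600, 100):
    print(record_flag("post-42", t))
```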
facebook policies are stupid
The picture is fairly innocuous, too. If you want to see it, here it is on another site. https://cdn1.lockerdomecdn.com/uploads/cfb6620dfbb1a11cc26d4c21a352b86d3d49404740190907000cee55f43ee202_large
Yet they allow all kinds of worse things to remain.