Facebook AI Moderation Continues To Suck Because Moderation At Scale Is Impossible
from the car-washes-and-mass-shootings dept
For several years now, we've been beating the drum about the idea that content moderation at scale is impossible to get right, otherwise known as Masnick's Impossibility Theorem. The point is not that platforms shouldn't do any moderation, or that they shouldn't keep trying to improve their moderation methods. Instead, this is all about expectation-setting, partly for a public that simply wants better content to show up on its various devices, but even more so for political leaders who often see a problem happening on the internet and assume that the answer is simply "moar tech!".
Being an internet behemoth, Facebook catches a lot of heat when its moderation practices suck. Several years ago, Mark Zuckerberg announced that Facebook had developed an AI-driven moderation program, along with the claim that this program would capture "the vast majority" of objectionable content. Anyone who has spent 10 minutes on Facebook in the years since realizes how badly Facebook failed to meet that goal. And, as it turns out, it failed in both directions.
By that I mean that, while much of our own commentary on all this has focused on how often Facebook's moderation ends up blocking non-offending content, a recent Ars Technica post on just how much hate speech makes its way onto the platform includes some specific notes on how the AI moderation system misclassifies some of the most objectionable content.
Facebook’s internal documents reveal just how far its AI moderation tools are from identifying what human moderators were easily catching. Cockfights, for example, were mistakenly flagged by the AI as a car crash. “These are clearly cockfighting videos,” the report said. In another instance, videos livestreamed by perpetrators of mass shootings were labeled by AI tools as paintball games or a trip through a carwash.
It's not entirely clear to me just why the AI system sees mass shootings and animals fighting and thinks it's paintball or car washes, though I unfortunately have some guesses and they aren't fun to think about. Either way, this... you know... sucks! If the AI you're relying on to filter out extreme and violent content labels a mass shooting as a trip through the car wash, well, that really should send us back to the drawing board, shouldn't it?
It's worse in other countries, as the Ars post notes. There are countries where Facebook has no database of racial slurs in native languages, meaning it cannot even begin blocking such content on the site, via AI or otherwise. Polled Facebook users routinely identify hate on the platform as its chief problem, but the company seems to be erring in the opposite direction.
Still, Facebook’s leadership has been more concerned with taking down too many posts, company insiders told WSJ. As a result, they said, engineers are now more likely to train models that avoid false positives, letting more hate speech slip through undetected.
Which may actually be the right thing to do. I'm not prepared to adjudicate that point in this post. But what we can say definitively is that Facebook has an expectation-setting problem on its hands. For years it has touted its AI and human moderators as the solution to the most vile content on its platform... and it doesn't work. Not at scale, at least. And outside of America and a handful of other Western nations, barely at all.
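To make that trade-off concrete: a moderation model scores each post, and someone picks the threshold above which action gets taken. Here's a minimal sketch of what tuning against false positives does to the miss rate; the score distributions and counts below are invented for illustration and have nothing to do with Facebook's actual systems.

```python
# Toy illustration of the precision/recall trade-off: raising the
# action threshold cuts wrongful takedowns but misses more hate.
# All numbers are invented; this is not Facebook's real model.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical classifier scores: benign posts cluster low, hateful
# posts cluster high, with plenty of overlap between the two.
benign = rng.normal(0.30, 0.15, 100_000).clip(0, 1)
hateful = rng.normal(0.65, 0.15, 1_000).clip(0, 1)

for threshold in (0.5, 0.7, 0.9):
    false_positives = int((benign >= threshold).sum())
    missed = int((hateful < threshold).sum())
    print(f"threshold {threshold}: {false_positives:>6} benign posts "
          f"wrongly removed, {missed:>4}/1000 hateful posts slip through")
```

Slide the threshold up and the wrongful takedowns drop while the miss rate climbs; no setting makes both problems go away, which is rather the point.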
It might be time for the company to just say so and tell the public and its representatives that it's going to be a long, long while before it gets this anywhere close to right.
Filed Under: ai, content moderation, impossibility
Companies: facebook
Reader Comments
Or...
Facebook continues to suck.
Can’t it be both?
AI has a fundamental unsolved problem
Mr Stone,
Let's find a "Bad" post from our favorite miscreant commenter on Techdirt here... nobody except him has a problem with that post being hidden.
Now, let's have you, one of the best commenters here, write the retort to it that wins the "best post of the week" contest, one that only works if it quotes the original.
So we have, on the one hand: Bad material. On the other hand, Bad Material plus a tiny dash of your magic touch = win best comment of the week!
Now how is anyone gonna build an AI that can figure that out???
Well, we could have trained the AI on what genocide looks like, but the YT AI was busy taking down the videos uploaded to show the world what happens when nations turn on their own people.
People keep demanding that tech 'nerd harder' & that the nerds will magically make a leap that should take 100 years to reach... and the nerds try to do it, fail miserably, and the people demanding they nerd harder pretend it's super simple & they just don't want to.
Facebook is a business (Metaverse... AYFKM?)
People demand more & more from Facebook that they do not demand from other businesses.
We have laws making it a crime for people to report on shortcuts & outright evil acts in the processing of our food, and I'm pretty sure we've had way more illness & death from that than from Facebook's AI missing a cockfight.
An oil leak was detected in a pipe; they didn't shut it down for hours, increasing the spillage, then lied about how long it was leaking & how much could have been dumped. But FB missed a slur in another language & that gets Congress in a bunch.
Families in housing on military bases are forced to live in substandard, dangerous housing. Way to repay our troops, giving their families medical problems because saving a couple bucks by ignoring the leak that created the black mold helped the bottom line.
Yes there is a problem on FB, but FB isn't creating it out of thin air... humanity is a bunch of fucked up people, thinking and posting horrible things. Yes FB should do better, but pretending that the users aren't throwing endless amounts of shit at other people's walls hoping some of it sticks makes it impossible for FB to do anything that pleases anyone.
The AI was programmed with a simple premise: if they show interest in A, they might also like D, E, or F. The AI nudges them, offering up more things because they then liked 2 things from F, which branches again, and somehow we are at a page promoting Jewish Space Lasers.
No one told the AI that humans are stupid, racist, xenophobic, & sometimes are just generally evil to people not just like them.
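A toy sketch of that premise; the topics and the "also liked" edges below are invented purely for illustration:

```python
# A toy "users who liked X also liked Y" recommender, showing how a
# few hops of nudging can drift from an innocent interest to fringe
# content. Topics and edges are invented for illustration.
related = {
    "gardening": ["homesteading", "prepping"],
    "prepping": ["off-grid living", "conspiracy groups"],
    "conspiracy groups": ["fringe politics"],
}

def nudge(seed: str, hops: int) -> list[str]:
    """Follow co-engagement edges outward from a starting interest."""
    frontier, seen = [seed], [seed]
    for _ in range(hops):
        frontier = [n for topic in frontier for n in related.get(topic, [])]
        seen += [n for n in frontier if n not in seen]
    return seen

# Three hops from "gardening" and the model is already suggesting
# conspiracy content; nobody told it where the edges lead.
print(nudge("gardening", hops=3))
```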
Even the BEST AIs today are retarded. (Oh noes a bad word)
It's not even something like 50% as good as a human making decisions, yet we all pretend AI can be all-seeing and all-knowing just like in the sci-fi movies/TV.
Yes the Breast Cancer AI has great success at flagging images for humans to review, but the only dataset involved is images of tits & spots that turned cancerous. There are only 2 possible outcomes... 'I don't see cancer' & 'I think I see cancer, please check these images.'
But we can't pretend another AI can be as effective when there are literally 100's of ways to see things & the connotation taken away by many sometimes isn't the denotation intended.
If we want FB to have a perfect system, they just need to hire everyone on the planet. Let each half watch the other half. Yep native speakers of Japanese might miss some meanings that might offend those in the US, but the error rate would be on par with the AI they use now... maybe even slightly better but never the perfect utopia people imagine it could be... if they just nerd harder.
椅子 ('chair') is a 'crime' in China; it's also a word common in catalogs selling furniture.
維尼熊 gets you a trip to a reeducation camp for insulting the leader... Pity you were just posting photos of your trip to EuroDisney & your children hugging Winnie the Pooh.
They can keep trying to train AIs, but they'd literally need to build 1000's of AIs with information from every nation, faction, etc. to train them, then get that system to sign off on every single post made so that no one ever gets offended or mad...
Re: AI has a fundamental unsolved problem
And we know that there are humans who also can't figure that out.
or beating up English comprehension.... methinks thy fingers and brain lost synchronization there Tim.
Yes, avoiding false positives is the right direction to go. It's important for them to recognize when trying to do the impossible is harming people. People getting falsely banned because the mod-bots suck is a far worse outcome than some bad speech getting through. People can always block the assholes manually if they want.
It would be best for social media to have a realistic assessment of what they can and can't moderate well. Do a proper job with the things they can, and admit it when they can't instead of trying the impossible and botching things in the name of "doing something."
Why is FB or any social media moderating at all? Are they platform or publisher?
Nothing should be censored. No one should have their account canceled.
Only a court order should be allowed to remove illegal content.
Social platforms should be prohibited from interfering in elections. Their algorithms need to be transparent.
Re:
And assholes can create new accounts and orchestrate attacks on people. For individuals, avoiding them can be a game of whack-a-mole.
Re:
If you have to ask, even in a rhetorical fashion, you don't understand the issue at all.
Please name your social media accounts so I can fill them with goatse.
Translation: Property owners should have no control over how others use their property, even if users break the owner's rules. Doesn't sound particularly reasonable to me.
If it's illegal, it's illegal - no need for a court order, let illegal content stay up, expect legal consequences. Also, see previous comment about property.
Define interfering. Does it include campaign contributions? Does it include endorsement of candidates or parties? Does it include what users post on their accounts? Why only social platforms? Traditional media "interfere" all the time.
Yes and no. In principle it would be good for them to be transparent, but given human nature (people gaming whatever they can see), transparency would probably make things worse.
On the whole, you seem to have an extremely naïve and simple grasp of the issues being discussed.
Facebook has a right to moderate content on its servers. You don’t have a right to make Facebook host your content. Losing your spot on Facebook is not censorship.
None of those statements are factually incorrect and you can’t prove they are.
Re:
"People getting falsely banned because the mod-bots suck is far worse an outcome than some bad speech getting through"
Depends on your idea of "bad speech".
To give a random example - a couple of people I know are "edgy" characters who fairly frequently get timeouts on Facebook because of the racist, homophobic or transphobic things they share. They know they're saying things they shouldn't and IRL seem to get along OK with the minorities in their peer group. They just seem blissfully unaware that the reason they get occasional days or weeks-long timeouts on Facebook is because they are offending people they know who have to put up with daily abuse from people who are not them.
Those guys will moan about blocks, the people they associate with enjoy the silent periods.
"People can always block the assholes manually if they want."
There's always new assholes. So, what's the right balance? Force the people under constant attack to play whack a mole with all the people lining up to make their lives worse? Or, ensure that the platform is as asshole free as possible, with the caveat that occasionally someone might be temporarily blocked for something they didn't consider to be wrong (most people only get blocked for a few hours or days on their first offenses, not banned).
The problem is that this is all completely subjective. One man's banter is another man's abuse, and a platform that operates globally with billions of users cannot possibly find any kind of balance that is acceptable to everyone, no matter what they choose to do.
Re:
"Why is FB or any social media moderating at all?"
Have you seen the state (and financial irrelevance) of the platforms that don't?
Moderation also sucks if they aren't moderating what they...
Moderation also sucks if they aren't moderating what they claim to be moderating.
They publicly claim to be moderating COVID misinformation, election misinformation, hate speech, etc. Privately (at least according to various whistleblowers) they are moderating to maximize engagement, and therefore profits.
It just so happens that COVID misinformation, election misinformation, hate speech, etc. drive engagement.
The fact that the AI and algorithms misidentify engagement driving content isn't a bug, it's a feature.
Re: Moderation also sucks if they aren't moderating what they...
Who watches the watchers?
The ultimate problem with moderation is that it's subjective. One man's acceptable content is another's form of abuse. No company that spans billions of users across the globe can come up with a standard strategy that pleases everyone - and people from different parts of the globe use the platform to talk with each other.
How we remove the profit motive from misinformation and hate speech without infringing on the things people use the platforms for that are actually useful and acceptable is a major issue. But, it's one that's removed from the question about moderation. Even if Zuckerberg decided to personally finance Facebook for a year out of his own pocket without any hands on involvement, thus removing any profit motive, the central challenges remain.
Re:
Yikes, thank you for pointing that out. Either my synapses misfired there or the AI I built to write these posts for me sucks as bad as Facebook's!
big data and small data
The reason their AI tags mass shootings and cockfights as car washes and paintball is that there are a lot more videos of innocuous stuff than of dangerous or illegal stuff, and models learn whatever distribution they're trained on. It's the same reason facial recognition software often fails to recognize Black people: the training images are mostly of white people.
Facial recognition is fixable because training images are easy to collect, mass shootings much less so.
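A minimal sketch of that class-imbalance effect with a toy classifier; the feature distributions and counts are entirely invented:

```python
# Toy demonstration of class imbalance: when violent clips are rare,
# a model minimizing overall error learns to call almost everything
# innocuous. Features, counts, and distributions are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 4,950 innocuous clips vs. 50 violent ones, with overlapping
# feature distributions to mimic visually similar footage.
innocuous = rng.normal(0.0, 1.0, size=(4950, 8))
violent = rng.normal(0.6, 1.0, size=(50, 8))

X = np.vstack([innocuous, violent])
y = np.array([0] * 4950 + [1] * 50)  # 0 = innocuous, 1 = violent

# "Predict innocuous" is right 99% of the time, so the model
# rarely flags anything, including real violence.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(f"flagged: {int((clf.predict(violent) == 1).sum())}/50 violent clips")

# Reweighting the rare class is the standard partial fix.
balanced = LogisticRegression(max_iter=1000, class_weight="balanced")
print(f"balanced: {int((balanced.fit(X, y).predict(violent) == 1).sum())}/50")
```

Reweighting or oversampling helps, but only when you have enough real examples of the rare class to learn from, and footage of mass shootings is (thankfully) exactly the kind of data that stays scarce.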
Re: Re:
Blissfully unaware that the modern internet bully doesn't say bigoted things; they accuse others of saying them. You never get banned for calling people -ists and -phobes.
Or maybe they aren't actually trying to bully people at all. I don't know and don't want to assume.
What's your source on the penalties most people are given? In any case, I'd still argue that getting banned for a few days for nothing is a far worse outcome than having to click a button to hide posts you don't like.
I know lots of people like to imagine themselves celebrities on the internet, but in reality there are very few who have people lining up to talk to them.
And ultimately moderation doesn't make platforms asshole-free at all, it just directs them to moderator-approved forms of abuse. Which is why calling people -ists and -phobes is today the most effective form of bullying.
Of course not. Everyone wants the moderator to side with them every time, no compromise. But that's impossible and therefore not worth considering.
Morality is not all subjective, and setting the system up to punish people "just in case", when it can't accurately tell whether they did anything wrong, is definitely not on the right side of it.
Re: Re: Re:
"Or maybe they aren't actually trying to bully people at all"
They weren't, that was the point of the anecdote. They dropped jokes about trans/gay people unaware that people they know IRL would take offence (bear in mind that, pre-COVID, this area held one of the biggest Pride parades around, and they have friends heavily involved in that), but they didn't consider the repercussions when they posted what they did. In this area at least it's all in good fun to a certain degree; it's just amazing to me how little certain people understand about an audience when it's not just the people in front of them at that moment.
"What's your source on the penalties most people are given?"
I can only go with anecdotes, but the people I mentioned above only tended to get a few days, then a few weeks. Then, the few who were actually banned signed up with another account and started again. Not a good sign for the attacked, right?
"I'd still argue that getting banned for a few days for nothing is far worse outcome than having to click a button to hide posts you don't like."
Then, you appear to underestimate the abuse people get online.
"And ultimately moderation doesn't make platforms asshole-free at all, it just directs them to moderator-approved forms of abuse"
Which is basically good. Kicking the asshole who starts fights every Friday night out of your bar doesn't necessarily make the world asshole-free, but it gets rid of the predictable asshole, makes life easier for everyone else, and if the other bar owner who gets him wants to put up with him, that's also his choice.
"Everyone wants the moderator to side with them every time, no compromise. But that's impossible and therefore not worth considering."
Indeed. So what you do is take a general test of what your community wants to put up with and get rid of the outliers. Remember - with social media, the actual customers are not the users, it's the advertisers, and it's harder to sell if half the community have been convinced to leave by one obnoxious dickhead.
"Morality is not all subjective"
Oh, but sadly it is, and you'd be amazed at how some peoples' opinion of it differs.
Re: Re: Re: Re: Respectful Disagreement
PaulT, I disagree with you on a few points:
a) Morality can probably be objective, if you define it as "doing what's best for the most people over the long run". Of course, that proceeds to turn immediately into any number of subjective judgements....we are busy arguing about the "best" moderation here, and yet we don't even know how to measure how good a moderation system is!
b) It's been pointed out that large audiences tend to create abuse by accident, as follows: Say something just slightly problematic to 1000 or so viewers, maybe your spelling is bad today. 30 of them criticise you for that...and it feels like abuse on your end, even though none of those 30 meant to abuse, and if only one of them had said the same thing face to face, it would not have bothered you.
c) Finally, who said it matters quite a bit; consider the case of "The Slants", covered here awhile back. I'm distinctly not Asian, so I'll get in trouble for calling someone a slant at work, but these Asian guys decided it was a great name for their band -- nobody is gonna bother them about using the word!
AI will never work for moderation unless all of this context is part of the model!
Breast cancer screening AI works because it runs in a single context.
Moderation sucks in any large enough context...
Not only is moderation subjective, it's context-dependent.
Take your favorite comment about Elon Musk and SpaceX and what will happen to the stock price. Here on Techdirt, I expect that to get moderated and made invisible; it's not good for Techdirt.
On the other hand, there's lots of SpaceX oriented forums where the exact same content might be welcome.
Start thinking like that, and the difficulty of moderating Facebook at scale becomes obvious -- if the whole world gets pointed at some bit of content, it's not going to work for some; the personal contexts are just too diverse.
Executing serial killers who would otherwise be serving life sentences might be “doing what’s best for the most people over the long run”, but is it an objectively moral act?
A good moderation system keeps a community from devolving into a free-for-all of abuse and trolling—i.e., from turning into a 4chan ripoff. Moderation is community curation; if mods don’t curate the community they want, they’ll end up with the community they deserve.
That sucks, but that’s life.
Re: Re: Re: Re: Re: Respectful Disagreement
"Morality can probably be objective"
No, it really can't. There's almost nothing you can say or do without someone else finding it immoral, even if within your own community it's not. That's subjective by definition, and it can never be objective.
"It's been pointed out that large audiences tend to create abuse by accident"
There's a massive difference between creating something that proves to be problematic in a cultural climate decades later, and sharing an "edgy" meme that immediately offends people. If the latter proves to be the case, the correct action is to inform the person that they were over the line and take appropriate action, not to put up with their shit in the vague hope that they'll come to their senses.
The reason moderation at scale is impossible is because of the context issue, but the deeper issue is that it's impossible to have a global community where everyone agrees on what is moral or acceptable. Therefore, for good or bad, someone has to make a judgement as to what is acceptable in the community and its members react accordingly, either by conforming to the agreed standard or going somewhere else that agrees with their own judgment.
Re: Re: Re: Re: Re: Re: Respectful Disagreement
so, under what set of facts is your reply to my post immoral according to what definition of immorality?
To have moral standing, something has to be "good" or "bad" for at least one person, but your post (and this one) will be lost in the noise. I claim that for something to be moral or immoral, it has to change the greater good in some way, and I don't think that, by itself, is subjective. However, "greater good" is very subjective -- what scope? what's good? what's harm? what quantum of harm is sufficient to be "bad"?
Now, also, your response to me, paraphrased a hundred different ways by a hundred people, and also posted here, would feel like abuse -- but that abusiveness would be a network effect, where what came out of the network was not what went into it.
Re: Re: Re: Re: Re: Re: Re: Respectful Disagreement
"so, under what set of facts is your reply to my post immoral according to what definition of immorality?"
Whatever subjective definition you choose to use. You may or may not be the only person with that definition, but there is no objective definition to use, hence the problem.
I wouldn't personally consider anything I said to be moral or immoral, but that's not the point. If I were gay, some people would consider my very existence immoral no matter what words I wrote. It's not an objective thing, and needn't even make sense to anyone but the people having a problem with it.
Re: Re: Re:
By that time, you've already seen the post. So whatever you didn't want to experience by seeing it, you've already experienced. Hiding it after the fact is slightly better than nothing, but it's not much.
Re: Re: Re: Re:
That is an important point - some people seem to labour under the delusion that it's fine for everyone to see everything and individually categorise posts. That won't work since by the time you see the post the damage is already done, and you'll inevitably lose a lot of users if you're a platform that forces offensive content on to people like that.
No, the sensible way forward is to gather a community consensus on what's acceptable, filter content that violates those rules and reprimand those people responsible for content that regularly does so. This is a system that often works quite well, both offline and on, it's just that some people have decided that being told they're wrong is more offensive than what they're posting.