Content Moderation At Scale Is Impossible: Facebook Kicks Off Anti-Racist Skinheads/Musicians While Trying To Block Racists
from the brings-me-back dept
So, this one brings me back. A few decades ago, I spent a lot of time hanging out with skinheads. And back then, it was all too common to have to go through the standard explanation: no, skinheads are not all racists. Indeed, the original skinheads in the 1960s were working class Brits with an affinity for Jamaican music, immortalized in songs like "Skinhead Girl" and "Skinhead Moonstomp" by the Jamaican band Symarip -- and that meant that many of the original skinheads were also immigrants to the UK from the Caribbean. It was only in the 1980s that a group of newer skinheads started associating with various fascist movements in the UK. Of course, as with so many things, the media picked up on those neo-nazi skinheads and ignored the roots of the movement. In response to the media suddenly believing that all skinheads were nazis, many started associating with the "SHARP" movement (Skinheads Against Racial Prejudice -- though also a play on the fact that skinheads like to dress "sharp"). There's a lot more to all of this, with plenty of sub-cultures and sub-groupings, and there are plenty of skinheads who are neither racists nor officially "SHARPs," but I'd kinda thought I'd left all that debate and culture behind many years ago, only to have it come crashing back into my consciousness last week with the news that Facebook had kicked off a ton of anti-racist and SHARP skinhead accounts, believing that they were racists.
Hundreds of anti-racist skinheads are reporting that Facebook has purged their accounts for allegedly violating its community standards. This week, members of ska, reggae, and SHARP (Skinheads Against Racial Prejudice) communities that oppose white supremacy are accusing the platform of wrongfully targeting them. Many believe that Facebook has mistakenly conflated their subculture with neo-Nazi groups because of the term “skinhead.”
The suspensions occurred days after Facebook removed 200 accounts connected to white supremacist groups...
Somewhat incredibly, this included the personal account of Neville Staple, the Jamaican-born frontman of the famed UK ska band The Specials (the band that literally launched the 2 Tone label and musical movement, which was named, in part, for the fact that most of the bands associated with it had both black and white members).
Of course, all of this seems like just another example of the Masnick Impossibility Theorem at work. If you don't know much about the subculture, you might actually believe that skinheads are, by definition, racist. It's a common enough belief. It's not at all historically accurate, but you have to actually understand the culture and the history and the context to know that. If you've just heard somewhere -- as many have -- that skinheads are racists, then it's easy to think that any "skinhead" page or account should be shut down.
And, of course, some of the removals (like Staple) were particularly absurd if anyone had bothered to look at the pages in question:
The account of Clara Byrne, singer of Brighton hard reggae band Dakka Skanks and a musician of color, was also temporarily disabled. Byrne’s most recent Facebook posts support Black Lives Matter and the uprisings against police brutality.
But, of course, part of the issue is that this is not Facebook employees going one-by-one and examining each and every page. That's not how these big automated sweeps work. And that's just part of the issue with the scale here. Facebook simply can't manually review every page like that, because while you can figure out why each of these removals is absurd, or where the confusion came from, for every bit of time you spend doing that, thousands of trolls, bots, and actual racists are signing up and creating mayhem as well.
Inevitably, mistakes are made.
To Facebook's credit, soon after this started getting attention, the company reinstated all of those accounts, though there was some other confusion along the way (for one account, the company apparently demanded the user upload ID to prove who they were). Obviously, Facebook can and should continue to improve its systems to avoid situations like this (though most of the people impacted in the article took it in a good-natured way). But it's yet another example of the impossibility of moderating so much content at scale without inevitable mistakes.
Filed Under: content moderation, content moderation at scale, impossibility theorem, neville staple, racism, sharps, skinheads
Companies: facebook
Reader Comments
Wonder if the Dead Kennedys' "Nazi Punks Fuck Off" would get steamrolled too.
The assumption about what "skinheads" means is not much different than how the term "hackers" has shifted meaning from someone who likes to tinker with things to someone who commits crimes with a computer. However, this is something that happens to lots of words. Meanings evolve.
As for moderation at scale, perhaps the problem is one of scale in a different way. Like Microsoft, Facebook is as large as it is because of user lock-in. There is Facebook and there is ____ ? That makes it ideal for spreading any kind of message at all and making any sort of belief seem more popular than it really is. If Facebook weren't ubiquitous, spreading disinformation or whatever would be less attractive, because it would require doing so across many platforms to reach the same audience, which is a lot more effort. Different platforms that handled the issue of moderation independently would let people choose to go wherever they were most comfortable. Perhaps the solution to Facebook would be antitrust laws.
Re:
Myspace, Mastodon...
Re:
Meanings don't just evolve; multiple meanings coexist -- until the stupid brigade chooses to ignore everything else.
Dumbing down language isn't evolution, it's more like genocide.
Re: Re:
Apparently, you know little about how languages evolve, and you probably use words every day that have meanings different from what they originally meant. Look up "Semantic change" on Wikipedia. It has nothing to do with dumbing down.
Re: Re: Re:
Sure, I know that, and yet I disagree. Ignorant (and proudly ignorant) people do ignorant things all the time, and one of those things is popularizing abuse of meaning. It's been happening at an increasing rate, and forced loss of meaning is not a net positive. It's stupid.
This seems to be a downside of the Facebook model -- free (there are upsides, too, of course). If it were a paid product, the purchase cost could become a revenue source for Facebook. I'm sure that even at a one-time upfront price of, let's say, $20, it would easily pay for a human to do manual reviews. How fast could a moderator ban spam bot accounts at $20 per pop? I would love to try a business model like that.
Alas, the reality is that people on the internet are anonymous, and social media products are "free". So because we can't push the costs of moderation onto the rule-breakers, it probably is always going to be impossible to moderate at scale.
Re:
What you are proposing is that every post by every user is moderated, which is absurd, as it would require a moderation team 1% to 5% or more of the size of the user base -- that is, a moderator for every 20 to 100 users. Even at that, some world event, like the current protests, would put the team days behind within hours, if not minutes.
Re: Re:
No, I wouldn't do that. I would only have human review for posts which get flagged by an automated system.
Human review might greatly improve moderation at scale and prevent false positives such as in the story, but the number of flagged posts seems impossible to handle due to the overwhelming volume. If there were a cost to the bad actors, then the amount of moderation necessary could decrease across the board. As an example, few people get kicked out of a nightclub within the first few minutes after paying a cover charge. Not many folks are obtuse enough to misbehave immediately after paying $20 at the door.
Re: Re: Re:
Facebook has about 1.5 billion daily users, some of whom post more than once a day. So, at least 3 billion posts a day. If 1% of those get flagged, that is 30 million flagged posts a day. At a minute per post to review, that is about 21,000 round-the-clock man-days every day to deal with them. That means a team -- with management, human resources, and technical support -- north of 110,000 people, and they could still be overwhelmed by something like the current protests.
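For anyone who wants to check that back-of-envelope math, here it is as a few lines of Python. Every number (posts per day, flag rate, review time, shift length, overhead) is an assumption from the paragraph above, not a real Facebook figure:

```python
# Back-of-envelope staffing estimate; all inputs are assumptions,
# not real Facebook numbers.
posts_per_day = 3_000_000_000    # ~1.5B daily users, ~2 posts each
flag_rate = 0.01                 # assume 1% of posts get flagged
review_minutes = 1               # one minute of human review per flagged post

flagged = posts_per_day * flag_rate              # 30,000,000 posts/day
review_hours = flagged * review_minutes / 60     # 500,000 hours/day
round_the_clock_days = review_hours / 24         # ~20,833 "man-days" per day

reviewers = review_hours / 8                     # 62,500 people on 8-hour shifts
with_weekends = reviewers * 7 / 5                # 87,500 to cover 7-day weeks
with_overhead = with_weekends * 1.3              # ~113,750 incl. management/HR/IT

print(f"{flagged:,.0f} flagged posts/day -> ~{with_overhead:,.0f} total staff")
```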
Just where do you find and train 100,000 plus people to moderate to a common standard?
Do you still think human review is reasonably possible? And note that more, smaller sites would increase the number of people required overall.
Re: Re: Re: Re:
I found a statistic from omnicoreagency dot com which claims there are 5 billion comments posted to Facebook monthly. So it may not be as much as 3 billion posts per day, and I also bet the share of flagged posts is actually lower than 1%. We're getting into some Drake Equation stuff here.
But I get your point. You're probably right that human review for all flagged posts is still too difficult to achieve across the board. A reasonable number of moderators would probably be overwhelmed by some kind of emergency crisis event. And there are probably a whole lot of other problems with a paid comment model, such as privacy. I'm not saying that it would work.
Mostly what I'm trying to get at is that there might be a way to make life easier for moderators, or decrease the chances of problems like the one in the main story, by increasing the human-review to automated-scrub ratio. It will never be perfect, and so the Masnick Impossibility Theorem will still always hold true. I'm just trying to point out the tradeoffs of the current system, and explain how our current free comment model involves a relatively high moderation cost compared to the relatively low cost to bad actors of creating a new account to abuse. A better system design will probably require a significant model overhaul.
Ain’t no “probably” about it, and you know it.
Still wouldn’t change much. Automated moderation runs into issues like the Scunthorpe problem; human moderation runs into subjective human biases that can’t be drummed out of people even with the promise of a paycheck. If a human moderator sympathetic to anti-LGBTQ causes is in a position to review a post sharing those same sympathies that was flagged for using “f⸺t”, that moderator saying “eh, I’ll let this one slide” is not entirely out of the question.
You can’t design a system without designing around human behavior, and no system can 100% account for that. You can improve moderation tools and whatnot, but you can’t change people with regex wordfilters.
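Since the Scunthorpe problem came up, here's a minimal Python illustration of it. The blocklist entry is the canonical example, and note that even the word-boundary fix only solves the substring case, not the context problem described above:

```python
import re

# A naive substring filter of the kind wordfilters start out as. It
# flags the town of Scunthorpe because a banned string is embedded in
# its name; a word-boundary regex avoids that, but still can't judge
# context (e.g. a post quoting a slur in order to condemn it).
banned = "cunt"  # the canonical Scunthorpe-problem blocklist entry

def substring_filter(text: str) -> bool:
    return banned in text.lower()

def boundary_filter(text: str) -> bool:
    return re.search(rf"\b{re.escape(banned)}\b", text.lower()) is not None

print(substring_filter("Welcome to Scunthorpe"))  # True  -- false positive
print(boundary_filter("Welcome to Scunthorpe"))   # False -- boundary check helps
```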
Re: Re: Re:
"No, I wouldn't do that. I would only have human review for posts which get flagged by an automated system."
I believe there is ample evidence that this system has been tried and simply does not work as advertised. For example, the dude whose nature video containing wild birdsong was taken down because the automated system detected music. The takedown was disputed, which triggered the human review requirement, which was apparently skipped, because the appeal was rejected.
Guy Gets Bogus YouTube Copyright Claim... On Birds Singing In The Background
Re:
"I'm sure that even at a one-time upfront price of, let's say, $20, it would easily pay for a human to do manual reviews."
So, first a question, then a counter-point.
First, the question: Would you have created an account and posted here if you had to pay to do so, with all that would entail?
Second, the counter-point: a one-time $20 per user cost would only even possibly pay for moderation if you ignore that such a fee would drive off an enormous number of people, leaving the platform drastically smaller, if it ever got off the ground in the first place (even now, if they tried to introduce that, they'd likely lose a huge chunk of their user base).
Moderation is an ongoing and ever-increasing cost, so to pay for it you would need an ever-increasing source of income. If you tie that income to sign-up fees, you basically ensure that you will not be able to meet those costs, as the cash flow from incoming accounts will not be able to keep up with even current moderation costs, never mind the need to moderate both old and new accounts.
"So because we can't push the costs of moderation onto the rule-breakers, it probably is always going to be impossible to moderate at scale."
That those who deliberately break the rules seldom face significant penalties might be a part of it, but the biggest part is simply scale and context: there's simply too much content, some of it where context is vital (as seen in this article), and too many ideas of what counts as 'acceptable' to realistically vet all of it and make sure that everyone is happy.
If you've got one million things to look over but only the time, money, and manpower to give a tenth of that a good look, you either reduce the time given to each piece and therefore increase the odds of false positives and negatives, automate it, which can deal with the obvious cases but carries its own problems, or do a mix of the two, which has its own ups and downs.
Alternatively, you can reduce the amount of content you have to deal with, and while that will shift most of the problem off of you, doing so leaves the users/public in a worse position, as now many of them find themselves facing either no platform to post on, or a scattering of smaller platforms, which rather undercuts the usefulness of social media for that 'social' thing.
Re: Re:
I'm not saying that a paid comment model would succeed. Free commenting DOES have its advantages. I'm just trying to say that the Masnick Impossibility Theorem is correct in the current environment.
Perhaps not for just one website, but I could imagine a system where someone could pay to have their identity verified, and then this login might work for comments or forum discussions across a number of affiliated websites. I might go for that.
You're right, it couldn't pay for ongoing moderation. My hope would be for such a system to prevent human moderators from being overwhelmed by bad actors creating a new account and misbehaving on their first post.
And this is also why the Masnick Impossibility Theorem is still always going to be correct. However, part of the theorem is that we ought to try and improve our existing moderation systems, even if we never achieve a perfect system. Can a system with paid access to forum discussions be developed to be superior to a free-access system filled with spam and bots and trolls? I'm not sure, but I'd like to throw out an idea for consideration; even if it's not practical, it at least explains the cost-shifting phenomenon which makes life miserable for the moderators.
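For what it's worth, here's a minimal sketch of how the shared verified-identity login idea above could work: an identity provider signs a token that any affiliated site holding a shared secret can verify. Everything here is hypothetical, and a real deployment would use an established standard like OpenID Connect rather than a hand-rolled scheme:

```python
import hashlib
import hmac

# Hypothetical secret distributed (out of band) among affiliated sites.
SHARED_SECRET = b"distributed-out-of-band-to-affiliated-sites"

def issue_token(verified_user_id: str) -> str:
    """Called by the identity provider after verifying (and charging) a user."""
    sig = hmac.new(SHARED_SECRET, verified_user_id.encode(), hashlib.sha256).hexdigest()
    return f"{verified_user_id}:{sig}"

def check_token(token: str) -> bool:
    """Called by any affiliated site to accept a comment from a verified user."""
    user_id, sig = token.rsplit(":", 1)
    expected = hmac.new(SHARED_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("verified-user-42")
print(check_token(token))  # True on any site that holds the shared secret
```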
Re: Re: Re:
"Perhaps not for just one website, but I could imagine a system where someone could pay to have their identity verified, and then this login might work for comments or forum discussions across a number of affiliated websites. I might go for that."
Possibly even more than the money, that would be a huge problem, stripping people of the ability to comment anonymously (if not directly, then via the knowledge that there is a record of who you are in at least one place). While the ability to post anonymously allows trolls to trash comment sections and/or post garbage, it also allows those who would not dare speak under their real name to do so (whether from fear of familial or social shunning, or because it could literally cost them their life). That would be a significant loss were it to be limited, such that I'd consider having to put up with trolls a worthwhile cost weighed against the gains.
"However, part of the theorem is that we ought to try and improve our existing moderation systems, even if we never achieve a perfect system."
Coming up with new ideas, even ones that end up not holding up, is a worthwhile action to be sure, as there's always room for improvement. So while I disagree with this particular idea as not worth the cost, I would agree that throwing out ideas to test is still a good idea itself.
"Can a system with paid access to forum discussions be developed to be superior to a free-access system filled with spam and bots and trolls? I'm not sure, but I'd like to throw out an idea for consideration; even if it's not practical, it at least explains the cost-shifting phenomenon which makes life miserable for the moderators."
I mean... kinda? I could see some people being interested in a much smaller, much more curated platform; I just don't see it replacing the current, more open ones, as by its very nature it would also be much more restrictive, both in the number of people who could use it and in the content (speech or otherwise) that would be allowed on it.
That depends. How do you feel about the Something Awful forums?
Re: Re: Re:
"I'm not saying that a paid comment model would succeed. Free commenting DOES have its advantages."
Wait a sec ... I thought you had in the past supported the claim that moderation causes First Amendment violations and that some large platforms should be public spaces ... no? And now you want to charge money for the privilege of speaking your mind on a blog that, by your claim, should be a public space? This is not a First Amendment violation? How so?
Scaling is hard. Nuance is also hard
It's good to see the culture of my teen years covered here, but nuance is hard. I was a teen skinhead in the late sixties in the UK, so I can speak about my own experience.
Yes, white skinheads loved West Indian music. We were the early adopters of bluebeat, ska, rock steady, and reggae in the wider world. We also admired the local black kids who were born on Caribbean islands and moved with their parents to Britain during the fifties [1]. The style, the music, and the slang were adopted by us white boys. During the late sixties the love didn't go both ways, though. If you tried to go into a black pub to hear the latest Jamaican imports, well, let's say it was not advisable.
While the white skinheads may have wanted to be as cool as their West Indian counterparts, the same attitude didn't apply to other immigrants. The London suburb adjacent to mine had a very high immigrant population: not just West Indians, but also Pakistanis (East and West) and Indians. That's a mix of religions and cultures in one small area. The skinheads didn't like the Asian arrivals at all. I think the cultural differences were too great compared with those of the Caribbean migrants.
At this point the local sport for the white British teens became "Paki bashing". The name isn't even accurate; any Asian was a target.
By the seventies things had changed for the better, I'm pleased to say. Left-wing skinheads realised they had more in common with the migrants than with their right-wing counterparts. Police bullying was something they all had to deal with, and by the time the punk movement arrived, you had a good coalition of races and cultures working against the forces of oppression.
Today many Britons' favourite food is more likely to be curry washed down with a Red Stripe than fish and chips, and people are more suspicious of the EU than of their local Asian newsagent.
As for me, I was 16 when the Paki bashing started and I didn't want to be part of that. I made other friends of a more tolerant nature.
[1] Migrants from all over the Commonwealth came to the UK at the invitation of the government to help fill the job vacancies of the 1950s' boom period.
Wait, if content moderation is impossible, then why do you practice it here? Why do you hide comments, especially interesting ones written by famous authors, inventors, and practitioners of healthy herb products and such?
Re:
I'll Tell You Why! Because they hate Melania here! She is the most beautiful woman who EVER LIVED, and they HATE HER FOR IT! SO they Censor comments about POTUSW!
Re: Re:
You two seriously need to get a room.
Hey, Harder - how's Shiva living up your failure to prove that he invented email, chucklefuck?
Re:
You're a middle-aged troll who needs to get what's left of his life together.
Re:
"At scale" is the key phrase here. If the user base of Facebook were a planet, the user base here would be the size of a small piece of gravel at best.
Re: https://v9.bet/
A comment-bot trying to beat moderation by copying legitimate (legitimate enough) posts as camouflage, automatically selecting one that comments on the tradeoffs and costs of moderation.
heh.
Moderation with representation
Instead of the one-keyword kill trigger that a number of activists I know have run afoul of, here are some suggestions:
Arbitrary number greater than one on the trigger words
User/reader moderation where each reader can hide either the post or the poster
Again, some arbitrary number could be set for account warning or kill
I would like the account-kill trigger to go through a human with enough resources to tag Real People doing bad things and summarily punish them. The rest would get the standard "you are being stupid, stop it" warning.
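To make that concrete, here's a rough Python sketch of the thresholds described above. All the numbers, names, and return values are arbitrary placeholders, not a worked-out design:

```python
from dataclasses import dataclass

TRIGGER_THRESHOLD = 3   # "arbitrary number greater than one" of trigger words
HIDE_THRESHOLD = 10     # reader hide actions before an account warning
WARN_LIMIT = 3          # warnings before escalating to a human

@dataclass
class Account:
    name: str
    trigger_hits: int = 0
    reader_hides: int = 0
    warnings: int = 0

def review_post(account: Account, trigger_words_found: int) -> str:
    account.trigger_hits += trigger_words_found
    if account.trigger_hits < TRIGGER_THRESHOLD and account.reader_hides < HIDE_THRESHOLD:
        return "ok"
    account.warnings += 1
    if account.warnings >= WARN_LIMIT:
        # Never an automatic account kill: a human makes the final call.
        return "escalate to human review"
    return "warning: you are being stupid, stop it"

acct = Account("example")
print(review_post(acct, trigger_words_found=1))  # "ok" -- one hit isn't enough
```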
Re: Moderation with representation
This site uses user voting to hide comments, and the trolls still complain about how unfair it is, and how they are being censored.
Re: Moderation with representation
This is the thing. Sure, the automatic system did the thing, but someone put "skinhead" as an account-killing keyword into the system. And that person is a fuckwit.
Re: Re: Moderation with representation
To be fair, this article is literally the first time I learned that 'skinhead' had connotations other than 'racist loser', and since I doubt I'm the only one who didn't know that, while it was a mistake, it was an understandable one.
Re: Re: Moderation with representation
I seriously doubt the word "skinhead" alone was enough to trigger anything. Given that it's possible to use Markov chain Monte Carlo methods to create coherent phrases and sentences, and that software can identify plagiarism and give a probability of authorship by comparing an unknown sample to known samples, I would guess that the algorithm used in this case was much more sophisticated than simple keyword identification.
The people claiming it was just that one keyword were probably mistaken, unless I missed where Facebook admitted that. Apparently the algorithm, whatever it was, got it wrong, but the one thing an algorithm can't do is determine meaning, even if it tries to identify words and context. For that matter, people seem to have enough difficulty doing that, given how often people fail to recognize sarcasm.
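A tiny illustration of that last point: any purely term-based score, however sophisticated the weighting, assigns pages with opposite intent the same risk when they share the same trigger vocabulary (the pages and weights below are invented):

```python
# Two hypothetical page descriptions with opposite intent but a shared
# trigger token; no term-weighting scheme alone can tell them apart.
racist_page = "white power skinhead crew"
sharp_page = "Skinheads Against Racial Prejudice -- proudly anti-racist"

trigger_weights = {"skinhead": 1.0}

def term_score(text: str) -> float:
    return sum(w for term, w in trigger_weights.items() if term in text.lower())

print(term_score(racist_page), term_score(sharp_page))  # 1.0 1.0 -- identical
```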