Be Careful What You Wish For: TikTok Tries To Stop Bullying On Its Platforms... By Suppressing Those It Thought Might Get Bullied
from the this-shit-ain't-that-easy dept
Be careful what you wish for when you demand that internet platforms police the internet for any and all bad stuff. There was a lot of fuss and cringing when this story broke: TikTok's content moderation strategy included suppressing videos by disabled, queer, and fat creators.
Leaked documents reveal how TikTok hid videos of people with disabilities. Queer and fat users were also pushed out of view.
No matter how you look at it, this looks bad. And for good reason. But, as the company itself claims, it had good intentions behind this, even if the execution was atrocious. There have been tons of reports of bullying on the platform -- and as with so many social problems that technology has made more widely visible, the first reaction of many is to blame the tech platform and to demand that it "fix it."
And, a la the infamous paperclip maximizer thought experiment, what's the most efficient way to stop bullying? Some figured it might be to hide the likely-to-be-bullied rather than the actual bullies:
The relevant section in the moderation rules is called "Imagery depicting a subject highly vulnerable to cyberbullying". The explanation says that this covers users who are "susceptible to harassment or cyberbullying based on their physical or mental condition".
According to the memo, mobbing has negative consequences for those affected. Therefore, videos of such users should always be considered as a risk and their reach on the platform should be limited.
TikTok uses its moderation toolbox to limit the visibility of such users. Moderators were instructed to mark people with disabilities as "Risk 4". This means that a video is only visible in the country where it was uploaded.
And, yes, there is a very reasonable argument that the content moderation team at TikTok/ByteDance should have recognized that this is a horrible way to deal with bullying. But you can see how those desperate to deal with "the bullying problem" might end up thinking that this is the simplest path to getting people to stop screaming at them about bullying.
This is a key point that we keep trying to raise in the mad dash currently happening to put responsibility on platforms to "clean up" whatever mess politicians and the media see. There's this weird belief that the platforms can wave a magic wand and make bad stuff go away -- when the "easier" solution (if a morally questionable one) is to just figure out a way to hide the real problems or sweep them under the rug.
This is why I keep trying to argue that if we're highlighting societal problems that are manifesting themselves on social media, expecting tech platform companies to magically solve those societal problems is not just going to fail, but it's going to fail in spectacular and awful ways. TikTok's "hide the people we think might get bullied" approach is just one example of sweeping a societal problem under the rug to avoid having to answer for it.
Unfortunately, I fear most people will just blame TikTok for it instead.
Filed Under: bullies, bullying, content moderation, content moderation at scale
Companies: tiktok
Reader Comments
Well that's one way to deal with bullies/trolls...
... take out their targets before they get the chance. As counter-bully tactics go, it's certainly unique, at least.
Unfortunately, I fear most people will just blame TikTok for it instead.
While the social/political pressure and those pushing it to 'Do Something' certainly carry the majority of the blame, in this case TikTok deserves a good portion of the blame too. Desperate or not, the strategy they went with here is beyond absurd, and if anything it seems likely to encourage bullies/trolls, as it provides a way for them not just to act like jackasses but to get content they don't like restricted, adding insult to injury for their would-be victims.
People need to stop losing their minds, heaping blame on the wrong targets, and demanding the impossible, to be sure. But that doesn't fully absolve companies/platforms of responsibility when they respond with actions that make the problem worse, and if nothing else they should be called out so that others don't follow suit down the line.
[ link to this | view in chronology ]
Re: Well that's one way to deal with bullies/trolls...
It's not that unique - schools have been doing this same tactic for many decades. Separate the people who might get bullied into different classes, then ignore any bullying until the bullied party throws a punch, then expel the bullied party. It's always much easier to "deal" with the few bullied people than all the bullies. Less screaming from A-Type parents as well. (A-Type in this case meaning Asshole).
[ link to this | view in chronology ]
Re: Well that's one way to deal with bullies/trolls...
Our schools have been this bad for decades, but "Kingston-area parent believes elected trustee was punished for trying to represent him" (from a local TV station, CKWS-DT 11: globalnews.ca/news/6258432/ldsb-school-trustee-censure-reaction/) suggests that they've reached a new low. A teen was being bullied, the vice-principal of the school responded by asking the victim point-blank "Are you gay?", and the victim's parent complained to a school board trustee. The trustee DID THEIR JOB by enquiring at the school as to what was going on, WAS PROMPTLY CENSURED BY THE REST OF THE BOARD, and was forced to apologise for interfering in the day-to-day operation of the system. The offending instructor, meanwhile, was promoted to headmaster.
That's the same school board (or a successor) to the one in which a school did nothing when I was being bullied in the early 1980's and told me to ignore the problem because the bullies "just want attention". When I fought back, they suspended me for two days.
Clearly, nothing has been learned. All that has changed post-Columbine is that it's possible to be kicked out of class for mentioning Columbine High School. The bullying, meanwhile, continues unimpeded. It's teaching students a valuable lesson for when they enter a real world where bullies with money and lawyers silence their victims with non-disparagement agreements, non-disclosure agreements and strategic lawsuits against public participation... and in which Russia chooses the president of the Excited States over the objections of 2,868,692 Americans (the gap in the popular vote, where even a weak candidate is better than this).
[ link to this | view in chronology ]
Re: Re: Well that's one way to deal with bullies/trolls...
Might Makes Right is the problem here. We haven't learned a damn thing and we don't want to.
[ link to this | view in chronology ]
Re: Re: Well that's one way to deal with bullies/trolls...
EXCELLENT POINT.
This is really how it works in the real world. Bars are the same, policing is the same. Nearly every mass shooter/butter knifer/incel car crasher is the by-product of this social contagion.
The victims are most frequently expelled/arrested/disciplined/blamed.
It's bizarre to expect Socmed to somehow overcome that social problem by itself, especially when a huge portion of Socmed harassers are actually institutional bullies and cyberstalkers from within the military/private contractor CVE milieu, and even the corporations themselves.
[ link to this | view in chronology ]
Re: Re: Re: Well that's one way to deal with bullies/trolls...
"Nearly every ( badguy) is the by-product of this social contagion"
This is an amazing revelation; have you published yet?
[ link to this | view in chronology ]
Re: Re: Re: Re: Well that's one way to deal with bullies/trolls.
That's great, coming from one of you in a thread about - get this - bullying.
Anything to say of substance? Yeah, I didn't think so.
The cognitive dissonance on your part is itself stifling, but add into that you trolling/cyberstalking my every post, and there you have it.
None of those that you posit as bad guys (aside from the fact that some are women or girls, like Sol Pais) was convicted of any crime, nor openly accused either, so your thesis is the equivalent of justifying lynching.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Well that's one way to deal with bullies/tro
It was a simple question; why are you so defensive?
You made a claim, provided nothing in support of same, and then protest a lack of substance - lol.
and you think I am bullying you - wow
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re: Well that's one way to deal with bullies
Defensive? No, not in the least, because I have repeatedly supported that claim.
But you could say that I am AWARE that an extremely high percentage of ACs online are potentially dangerous.
And so, I am properly responsive to the patterns of military/police/intel/NGO cyberstalking on forums just like this.
https://www.computerworld.com/article/2475679/online-gaming-surveillance--so-many-nsa---cia-spies--they-were-spying-on-each-other.html
I haven't published since 2009, other than my blogs.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re: Re: Well that's one way to deal with bul
"an extremely high percentage of ACs online are potentially dangerous"
How does one measure the quantity of anonymous cowards surfing the internet at any particular instant and what specifically makes them dangerous?
What does "properly responsive" mean in this context?
You are not feeling defensive in the least, but accuse others of cyberstalking - lol.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re: Re: Re: Well that's one way to deal with
Yeah, some 40% of ACs online, and especially on Twitter, are often military/intelligence/police affiliated profilers and behavioral analysts who weaponize words, and apply psychic driving techniques on unsuspecting web kiddies, in order to create pretext and predicate to then manufacture terrorists, by endlessly harassing these guys under the watchful eyes of the Good Guys®.
One recent example was the Pensacola shooter, who was being cyberstalked by none other than FBI-Mossad-CIA affiliated Rita Katz and her Muslim Hatin® organization SITE Intelligence, AFTER he filed a sexual harassment complaint against his trainer, and then had to endure six more months of juvenile bullying there, finally targeting his aggressors.
These intel sadists/sickos/whackjobs are nearly ALWAYS ironically close to mass shooters, butter knifers, and incel car crashers' lives immediately before they go ballistic; and they frequently fill the ranks of those guys' Twitter/Facebook/other Socmed.
This pattern is SO CLEAR that these guys' web presence gets scrubbed immediately following those events, in order to hide this pattern of malicious and outrageous government conduct, cuz turrerisms depend on religious mythmaking, not fact or empirical evidence.
So, measurement in this case could be derived from the immediate information deficit that follows these events, compared to the OTHER group of ACs online who always seem to have screencaps for the hungry media; and many of THOSE people are exactly who I say they are, whoever they are.
Only problem is, we can't get that data after the fact to analyze the content and commenters, because CVE is designed to avoid scrutiny of its practices. So I work with what I have.
So, let's start with your binary supposition about offense/defense.
In re: threat, yeah, I have been cyberstalked numerous times, including dangerous and life-threatening activity that then moved offline, by people like you, who exhibit patterns of behavior in forums that also exhibit "forum behavior."
So, I know the patterns, and you are exhibiting them. YOU and your commenting style: lack of overall substantive engagement, lack of empathetic response, Freudian projector rolling like a remote viewing station in the Panoptical sea of data, etc. Yes, YOU personally fit a pattern.
Then, because you fit a pattern that exists in psychological profiles OF profilers and provocateurs, from both weaponized psychology and that other kind of psychology that actually helps people (and that pattern is one that can be mimicked and replicated by others), I applied that.
I mean, for all I know, you're just another pimple-faced twat with an exploding pork burrito in your cumshoot. Maybe you were conceived on your mother's period.
But those bastards usually give up by now, having been easily pacified by said burrito.
So "properly responsive" in this context means that I am in a relative position of security, with zero actual fear of you, knowing that I am using YOU to set an example to OTHERS who are also watching this, and YOU.
But especially remember: I am utilizing demonstrative speech, while you began as 1. an illiterate or disingenuous tone troll, 2. progressed to a flagger, and then 3. an accuser.
While each of these roles deserves its own essay, the presence of all three in rapid succession fits a pattern.
See how that works?
See how that works, single celler?
See how that works HEY WHY IS THAT BLACK HELICOPTER FOLLOWING ME NOW?!
I make no claim that profiling weaponized commenters, air-gapped comment systems, and forum analysis that profiles the profilers is anywhere near an exact science, but ROGS Analysis is getting much, much better at weeding them out, whoever they are.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re: Well that's one way to deal with bullies
You gonna reply, or leave this thread for posterity and later analysis?
[ link to this | view in chronology ]
Re: Re: Re: Re: Well that's one way to deal with bullies/trolls.
LIBERATE COWARDS WHO USE GENDERED PRONOUNS WHEN DESCRIBING BAD BEHAVIOR OR THIS AC COWARD GETS A PACK OF OSCAR MEYER WEINIES CRAMMED UP ITS ASS!©
-I REALLY MEAN IT THIS TIME
[ link to this | view in chronology ]
Re: Well that's one way to deal with bullies/trolls...
If the goal is to stop bullying, is there another way? Everything else would be a reaction to bullying that had already happened. That's part of the "careful what you wish for" thing; the idea that a wish for world peace could be fulfilled by eliminating humanity is much older than the paperclip maximizer.
[ link to this | view in chronology ]
Yes, it's called 'punishment for acting like a thug'
If the goal is to stop bullying, is there another way?
'You've been reported for abusive behavior, and after investigation it has been determined that the claims were valid. As this is your first offense your account will be suspended for X number of days. Repeat offenses will increase this amount, and after X number of repeat offenses your account will be terminated, with any attempts to create a new account to bypass this block resulting in immediate termination of any such accounts.'
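(Purely as illustration, that escalation ladder is simple enough to sketch in code. This is a hypothetical sketch in Python; the strike thresholds and suspension lengths are invented for the example, not anything TikTok or any other platform actually does.)

# Hypothetical sketch of the escalating-penalty policy described above.
# Strike counts and suspension durations are invented for illustration.
from dataclasses import dataclass

SUSPENSION_DAYS = [3, 7, 30]        # grows with each repeat offense
MAX_STRIKES = len(SUSPENSION_DAYS)  # beyond this, the account is terminated

@dataclass
class Account:
    name: str
    strikes: int = 0
    terminated: bool = False

def apply_validated_report(account: Account) -> str:
    """Escalate only after a report has been investigated and upheld."""
    if account.terminated:
        # Per the policy above, new accounts created to bypass the block
        # would also be terminated immediately.
        return f"{account.name}: already terminated"
    account.strikes += 1
    if account.strikes > MAX_STRIKES:
        account.terminated = True
        return f"{account.name}: account terminated"
    days = SUSPENSION_DAYS[account.strikes - 1]
    return f"{account.name}: suspended {days} days (strike {account.strikes})"

bully = Account("repeat_offender")
for _ in range(5):
    print(apply_validated_report(bully))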
[ link to this | view in chronology ]
Re: Yes, it's called 'punishment for acting like a thug'
a.k.a. the system often used by Minecraft servers...
No idea how that works at scale, but given that the player base of that game is filled with kids and adults alike, maybe we could learn something from them. Granted, you still need attentive staff.
[ link to this | view in chronology ]
Re: Yes, it's called 'punishment for acting like a thug'
That's a reactive measure to bullying, not a preventative one.
Lots of sites have tried it, and found it to be not so easy. Maybe people can work around the bans, maybe there are too many bullies, maybe nobody agrees what constitutes "bullying"...
[ link to this | view in chronology ]
Re: Re: Yes, it's called 'punishment for acting like a thug'
'If you do X, you will suffer a punishment for it' seems pretty preventative to me. Unless you pre-vet everything, any punishment will by necessity be applied after the bullying has taken place, so your best bet to cut down on it is to make clear that acting in a particular manner will come with penalties, so that those who might be tempted to act that way have to weigh the desire against the concern that it could cost them.
[ link to this | view in chronology ]
Re: Re: Re: Yes, it's called 'punishment for acting like a thug'
Because getting kicked off platforms has been so effective at preventing Sargon of Akkad from being an asshole. /s
Outside of the obvious examples of actual bullies using sock puppet accounts to maintain campaigns of harassment even as they get suspended and banned, you seem to misunderstand how bullying of this type is a malformation of normal "ribbing" behaviors in which reciprocity cannot be achieved. That is part of why those who do not suffer bullying have a difficult time understanding the problematic behavior: they see the normal behavior and do not perceive how that behavior is bad. This means your behavioral rules may get the same backlash we keep seeing over moderation overall. Inconsistent application leads to false positives and false negatives and can embolden bad actors.
[ link to this | view in chronology ]
Re: Re: Yes, it's called 'punishment for acting like a thug'
There is no such thing as a preventative measure to bullying. There never will be. All measures to deal with bullies will be reactive. If those measures are severe enough then they may act as a deterrent but there is nothing you can do to prevent it that is not discriminatory to someone else.
[ link to this | view in chronology ]
Re: Re: Well that's one way to deal with bullies/trolls...
Except this doesn't stop bullying; it participates in the behavior by telling the potential victims that they don't have the right to speak up. Or appear in public. Or exist online. They should just hole up somewhere inconspicuous.
That was not a solution, it only serves to mask the problem by pretending it doesn't exist since the possible victims are not there to complain about it.
Simply put, it is a case of "worse than doing nothing".
[ link to this | view in chronology ]
Re: Re: Re: Well that's one way to deal with bullies/trolls...
If you consider preventing people from posting certain types of content "bullying", then people are getting bullied on any website that bans "bad" language, pornography, whatever. (Based on past Techdirt stories, sex workers really are getting widely bullied.)
Of course this was a bad idea. But it has that genie-wish "malicious compliance" level of logic to it.
Wouldn't banning bullies also just be masking the problem? Sure, it makes the website more pleasant for the remaining users, but it does that by giving them the illusion of a bully-free world. Except the bullies will have just moved somewhere where people are less likely to point out their bad behavior.
[ link to this | view in chronology ]
Re: Re: Re: Re: Well that's one way to deal with bullies/trolls.
Two problems in your response.
It wouldn't be bullying just by itself. It's more of an "adding insult to injury" type of conduct. But it's definitely a wrong move, because you tell them "we don't want you here". Even if that's not the intent of the service provider, they are participating in the bullying by actually enforcing the very message the bullies were already sending.
I only agree in that there was likely no actual malice in this move. It's just a way to "remove bullying" in a seemingly efficient way. Except it's not because you keep the actual bad elements in, and send the wrong message to both sides, namely that "bullies are free to target another group of people... that you would then have to also ban".
[ link to this | view in chronology ]
The lady behind GlitterAndLazers has figured out how to avoid the censorship: run a Kool-Aid "ad" disguised as a Christmas video.
The damn 30-second nightmare is everything wrong with entertainment on this planet: fat-shaming comments on an obese person pushing a sugar-laced drink on a social media platform run by companies and advertisers justifying censorship based on imaginary threats.
How the fuck did it get this far.
[ link to this | view in chronology ]
Re:
We let corporations take over the Internet. Now it's just another version of TV, only useful to push products.
[ link to this | view in chronology ]
Re:
Do you feel bullied by those ads?
[ link to this | view in chronology ]
Re: How the fuck did it get this far.
How the fuck did it get this far?
Simple. The sheeple drank the Kool-Aid.
[ link to this | view in chronology ]
Re: Re: How the fuck did it get this far.
It was Flavor Aid.
[ link to this | view in chronology ]
Re: Re: Re: How the fuck did it get this far.
I thought it was Brawndo.
[ link to this | view in chronology ]
Sounds like preventive bullying to me
Get them first!
[ link to this | view in chronology ]
Well, that's illegal for everyone involved.
Their first big misstep is documenting people's suspected disabilities. It's enough to get them shut down.
[ link to this | view in chronology ]
Re:
Citation needed. Remember we're talking about Chinese law here.
[ link to this | view in chronology ]
Re: Re:
Are you sure it's Chinese law? I don't know where their servers are based.
Chinese nationals are indicted in the US all the time, though they often don't get arrested if they live and stay in China.
I suppose that if Chinese executives living in China needed to be indicted, the proper process would be through the US-China mutual legal assistance treaty, though usually companies try to comply with the laws of the country their target audience is in.
[ link to this | view in chronology ]
Re: Re: Re:
No, but it's a Beijing-based company and Americans tend to be averse to the idea that foreign laws should constrain the services they use themselves. It would be a double-standard to try to apply American laws to TikTok.
[ link to this | view in chronology ]
Re: Re: Re: Re:
The US actually does indict Chinese nationals in China. It doesn't matter to me much one way or the other, but the DOJ does it all the time.
It's not really a double standard.
Anyway, I can't control who gets indicted or how lawsuits play out. I just pointed out that there is a law against it, just like there is a law against everything.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re:
When China tells Americans that something they've posted is illegal in China (e.g. the "Tank Man" photo), American companies and American courts pay little attention to that. Telling a Chinese company their policies violate American law, and expecting them to care, would be a double standard.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re:
That's true. I don't expect industrial or any other kind of espionage to stop anywhere.
Most countries encourage their citizens to do it within reason.
I doubt their home government will care.
[ link to this | view in chronology ]
Remove one persecuted group from visibility on the platform and the people doing it will become emboldened and go after the next to try and make the site conform to their vision of what the world should be. We've seen how easy it is for organised groups to game the system to punish people on other platforms, after all.
[ link to this | view in chronology ]
So TikTok's solution to bullying is the same as 8-Bit Theater's Black Mage's solution to disease ("[You can stop an epidemic by killing all the patients, but then some other jerk will just get sick]").
Personally, I don't expect TikTok to solve a problem the rest of human civilization has never been able to solve in recorded history (even if it's a slightly narrower subset). However, this... I think this would generally be considered creating more problems rather than solving any.
[ link to this | view in chronology ]
Like "shoot the hostage" from speed.
https://www.youtube.com/watch?v=U5P7Ck2aWFA
[ link to this | view in chronology ]
Just like Public Education
Punish the Victim for upsetting the status quo
[ link to this | view in chronology ]
Hello Mike, interesting logic, but it doesn't become true through repetition: the fact that companies make wrong decisions again and again does not imply that content moderation is not possible. (But please feel free to explain.) It only shows that companies make wrong decisions again and again.
[ link to this | view in chronology ]
Re:
Perhaps you can enlighten us how to make the right decisions then. How do you implement content moderation in such a way that everyone is happy with it?
Just so you know, as long as some group isn't happy with the solution, it means it doesn't work.
With that, I'll leave you to your Sisyphean task.
[ link to this | view in chronology ]
Of course content moderation is possible. But effective content moderation isn’t possible at the scale of a service like TikTok. Because that kind of moderation doesn’t scale.
[ link to this | view in chronology ]
Pick 1 blue marble out of 10 red? Easy. From 100,000 red though?
(But please feel free to explain.)
No problem.
Moderation at large scale is 'impossible' both because people are demanding the impossible (whether that's keeping 'bad stuff' off without significantly impacting 'good stuff', despite the fact that what falls into each category can often change or be context-based, or blaming the platform for the human nature of those that use it), and because the sheer scale means that even superhumanly effective moderation would still result in vast numbers of false positives and missed negatives.
Assume 100 posts per day, with moderation that is 99% accurate, resulting in either one 'bad' post staying up or one 'good' post mistakenly taken down every day. Even then you'd still have people complaining that said 'bad stuff' was able to be viewed for a short amount of time and demanding that the platform 'Do Better'.
Now, add a few zeros, so that instead of 100 posts per day the platform is faced with 1,000,000. Sticking with the same impossibly good 99% accuracy rate, that leaves 10,000 'failures' of moderation daily: either bad stuff staying up, good stuff being taken down, or more likely a mix of the two. Given the sheer amount of content involved, 'only' botching ten thousand posts would be an impossible accomplishment, yet you can be sure people would still be losing their minds, pointing to the thousands of 'bad posts' that slip through daily and demanding that the platform 'Do Something'. Even a near-perfect moderation effort still wouldn't be enough.
Content moderation 'is not possible' at large scale because the sheer volume ensures that even impossibly accurate moderation will still produce large numbers of mistakes in one direction or the other, whether that's letting bad stuff stay or good stuff getting the boot. While this doesn't mean platforms shouldn't try (within reason) to deal with problems, those demanding that they do so need to understand that far too often what they are demanding simply isn't reasonable or even possible, and scale back their demands accordingly.
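(The arithmetic above is easy to check for yourself; here's a minimal Python sketch using the same hypothetical figure. A 99% accuracy rate is far better than any real system achieves.)

# Back-of-the-envelope check of the scale argument above, using the
# comment's hypothetical 99% accuracy figure.
def expected_mistakes(posts_per_day: int, accuracy: float) -> float:
    """Expected daily moderation mistakes: bad posts left up plus
    good posts wrongly taken down."""
    return posts_per_day * (1.0 - accuracy)

for posts in (100, 1_000_000):
    print(f"{posts:>9,} posts/day at 99% accuracy -> "
          f"{expected_mistakes(posts, 0.99):,.0f} mistakes/day")
# -> 100 posts/day: 1 mistake/day; 1,000,000 posts/day: 10,000 mistakes/day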
[ link to this | view in chronology ]
Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red tho
That kind of logic can work for something with well-defined rules, like when we say that a firearm is never allowed in the passenger cabin of a commercial flight that goes into, out of, or over the USA. For content moderation, a numeric "accuracy rate" is simply nonsense. Every time we think a rule is clear, someone finds a counterexample to take it into a gray area. We need to agree on a fully objective set of rules before we can pretend to calculate the accuracy.
[ link to this | view in chronology ]
And now you know why effective content moderation at scale is impossible.
[ link to this | view in chronology ]
Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red tho
There is a bit of a flaw in the numbers you listed. To evaluate this properly we also need to know the rate of posts_that_should_be_moderated.
In your example of 100 posts per day, how many should be moderated? If we assume that 10% should be moderated (far higher than the reality on most platforms, I suspect), then the 99% accuracy rate will let only 0.1 "bad" posts through, effectively none. For the 1m-post example, 100k should be moderated and 1k will be let through.
A problem in moderation at scale still exists, but we need to avoid this kind of inaccuracy, lest we give naysayers ammunition or an obvious means by which to dismiss the whole problem.
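(For what it's worth, folding the base rate into the earlier sketch also surfaces the other failure mode, good posts wrongly removed, which dominates when "bad" posts are rare. Same hypothetical 10% / 99% figures as above.)

# Extending the earlier sketch with the commenter's hypothetical 10%
# base rate of "bad" posts. At a low base rate, most mistakes are good
# posts wrongly removed rather than bad posts slipping through.
def split_errors(posts: int, bad_rate: float, accuracy: float):
    bad = posts * bad_rate
    good = posts - bad
    missed_bad = bad * (1.0 - accuracy)     # bad posts let through
    flagged_good = good * (1.0 - accuracy)  # good posts wrongly removed
    return missed_bad, flagged_good

for posts in (100, 1_000_000):
    missed, flagged = split_errors(posts, bad_rate=0.10, accuracy=0.99)
    print(f"{posts:>9,} posts: ~{missed:,.1f} bad let through, "
          f"~{flagged:,.1f} good wrongly removed")
# -> 100 posts: ~0.1 and ~0.9; 1,000,000 posts: ~1,000 and ~9,000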
[ link to this | view in chronology ]
Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red
How do you decide which posts should be considered for moderation? Making that decision is itself a form of moderation, and subject to false positives and negatives.
[ link to this | view in chronology ]
Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red
A fair point; I typed it out rather quickly, so it was a little rough and could certainly do with some polish. Though even in your correction you've highlighted the problem rather well: 100K posts moderated daily is still an insane number to deal with even if it's only a fraction of the total, and even then you'd be looking at a solid thousand 'mistakes' on a daily basis with impossible moderation 'accuracy'.
[ link to this | view in chronology ]
Re: Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000
Moderation has to consider every post made and decide whether it stays up, is taken down, or maybe is passed to a human for a decision. Therefore the number of posts to be moderated is the number of posts made.
[ link to this | view in chronology ]
Re: Re: Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100
That's not how moderation works. Moderation is a human activity.
There are automated filters on the large platforms. These do their thing to some degree of accuracy. Undoubtedly they sometimes find things that are referred to humans for evaluation. Atop that are the reports coming in from users about both missed posts and improperly filtered posts. That subset is moderated; the rest were filtered. There is an important difference between those two.
Therefore the number of posts to be moderated is a small fraction of the whole.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Pick 1 blue marble out of 10 red? Easy. From
As filters are used to replace human moderators, and are making decisions on what to do with a post (including maybe passing it to a human), calling filters different from moderation is a distinction that does not matter as far as the poster is concerned. A filter is simply an automated moderation system. Also, it usually looks like the human moderators deal with reported posts, and maybe challenges to moderation/filter decisions.
[ link to this | view in chronology ]
Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red tho
(commenting on title, not post)
I don't know... pretty sure some of those marbles are purple.
[ link to this | view in chronology ]
Re: Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red
Add to that task a time limit, because every second another 1 blue and 9,999 red marbles are dumped into the pile. But you still must be sure to only ever pick the blue marble and no red ones, so grabbing handfuls and hoping the blue one is in the pick won't work. And if a blue marble stays in the pile longer than 1 hour, you have also failed. I mean, that's plenty of time, right? Oh, but you are human too, so at some point you must sleep, eat, and do all the other human things you normally do. Meanwhile, each second, more marbles are being dumped into the pile.
Yeah, try to keep up with that with 100% accuracy.
It would be much better if people just became more civil and respectful enough to not throw in their blue marbles all the time. But that requires self responsibility and I don't think the people of the world are adult enough to handle that.
[ link to this | view in chronology ]
Re: Pick 1 blue marble out of 10 red? Easy. From 100,000 red tho
Good enough analogy.
Can be improved with a reminder that those marbles are not just straight "blue" and "red", but shades of blue, red... and purple. Not to mention that some of them will appear more blue or red depending on the lighting.
An experiment along these lines was described in an article here: a few "posts" were given to a team of "moderators" to decide whether they should be moderated or not. I don't remember anything about a time limit, and there were only a few samples of content, so "scale" was not a factor.
It still ended with different results for each sample. It was impossible to get the whole team to make unanimous decisions about approving or rejecting the content.
So, if I give you the task of rejecting all blue marbles and keeping all red marbles, what are you going to do about this single marble I hand you, which happens to be purple?
Perfectly moderating at scale is indeed impossible because... of the scale.
But it's already impossible because of subjectivity.
It shouldn't stop sites from improving their moderation process, but the public should stop asking for the impossible. Notify the site when an error is made, and hope they have the resources to investigate. On the other hand, if they don't have the resources, they should not promise that they will. They should definitely be clear about their limits so that the public can set their expectations accordingly.
[ link to this | view in chronology ]
Re:
Since you brought it up...
What do you consider to be Content Moderation? Describe it with several examples so we all can understand your point of view.
Please describe the things that companies get wrong again and again and again and show why these companies do nothing to correct their errors.
[ link to this | view in chronology ]
Re:
Masnick's Impossibility Theorem: Content Moderation At Scale Is Impossible To Do Well
(the thesis being that nobody agrees on what "well" should mean, because you'll never be able to get everyone to agree on everything)
[ link to this | view in chronology ]
I'm going to have to spend some time processing this one.
On the face of it... I mean, the folks that are overly sensitive make up a very tiny portion of any social network. Is it right to give that tiny fringe minority cancelling power? Doesn't it make more sense to protect those sensitive people by shielding them from the real world? Wouldn't that be about ten times easier?
It all seems so elegant, and so much easier than doing the opposite, that it still feels like I'm missing something.
[ link to this | view in chronology ]
Re: I'm going to have to spend some time processing this one.
Why would we want to shield Donald from criticisms?
[ link to this | view in chronology ]
The obvious thing to do is kick bullies and bigots out, then tell them to go fuck themselves somewhere else. I don’t know why you can’t grasp how that’s the best solution, but here we are.
[ link to this | view in chronology ]
Re:
"The obvious thing to do is kick bullies and bigots out, then tell them to go fuck themselves somewhere else"
Yes, but unfortunately Twitter lets bullies and bigots run amok, just because US Presidents appointed by High Lord Putin himself should benefit from a Nixonian-style "executive privilege" and remain beyond account.
It makes it really hard to keep a straight face when Melania claims to be against bullying.
[ link to this | view in chronology ]
Re: Re:
Bibi, do you mind if I call you that, Mr. Netanyahu?
Please explain why you are so mad at Russia and The Chosen One, considering all the special favors Trump has curried for the Israel lobby and you, specifically?
Then, there's this recent evil executive order that no one is talking about:
https://www.breakingisraelnews.com/141446/trump-sign-executive-order-declaring-jewish-people-nation-just-written-genesis/
[ link to this | view in chronology ]
Re:
You effectively can't kick the bullies and bigots out of a platform like TikTok. Bans are easily circumvented. Moderating the comments and content of bullies and bigots at scale is no easier than moderating the content of the non-asshole users.
[ link to this | view in chronology ]
For the record, I said it was the obvious thing to do. I never said doing it would be easy.
[ link to this | view in chronology ]
Re:
One might even say that content moderation at scale is impossible.
[ link to this | view in chronology ]
Re:
You also said "that’s the best solution". It's not so obvious to me. Banning people feeds into the us-vs.-them mentality, and the narrative of people being persecuted for their views. It does nothing to do address the systemic problems that lead people to take those views and make those comments. Maybe it will even hurt, when they move to an echo-chamber of a platform where there's nobody to refute their views (or, if it's a closed platform, even to watch what they're up to).
[ link to this | view in chronology ]
Re: Re:
Bullying is a societal problem, not a systemic one on TikTok.
[ link to this | view in chronology ]
Letting abusive people stay on a platform to further abuse people won’t help anything, either, so you tell me what the fuck should be done.
[ link to this | view in chronology ]
Re:
I'm not the AC you're looking for (hand wave), but I gave this some thought anyway.
At least for text-heavy content, a platform could invest in smarter content filtering that can detect things like offensive comments and prevent them from being posted. This would work about as well as it does in most video game chat lounges, but it would be an improvement over the current situation.
Of course, my idea has the usual privacy concerns of flagging content, and the questions of who can make the company block content they don't want other people to see, etc. etc. You know the drill.
[ link to this | view in chronology ]
Re: Re:
You can catch bad words, but can you catch context with auto filters? Because there's always a Scunthorpe problem.
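(To make the Scunthorpe problem concrete, here's a toy sketch. The banned-word list and examples are invented, and real filters are far more sophisticated, but they still hit the same wall on context.)

# Toy illustration of the Scunthorpe problem: a naive substring filter
# flags innocent text, and the obvious fix just trades one error for
# another. The banned-word list here is hypothetical.
import re

BANNED = ["cunt", "tit"]

def naive_filter(text: str) -> bool:
    """Flags text if a banned word appears anywhere, even inside words."""
    lowered = text.lower()
    return any(word in lowered for word in BANNED)

def word_boundary_filter(text: str) -> bool:
    """Flags only whole words: fewer false positives, but deliberate
    misspellings slip through, because context is still not understood."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(word)}\b", lowered) for word in BANNED)

print(naive_filter("I live in Scunthorpe"))          # True  (false positive)
print(word_boundary_filter("I live in Scunthorpe"))  # False (fixed)
print(word_boundary_filter("what a c-u-n-t"))        # False (evasion missed)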
[ link to this | view in chronology ]
Re:
The obvious thing to do is kick bullies and bigots out, then tell them to go fuck themselves somewhere else. I don’t know why you can’t grasp how that’s the best solution, but here we are
First define the bullies and bigots. There's the rub. For every Progressive complaining about abuse towards [protected group], there's a gang of right-wingers complaining they're being discriminated against. See Hobby Lobby, Chick-Fil-A, etc., for details.
Yes, I know it's a case of "Stop hitting my fist with your face!" but whoever holds the levers of power can say and do what they want with impunity, so let's not be giving the tools to make our lives worse.
[ link to this | view in chronology ]
The difference between people boycotting/shittalking Chick-fil-A and people trying to push, say, queer people out of the public eye is that only one group is trying to actively hurt an entire segment of the population.
[ link to this | view in chronology ]
Re:
Boycotting/shittalking is done with the explicit intent of hurting whoever you are boycotting/shittalking. So both groups are trying to actively hurt an entire segment of the population. The only actual difference is which group you happen to agree with.
[ link to this | view in chronology ]
"I'm not shopping there" isnt the same as "I think gay people should be third-class citizens because God".
[ link to this | view in chronology ]
Re:
Boycotting is a perfectly legitimate way to express your displeasure. I won't read any newspaper owned by Rupert Murdoch, and avoid doing anything that might enrich him further. Most lefties would agree with me on principle because he's a horrible man. I also slag him and his collection of rancid rags off as and when the subject arises.
If every left-winger/progressive wants to boycott Chick-Fil-A because they think the company is run by horrible people, let them. I can understand people wanting to protect their interests and wouldn't criticise them for doing so.
[ link to this | view in chronology ]
Re: Re:
Are you willing to apply that statement about boycotting to the #BDS movement also?
[ link to this | view in chronology ]
Re: Re:
^This.
only one group is trying to actively hurt an entire segment of the population.
People of faith are being clobbered because we're not a protected group. I'm sorry for your troubles and agree that it's no fun being a target. I've been one myself and can sympathise. I don't think it's right to bash any group of people and make their lives a misery. I daresay you'd say the same thing, but at that point terms and conditions apply. Anyone we believe to be actively harmful to vulnerable people tends to be put on the unwritten list of people we won't bother protecting, e.g. white supremacists.
Unfortunately, deciding who deserves to be put in a protected class is subject to the prevailing fashions of the day. It's currently fashionable to bash people of faith, radical feminists (who annoy me too, as it happens; I hate cruelty), and people who pick a side in the culture wars, depending on who's in the majority/minority.
I often vote your comments insightful, Stephen, but today I think you're calling it wrong; the story is much bigger than your own experiences.
[ link to this | view in chronology ]
Re:
Chick fil A is delicious.
[ link to this | view in chronology ]
Re: Scarlets Web
Brand their foreheads with the letter B!
Make them shave their heads, and wear a red, five pointed star patch!
[ link to this | view in chronology ]
Re: I'm going to have to spend some time processing this one.
What you're missing is that by excluding your "overly sensitive folks" entirely, you are engaging in the worst kind of bullying. Everybody deserves the same rights and abilities as everyone else, limited only by their own limitations and whether they've proven themselves incapable of interacting peaceably with others. This latter category are often known as sociopaths. Such as yourself.
[ link to this | view in chronology ]
Re: I'm going to have to flush this 10-20 times
“Is it right to give that tiny fringe minority cancelling power”
But enough about you little wannabe fascists.
[ link to this | view in chronology ]
Re: I'm going to have to spend some time processing this one.
Yeah, but your theory would upset the power imbalance of the tyranny of the minority, who coincidentally, of course, control 70% of the world's resources, aka the One Percent, whoever they are, and who are extremely sensitive about criticism of their shitty, genocidal, slaughtering ways.
I suspect these hidden entities might be pink, hairless talking rats, but I cannot yet provide evidence. Logically, only something that pathetic and scurrilous could be that sensitive.
[ link to this | view in chronology ]
Distilled, it comes down to this:
1) Do I placate the wishes of a tiny fraction of my users, and change the entire world for them?
or
2) Do I identify the handful of overly sensitive people and hide the world from them?
I'm really leaning towards two now. Imagine how much better Twitter would be.
[ link to this | view in chronology ]
Re: Distilled, it comes down to this:
False dichotomy?
[ link to this | view in chronology ]
Re: Distilled, it comes down to this:
You missed one:
3) Do I placate the advertisers at the expense of throwing the users, good or bad, under the bus?
That's the reality... these companies aren't in business to support free expression, justice, liberty, equality or anything else - they're in business to push intrusive display advertising in your face. They're a mercenary front, nothing more. And that's pretty much the entire commercial Internet.
[ link to this | view in chronology ]
Re: Re: Distilled, it comes down to this:
Free expression, justice, liberty, and equality are all intangible ideas, they're not products. How can a company be in business with no products?
[ link to this | view in chronology ]
Re: Distilled, it comes down to this:
Define 'overly sensitive'.
Remember, this plan included people with physical disabilities too. If anything, they'd rightfully be upset at being bullied for things completely out of their control. So the response is to isolate the potential victim (in a way that may not even stop the potential bully, might I add)?
Or is anybody who would get upset by bullying automatically 'overly sensitive'?
[ link to this | view in chronology ]
Re: Distilled, it comes down to this:
After the bullied leave, who do you think the bully will go after next? A different minority on your platform. Then after you drive them out, the bully picks the next group, and it keeps going until only the bully is left. Then the bully leaves and your platform is dead.
Congrats, you just demolished your business and income source because you didn't stand against bad behavior on your platform.
[ link to this | view in chronology ]
Re: Re: Distilled, it comes down to this:
Cyberbullying isn't going to kill a public internet platform as popular as TikTok. TikTok's problems with bullies and bigots aren't even as bad as Youtube comments. The trolls are outnumbered by a factor of thousands, if not more.
TikTok has hundreds of millions of users and is growing rapidly. Even if 1% of those users were bullies who spent their entire time on TikTok being awful to everyone else, they still could never reach a significant portion of the rest of the user base. Even when brigading, they can only manage to reach individuals or smaller groups in the community for any extended time, which is what they do.
So TikTok tried to make it harder to discover and find the people that are frequent targets. It's arguably the only moderation option that can be done proactively, though that choice should really be opt-in and left to their users.
[ link to this | view in chronology ]
possible but unlikely
It is possible to disagree without name-calling and hating, but it's unlikely, because our natural inclination is "self 1st".
[ link to this | view in chronology ]
Re: possible but unlikely
Fuck you, you're wrong.
(sorry, couldn't help it)
[ link to this | view in chronology ]
Unfortunately, I fear most people will just blame TikTok for it
That's easy enough - just suppress them too.
[ link to this | view in chronology ]
These cases of little babies, especially our boys, being framed, stalked, and harassed by law enforcement make any rational person puke.
https://m.newser.com/story/284457/suspect-in-murder-of-nyc-student-flees-on-way-to-questioning.html
[ link to this | view in chronology ]