There Is No Magic Bullet For Moderating A Social Media Platform
from the it's-not-so-easy dept
It's kind of incredible how frequently we see people who seem to think that social media platforms are so bad at moderating content because they just don't care or don't try hard enough. While these platforms absolutely can do a much better job (which, we believe, often involves giving end users more tools themselves), it's still amazing how many people think that deciding what content "belongs" and what doesn't is somehow easy. Earlier this month, Washington DC hosted the Content Moderation at Scale ("COMO") conference, a one-day event in which a bunch of companies revealed (sometimes for the first time) how they handle questions around content moderation. It was a follow-up to a similar event held at Santa Clara University back in February (for which we published a bunch of the papers that came out of the event).
For the DC event, we teamed up with the Center for Democracy and Technology to produce a live game for everyone at the event to play -- turning them all into a trust & safety team tasked with responding to "reported" content on a fictional social media platform. Emma Llanso from CDT and I ran the hour-long session, which included discussions of why people made the choices they did. The video of our session has now been posted, helpfully edited to remove the "thinking/discuss amongst yourselves" parts of the process:
Obviously, many of the examples we chose were designed to be challenging (many were based on real situations). But the process was useful and instructive. For each question, there were four potential actions the "trust & safety" team could take, and on every single example at least one person chose each option. In other words, even when there was pretty strong agreement on the course of action to take, there was still at least some disagreement.
Now, imagine (1) having to do that at scale, with hundreds, thousands, hundreds of thousands, or even millions of pieces of "flagged" content showing up; (2) having to do it when you're not someone so interested in content moderation that you'd spend an entire day at a content moderation summit; and (3) having to do it quickly, where there are trade-offs and consequences to each choice -- including possible legal liability -- and no matter which option you choose, someone (or perhaps lots of someones) is going to get very upset.
Again, this is not to say that internet platforms shouldn't strive to do better -- they should. But one of the great things about attending both of these events is that they demonstrated how each internet platform is experimenting, in very different ways, with how to tackle these problems. Google and Facebook are throwing a combination of lots and lots of people plus artificial intelligence at the problem. Wikipedia and Reddit are leveraging their own communities to deal with these issues. Smaller platforms are taking different approaches. Some are much more proactive, others are reactive. And out of all that experimentation, even if mistakes are being made, we're finally starting to get some ideas on what works for this community or that community (and remember, not all communities work the same way).
As I mentioned at the event, we're looking to do a lot more with this concept of getting people to understand the deeper questions involved in the trade-offs around moderating content. Setting it up as something of a game made it both fun and educational, and we'd love some feedback as we look to do more with this concept.
Filed Under: como, content moderation, game, tough choices, trade offs
Reader Comments
This sounds to me like a close approximation of hell.
Etc.
Well, it's the plain ordinary bullets of brute force censorship,
AND then, after doing NOTHING, Masnick has the chutzpah to say how hard moderating is!
And then fanboys (who may well be mostly astro-turfing) jump in and allege that they know exactly how Techdirt works and why I'm censored (it's the way I write, anything but censoring of my VIEWS), even though I've asked and get no official response.
Techdirt does "moderation" ONLY against dissenters and by sneaking rather than the right common law way: stated rules enforced on ALL, and any actions done out in the open.
What part of common law forces Mike to host your brain drool?
Re: Well, it's the plain ordinary bullets of brute force censorship,
Would you like some cheese with that whine?
Ever think that he doesn't do anything because he knows how hard moderating is?
How do you look in a mirror and take yourself seriously? Nothing in your post is based in any kind of reality or logical fact. The only reason you don't know what the rules are is that you 1) deliberately ignore them after being told, and 2) don't understand technology; otherwise you would know that the code and technology allowing websites like this to exist have predefined rules on how you can use them.
Re: Well, it's the plain ordinary bullets of brute force censorship,
You cannot force Techdirt admins to reveal their moderation tactics any more than you can force the site to host your speech. And your comments get hidden because they are mostly irrelevant to the article at hand, and that is because your comments are mostly a way for you to take out your anger at Techdirt and Mike Masnick because ¯\_(ツ)_/¯
Re: Well, it's the plain ordinary bullets of brute force censorship,
I'm not sure what the criteria are for outright blocking a comment here, beyond the usual spam justifications, but the fact that you are able to post your drivel is a clear sign that you are not being blocked or censored by TD staff or systems. However, you are clearly being flagged by the community, repeatedly. I could try to explain why that happens (you are an asshole), but let's just focus on the fact that you are being flagged yet your comments remain there. I just flagged you, by the way. Not sure if it's going to be hidden.
The system TD uses allows anonymous comments, so there's no reliable way of blocking someone specifically. Considering you have never contributed meaningfully to any discussion, and you are generally an asshole, it's pretty amusing that you are making such accusations. Oh well.
I'm just throwing out random thoughts; I wouldn't really know what to do, as I don't even have any blog/site/community to manage lmao
Re: Re: Well, it's the plain ordinary bullets of brute force censorship,
If it's spam, however detected, it goes to /dev/null.
If it's offensive, it gets flagged by the community and hidden, assuming Techdirt agrees with the community vote.
If it's great, it gets upvoted -- remember, there's a "best of the week" contest every week, with categories for both what Techdirt and what the crowd thought were the *best* comments.
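A minimal sketch of that three-outcome flow in TypeScript, purely for illustration -- the names, the flag threshold, and the staff-agreement check are all assumptions, not Techdirt's actual implementation:

```typescript
// Illustrative sketch only; none of these names come from Techdirt's code.
type ModAction = "discard" | "hide" | "feature" | "leave";

interface UserComment {
  isSpam: boolean;        // e.g. caught by an automated spam filter
  communityFlags: number; // how many readers clicked "flag"
  votes: number;          // insightful/funny votes from readers
}

// flagThreshold is a guess (a commenter below suggests roughly five flags).
function moderate(c: UserComment, flagThreshold = 5, staffAgrees = true): ModAction {
  if (c.isSpam) return "discard"; // spam -> /dev/null
  if (c.communityFlags >= flagThreshold && staffAgrees) {
    return "hide";                // hidden, but still viewable on click
  }
  if (c.votes > 0) return "feature"; // candidate for "best of the week"
  return "leave";                    // default: leave it alone
}
```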
Re: Well, it's the plain ordinary bullets of brute force censorship,
You don't dissent. You troll and throw immature fits - like this one. You try to turn one of the most open forums I've seen on a private website into your attention-fest, and act like it's one jackboot shy of Nazi Germany if anyone disagrees with you. I've dissented on several articles over the last ten-plus years of commenting. I've never had a single comment flagged by the community. You don't dissent in a respectful way. You don't do anything in a respectful way. You act like an entitled child in a chocolate factory.
"no statement ever from Masnick that uncivil remarks are not wanted"
You're making demands. Since you like to cite not-legal "common law" bullshit as if it dictates human interactions, cite what law, common or otherwise, requires Masnick to answer your questions. You haven't issued a subpoena. What legal right do you have to expect an answer?
Re:
I would say that part of the solution is never letting a social network service get as big as Facebook or Twitter. Moderation of a network that size cannot happen without shortcuts like keyword blocks that will run into the Scunthorpe problem.
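A toy illustration of that problem; the blocklist and matching logic here are made up and assume nothing about any real platform's filter:

```typescript
// The Scunthorpe problem: a substring match cannot tell an innocent
// place name from the slur embedded inside it.
const blocklist = ["cunt"];

const naive = (text: string): boolean =>
  blocklist.some((w) => text.toLowerCase().includes(w));

console.log(naive("Greetings from Scunthorpe!")); // true -- a false positive

// Word-boundary matching fixes this one case...
const bounded = (text: string): boolean =>
  blocklist.some((w) => new RegExp(`\\b${w}\\b`, "i").test(text));

console.log(bounded("Greetings from Scunthorpe!")); // false

// ...but is trivially evaded by "c u n t", leetspeak, Unicode
// lookalikes, and so on: keyword blocks leak in both directions.
```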
Re: tl;dr
But they have anyway.
You wrote a pretty good summary of the moderation process back in February; Mike responded and confirmed that everything you said was correct.
Various people have, indeed, explained comment moderation to Blue on many, many occasions. Like all trolls, always, he ignores explanations and then whines that nobody ever explains anything to him.
I recall suggesting to him that he start a blog, not just for all the usual reasons I tell him to start a blog but because he appears not to understand even the most basic facts about how comment moderation works, and starting a blog would help him learn.
He is, of course, not interested in learning. Only in whining about what a poor innocent victim he is.
Re: Re: Well, it's the plain ordinary Troll!
I don't remember for sure, but I think it was five.
If OOTB seriously finds it hard to believe that five people would be willing to flag his posts, well, that's because he's very very stupid.
(I don't flag his posts anymore; I've blocked them entirely. He and the other trolls convinced me, long ago, that writing a Greasemonkey script to hide anonymous posts was a better use of my time than reading any more delusional rants about zombies/pirates/delicious, delicious paint chips.)
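For what it's worth, such a userscript can be tiny. The sketch below is a guess at the approach, not that commenter's actual script; the CSS selectors and the "Anonymous Coward" author label are assumptions about Techdirt's markup, and the TypeScript generic would need stripping to run as the plain JavaScript Greasemonkey expects:

```typescript
// ==UserScript==
// @name   Hide anonymous comments (hypothetical sketch)
// @match  https://www.techdirt.com/*
// ==/UserScript==

// Hide every comment whose author line reads "Anonymous Coward".
// ".comment" and ".comment-author" are assumed selectors, not verified markup.
document.querySelectorAll<HTMLElement>(".comment").forEach((comment) => {
  const author = comment.querySelector(".comment-author")?.textContent ?? "";
  if (author.trim() === "Anonymous Coward") {
    comment.style.display = "none";
  }
});
```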
Re: Re: Re:
One of the reasons I like Mastodon is that it leaves federation mainly to the admins and moderators. Also, it is an open source protocol instead of a service, which means anyone can make their own Masto instance—even a single-user instance—and alter the software as they wish instead of using a service like Twitter that silos information, runs on outrage, and cares more about whether people use it than how they use it.
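To make the "protocol, not a service" point concrete: because Mastodon instances speak an open protocol (ActivityPub) over plain HTTP, a third-party client can look up a public account with nothing but standard web requests. A rough sketch, with placeholder instance and account names:

```typescript
// Any instance works, including a single-user one you run yourself.
const instance = "mastodon.example"; // placeholder

async function fetchActor(user: string): Promise<unknown> {
  // WebFinger: resolve a user@instance handle to an actor URL.
  const wf = await fetch(
    `https://${instance}/.well-known/webfinger?resource=acct:${user}@${instance}`
  ).then((r) => r.json());

  const actorUrl = wf.links.find((l: any) => l.rel === "self").href;

  // The actor object itself is public, structured ActivityPub JSON.
  return fetch(actorUrl, {
    headers: { Accept: "application/activity+json" },
  }).then((r) => r.json());
}

fetchActor("alice").then(console.log);
```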
Re: Re: Re: Re: Re:
“Protocols instead of platforms” is a very IndieWeb approach to social media, yes. Someone is already working on a federated Instagram-style protocol, too. A federated Tumblr-style protocol might not be too far behind.
I'm just an example faggot.
:)
The biggest problem is there is no perfect set of 'rules'.
As we've learned, "faggot" doesn't give me even the tiniest bit of pause, but someone else might be reduced to tears.
There is no way to protect both the crying person & my right to use a word.
(One can also insert the N word & other things into this cloudy point)
The platform should never be in the position to have to do a deep dive into the users to see if they qualify for merit badges that give them a pass for certain words/ideas.
If a word offends you, it is possible to let users keep their own list of verboten words that bother them, & they never have to see them.
This would be an improvement over the current system, where a word can be declared offensive & people can gang up and report an account for using it even if they aren't offended; they just want to get the user dinged for some reason.
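A sketch of what that user-side approach could look like; everything here is hypothetical, not any platform's actual API:

```typescript
// Per-user mute list: filtering happens on the reader's side, so one
// person's list never affects what anyone else can see or say.
interface Post {
  author: string;
  text: string;
}

function visibleTo(timeline: Post[], mutedWords: string[]): Post[] {
  const muted = mutedWords.map((w) => w.toLowerCase());
  return timeline.filter(
    (post) => !muted.some((w) => post.text.toLowerCase().includes(w))
  );
}

// The reader who mutes a word stops seeing it: no report, no ban,
// no effect on any other reader.
const timeline: Post[] = [
  { author: "a", text: "lovely weather today" },
  { author: "b", text: "some word that bothers me" },
];
console.log(visibleTo(timeline, ["bothers"])); // only the first post
```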
If the person's ideas offend you, block them & move on. Mass reporting to "win" just keeps the war going back and forth as each side wants to be the winner... not noticing that even if they won, they destroyed what they were fighting over in the first place.
But for several decades we've told everyone it's a zero sum game, everyone wins, everyone gets a ribbon, you are special, & it is never your fault.
They got R Kelly off of Spotify... now they have presented their 2nd round of demands for artists to be removed from the service. It's a pity they seem to have forgotten that if they dislike an artist, they don't have to listen... but they aren't the center of the universe, entitled to decide for everyone else.
But again... what do I know... I'm just a faggot.
Re:
...if everyone wins, then it's not a zero sum game.
Re:
Herein lies the problem: Twitter already has a wordfilter feature. If your solution was as great as you think it to be, Twitter would not be the flaming hellbird it is today. (Spoilers: Twitter is a hellbird sitting atop a flaming tree in the middle of a lake of lava.)
And what your idea fails to take into account is that while users do not have to see the words they have filtered, the users who are using those words are still saying them. This method of moderation—leaving it in the hands of users to “self-moderate” by way of user-side filters—makes posting on Twitter no better than posting on 4chan. Twitter needs to say “we don’t do that here” to people who break the rules, then boot those people if they keep breaking the rules. Without showing consequences for bad behavior on the platform, Twitter would lose—and has possibly already lost—control of what it deems “acceptable user behaviour”.
Your idea also does not take aim at other potential forms of harassment, such as sockpuppet accounts and routing around wordfilters via screenshots and Unicode characters. Moderation tactics for smaller communities do not scale, and Twitter is the proof. Any set of moderation tactics for a social media service should have a section on harassment, including examples of potential harassing actions and ways to both punish and prevent those actions.
Moderating a community, regardless of size, is a thankless pain-in-the-ass job. (I speak from experience.) But it is also a responsibility that requires actual effort to do well. Telling users that they must moderate themselves will send an implicit message: “I don’t care about your experience here.” If mods refuse to care, the users will, too. And we have seen what happens when mods refuse to care—after all, 4chan has been around for more than a decade.
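As for routing around wordfilters with Unicode (mentioned a couple of paragraphs up), a toy demonstration of why that works and of the partial countermeasure; the blocklist is made up and this is nobody's real filter:

```typescript
// Visually identical strings can be entirely different code points,
// so they sail right past a naive filter.
const blocked = (text: string): boolean => /bigot/i.test(text);

const fullwidth = "ｂｉｇｏｔ"; // fullwidth compatibility characters
console.log(blocked(fullwidth)); // false -- evades the filter

// Unicode NFKC normalization folds compatibility characters back:
console.log(blocked(fullwidth.normalize("NFKC"))); // true

// But NFKC does not touch cross-script homoglyphs (e.g. Cyrillic "о"
// standing in for Latin "o"); catching those needs a dedicated
// confusables table, and screenshots defeat text filters entirely.
```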
Re: Re:
Being able to not see the word wasn't enough; they needed to run the evil people off the platform.
Twitter gave in & screwed it all up.
The people screaming about how they were under attack during gamergate were really good at playing the victim card... but they were just as toxic as those they dogpiled.
Leslie Jones (?), the comedian from SNL, was called the N word & got people banned for doing it... funny how she used it in a derogatory way towards people and never faced anything.
Twitter gave in to the expectation that if enough of us whine, you have to ban them!!
So what if the account never tweeted at me...
So what if I never read the tweet in question...
So what if 300 people suddenly report 1 tweet...
My buddy said this person sucks & I need to report them!!!!
If only the first response had been: we're sorry you were offended; you can block offensive words, & if you feel the person is toxic you can block the account.
Instead Twitter gave in & gave birth to the tit-for-tat reporting and suspending of people. Some people could say far worse things than those they reported and never get punished, while the people they targeted were taken out over and over.
The moderation makes no sense; it's not uniformly applied.
The punishments are just fuel for the fire b/c you have SJWs celebrating that they got an Alt-Right account booted for a comment that had nothing to do with the SJW crowd; they just didn't like them.
I don't like reading the crap Blue spews on here, I'm perfectly happy for his crap to vanish into the void. My ass isn't chapped that he can still post here, and this is the giant flaw.
If some jackass wants to scream Nazi over & over why does it matter if they are still on the platform?
We have actual death threats & people doxed on Twitter... those people need bans...
"He made a comment about Transpeople I didn't like" doesn't need a ban.
I've had morons join conversations & come for me; after I hand them their ass, they then look for ways to inflict damage on my account: hey, you said faggot a year ago... get out.
You can care about the user experience without trying to cater to every group's unique individual demands.
Targeted harassment is 1 thing, but one needs to look beyond a single tweet without context... often you discover the person reporting stuck their dick in the hornets' nest & has fallen back on reporting to "win" & deleting their own tweets that were much more offensive, to play the victim better.
911 used to be for emergencies; then we had people calling to complain that fast food places were out of nuggets or skimped on the pickles... idiots who do that get fined and punished, so perhaps Twitter needs to try to be more like 911.
If you are reporting stupid shit enjoy your own timeout.
The current system is to lock the reported account so the 'victim' doesn't go to the media with how Twitter doesn't care about them (when, if you read the whole tweet exchange, they were telling off the banned guy, who ignored them, & that pissed them off even more).
Twitter isn't a community, Twitter is a shitty video game where you score points getting people put on time out, silencing ideas you disagree with, and victimhood.
Re: Re:
maybe a ;
Politics, for a very long time now, has been focused on the "it's a zero sum game where you have to have total victory" mindset.
Mix that with teaching kids not to compete, everyone wins, no one has to feel bad...
And people wonder why kids are fscked up these days.
Re: Re: Re:
Twitter admins should not have to make people use wordfilters.
If you want to improve a social media service, getting rid of people who act like shitheads is a good place to start.
Last time I checked, Anita Sarkeesian did not continually harass and threaten violence against every one of her critics on a daily basis. Unless you have some proof to the contrary, the Gators were more of a problem than their targets—most, if not all, of whom just wanted to use Twitter without worrying about a constant stream of abuse in their timelines.
Leslie Jones is a Black woman. She has far more right to use that word than the people who said it back at her.
Again: Blocking an account and filtering words do nothing to actually stop someone who breaks the rules. Those tactics only push the rulebreaker’s shitty behaviour out of sight, and that does no one any good.
People abusing an easily abusable system tend to break that system down into nonsense. The inability of moderation tactics to scale alongside the service does not help, either.
If someone gets the boot for breaking the rules, why should it matter who reported them and why they filed the report?
Silence is complicity. If Twitter refuses to ban Nazis and White supremacists even after they are reported, that refusal sends a message to those groups: “You are welcome here.” I do not know about you, but I would like my social media timelines to be as free of Nazis as possible.
Depends on the comment and the context. (And FYI, “trans people” is two words.)
You can also care about the user experience without forcing moderation upon a userbase that barely knows what they want from social media.
Retributive moderation for “false” or “annoying” reports, especially on a service as large as Twitter, would suck as much as the hands-off moderation you think the service should use. If I report a tweet that ends up deleted before Twitter can get to the report—what should happen to me because I filed a “frivolous” report?
If it is not a community as a whole, it is at least a service home to several unofficial sub-communities (Black Twitter, MAGA Twitter, Weird Twitter, Furry Twitter, Sports Twitter, Film Twitter…you get the point).
Why do you care so much if no one is forcing you to either pay attention to or play the game?
Re: some newer efforts.
https://kotaku.com/racist-twitch-trolls-defeated-by-talking-banana-1826115980
Re:
That assumes you are a broad-minded person. A narrow-minded person is offended by the idea that somebody could be saying something offensive, and some of those people dedicate their lives to destroying whatever offends them. Indeed, they go out of their way to find the offensive just so that they can act all offended.
(Just look how much time and effort blue puts into being offended by this site).
Re: Re:
If it doesn't offend somebody, it couldn't possibly interest anybody.
Re: Re: Re:
Good lord, we finally found the perfect motto for Twitter.
Who hated the process of due
Each film that he'd paid
Was DMCAed
And shoved up his ass with a screw
You kind of have to want to solve the problem.
You kind of have to WANT to solve the problem to begin with.
Other sites do much better, but then they are sincere about the problem, although it still requires ongoing effort and some vigilance.
Re: (was Re:)
If you aren't offending someone, you probably aren't saying anything significant or meaningful.
Re: Re: Well, it's the plain ordinary bullets of brute force censorship,
Also, I don't make up lies about the staff here, push conspiracy theories, or slag people off for the sake of it.
Stop your whining and shove off. We flag your posts because we don't want to see them, Blue.
Re: Re: Re: Well, it's the plain ordinary bullets of brute force censorship,
What legal right do you have to expect an answer?
Something something common law, AC. ;P