There Is No Magic Bullet For Moderating A Social Media Platform

from the it's-not-so-easy dept

It's kind of incredible how often we see people who assume that social media platforms are so bad at moderating content simply because they don't care or don't try hard enough. While it is true that these platforms can absolutely do a much better job (which we believe often involves giving end users more tools of their own), it's still amazing how many people think that deciding what content "belongs" and what content doesn't is somehow easy. Earlier this month, in Washington DC there was the Content Moderation at Scale "COMO" conference. It was a one-day event in which a bunch of companies revealed (sometimes for the first time) how they go about handling questions around content moderation. It was a follow-up to a similar event held at Santa Clara University back in February (for which we published a bunch of the papers that came out of the event).

For the DC event, we teamed up with the Center for Democracy and Technology to produce a live game for everyone at the event to play -- turning them all into a trust & safety team tasked with responding to "reported" content on a fictional social media platform. Emma Llanso from CDT and I ran the hour-long session, which included discussions of why people made the choices they did. The video of our session has now been posted; it helpfully edits out the "thinking/discuss amongst yourselves" parts of the process:

Obviously, many of the examples we chose were designed to be challenging (and many were based on real situations). But the process was useful and instructive. For each question there were four potential actions the "trust & safety" team could take, and on every single example at least one person chose each option. In other words, even when there was pretty strong agreement on the course of action to take, there was still at least some disagreement.

Now, imagine (1) having to do that at scale, with hundreds, thousands, hundreds of thousands, or even millions of pieces of "flagged" content showing up, (2) having to do it when you're not someone so interested in content moderation that you spent an entire day at a content moderation summit, and (3) having to do it quickly, where there are trade-offs and consequences to each choice -- including possible legal liability -- and no matter which option you choose, someone (or perhaps lots of someones) is going to get very upset.

Again, this is not to say that internet platforms shouldn't strive to do better -- they should. But one of the great things about attending both of these events is that they demonstrated how each internet platform is experimenting with very, very different approaches to these problems. Google and Facebook are trying to throw a combination of lots and lots of people plus artificial intelligence at the problem. Wikipedia and Reddit are trying to leverage their own communities to deal with these issues. Smaller platforms are taking different approaches; some are much more proactive, others reactive. And out of all that experimentation, even if mistakes are being made, we're finally starting to get some ideas on what works for this community or that community (and remember, not all communities work the same way).

As I mentioned at the event, we're looking to do a lot more with this concept of getting people to understand the deeper questions involved in the trade-offs around moderating content. Setting it up as something of a game made it both fun and educational, and we'd love some feedback as we look to do more with this concept.


Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: como, content moderation, game, tough choices, trade offs


Reader Comments

  1. Roger Strong (profile), 18 May 2018 @ 1:59pm

    Earlier this month, in Washington DC there was the Content Moderation at Scale "COMO" conference.

    This sounds to me like a close approximation of hell.

    "No censorship. At all. No matter what."

    "HOLD THE MODERATORS RESPONSIBLE!!!"

    "Blockchain. We must use blockchain. You've got to have blockchain."

    "Only natural persons may...."

    "Lockheed Martin has a proven drone strike based solution."

    "Outsource moderation to China."

    "Our AI product can already do it."

    Etc.

  2. Anonymous Coward, 18 May 2018 @ 2:02pm

    Who moderated the conference?

  3. Anonymous Coward, 18 May 2018 @ 2:06pm (flagged by the community)

    Well, it's the plain ordinary bullets of brute force censorship,

    as here on TD: no rules, no accountability, no way to know who has "reported" you or for what reason (if any), no "transparency" as to whether an Administrator approves, no up votes even possible, just tacit allegations that one has somehow offended "the community" (that's ideologically uniform of "leftist / liberal" views); no recognition that dissent is valued or protected here, no statement ever from Masnick that uncivil remarks are not wanted; indeed, the one time he responded to me being called "ignorant motherfucker" by his (now, perhaps then too) paid minion, that was "a joke".

    AND then, after doing NOTHING, Masnick has the chutzpah to say how hard moderating is!

    And then fanboys (who may well be mostly astro-turfing) jump in and allege that they know exactly how Techdirt works and why I'm censored (it's the way I write, anything but censoring of my VIEWS), even though I've asked and get no official response.

    Techdirt does "moderation" ONLY against dissenters and by sneaking rather than the right common law way: stated rules enforced on ALL, and any actions done out in the open.

  4. Anonymous Coward, 18 May 2018 @ 2:23pm

    What part of common law forces Mike to host your brain drool?

    Friendly reminder. You have to take your meds every day or they don’t work.

  5. Anonymous Coward, 18 May 2018 @ 2:26pm

    Re: Well, it's the plain ordinary bullets of brute force censorship,

    Would you like some cheese with that whine?

    AND then, after doing NOTHING, Masnick has the chutzpah to say how hard moderating is!

    Ever think that he doesn't do anything because he knows how hard moderating is?

    How do you look in a mirror and take yourself seriously? Nothing in your post is based in any kind of reality or logical fact. You only don't know what the rules are because you 1) deliberately ignore them after being told, and 2) don't understand technology; otherwise you would know the rules, since the code and technology that allow websites like this to exist have predefined rules on how you can use them.

  6. Ed, 18 May 2018 @ 2:27pm

    Re: Well, it's the plain ordinary bullets of brute force censorship,

    Just a heads up, walking is best done in private.

  7. Stephen T. Stone (profile), 18 May 2018 @ 2:27pm

    Re: Well, it's the plain ordinary bullets of brute force censorship,

    You cannot force Techdirt admins to reveal their moderation tactics any more than you can force the site to host your speech. And your comments get hidden because they are mostly irrelevant to the article at hand, and that is because your comments are mostly a way for you to take out your anger at Techdirt and Mike Masnick because ¯\_(ツ)_/¯

  8. Ninja (profile), 18 May 2018 @ 2:39pm

    Re: Well, it's the plain ordinary bullets of brute force censorship,

    "And then fanboys (who may well be mostly astro-turfing) jump in and allege that they know exactly how Techdirt works and why I'm censored (it's the way I write, anything but censoring of my VIEWS), even though I've asked and get no official response."

    I'm not sure what the criteria are for effectively blocking a comment here, other than the usual spam justifications, but the fact that you are able to post your drivel is a clear sign that you are not being blocked or censored by TD staff or systems. However, you are clearly being flagged by the community repeatedly. I could try to explain why that happens (you are an asshole), but let's just focus on the fact that you are being flagged yet your comments remain there. I just flagged you, by the way. Not sure if it's going to be hidden.

    The system TD uses allows anonymous comments, so there's no reliable way of blocking someone specifically. Considering you have never contributed in any meaningful way to any discussion and you are generally an asshole, it's pretty amusing that you are making such accusations. Oh well.

  9. Ninja (profile), 18 May 2018 @ 2:45pm

    I think the solution is multi-layered, including technical measures and general participation by authors and staff in the discussion. Big networks (ie: Facebook) can't possibly apply some of these measures in a general way. Maybe they could offer some overkill tools that simply remove anything remotely offensive for those who don't want to moderate actively, letting the community do its job, but give the owners of pages and profiles more fine-tuned mechanisms that they could couple with your usual social engineering (ie: participating in the discussion). That would make them closer to what TD is: a smaller, more manageable community.

    I'm just throwing out random thoughts; I wouldn't possibly know what to do, as I don't even have any blog/site/community to manage lmao

  10. Anonymous Coward, 18 May 2018 @ 2:51pm

    Re: Well, it's the plain ordinary bullets of brute force censorship,

    It's funny because people literally tell you why they reported you.

  11. Gary (profile), 18 May 2018 @ 2:54pm

    Re: Well, it's the plain ordinary Troll!

    Transparency: I've flagged you many times for trolling. But since you have all the answers, please point us to your website so we can see how it should be done. Looking forward to how you run things on your end.

  12. Christenson, 18 May 2018 @ 2:54pm

    Re: Re: Well, it's the plain ordinary bullets of brute force censorship,

    Might not be able to *force* Techdirt to reveal its tactics, but they are nonetheless pretty obvious... and *not* at scale. I doubt the technique would work if Putin decided to put smart people up to ruining the place.

    If it's spam, however detected, it goes to /dev/null.
    If it's offensive, it gets flagged by the community and hidden, assuming Techdirt agrees with the community vote.
    If it's great, it gets upvoted -- remember, there's a "best of the week" contest every week, with categories for both what Techdirt and what the crowd thought were the *best* comments.
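
    (A minimal sketch of that triage in Python -- the names and the flag threshold are invented for illustration, this is not Techdirt's actual code:)

        # Hypothetical triage sketch of the flow described above.
        FLAG_HIDE_THRESHOLD = 5  # invented number of community flags before hiding

        def triage(is_spam: bool, flags: int, upvotes: int,
                   staff_agrees: bool = True) -> str:
            if is_spam:
                return "drop"     # spam, however detected, goes to /dev/null
            if flags >= FLAG_HIDE_THRESHOLD and staff_agrees:
                return "hide"     # hidden behind a click, not deleted
            if upvotes > 0:
                return "promote"  # candidate for the weekly best-comments post
            return "show"

        print(triage(is_spam=False, flags=7, upvotes=0))   # hide
        print(triage(is_spam=False, flags=0, upvotes=12))  # promote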

  13. Anonymous Coward, 18 May 2018 @ 3:42pm

    Re: Well, it's the plain ordinary bullets of brute force censorship,

    "no recognition that dissent is valued or protected here,"

    You don't dissent. You troll and throw immature fits - like this one. You try to turn one of the most open forums I've seen on a private website into your attention-fest and act like it's one jackboot shy of Nazi Germany if anyone disagrees with you. I've dissented on several articles over the last ten plus years of commenting. I've never had a single comment flagged by the community. You don't dissent in a respectful way. You don't do anything in a respectful way. You act like an entitled child in a chocolate factory.

    "no statement ever from Masnick that uncivil remarks are not wanted"

    You're making demands. Since you like to cite not-legal "common law" bullshit as if it dictates human interactions, cite what law, common or otherwise, requires Masnick to answer your questions. You haven't issued a subpoena. What legal right do you have to expect an answer?

  14. Stephen T. Stone (profile), 18 May 2018 @ 3:55pm

    Re:

    I would say that part of the solution is never letting a social network service get as big as Facebook or Twitter. Moderation of a network that size cannot happen without shortcuts like keyword blocks that will run into the Scunthorpe problem.
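
    (To make the Scunthorpe problem concrete, here is a minimal sketch with a hypothetical blocklist -- not any real platform's list -- showing how a naive substring check misfires:)

        # Hypothetical blocklist; the bug is the substring check itself.
        BLOCKLIST = ("cunt", "ass")

        def naive_keyword_block(message: str) -> bool:
            text = message.lower()
            return any(term in text for term in BLOCKLIST)

        print(naive_keyword_block("Greetings from Scunthorpe"))  # True (false positive)
        print(naive_keyword_block("I passed the exam"))          # True ("ass" in "passed")
        print(naive_keyword_block("hello world"))                # False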

  15. Thad, 18 May 2018 @ 4:09pm

    Re: tl;dr

    You cannot force Techdirt admins to reveal their moderation tactics

    But they have anyway.

    You wrote a pretty good summary of the moderation process back in February; Mike responded and confirmed that everything you said was correct.

    Various people have, indeed, explained comment moderation to Blue on many, many occasions. Like all trolls, always, he ignores explanations and then whines that nobody ever explains anything to him.

    I recall suggesting to him that he start a blog, not just for all the usual reasons I tell him to start a blog but because he appears not to understand even the most basic facts about how comment moderation works, and starting a blog would help him learn.

    He is, of course, not interested in learning. Only in whining about what a poor innocent victim he is.

  16. Anonymous Coward, 18 May 2018 @ 4:23pm

    Re: Re:

    Social networks grow to Facebook and Twitter size because social networks do not divide up neatly, and a single common service is much easier to use than multiple federated sites.

  17. Thad, 18 May 2018 @ 4:28pm

    Re: Re: Well, it's the plain ordinary Troll!

    At one point we even counted the number of flags it takes to hide a post.

    I don't remember for sure, but I think it was five.

    If OOTB seriously finds it hard to believe that five people would be willing to flag his posts, well, that's because he's very very stupid.

    (I don't flag his posts anymore; I've blocked them entirely. He and the other trolls convinced me, long ago, that writing a Greasemonkey script to hide anonymous posts was a better use of my time than reading any more delusional rants about zombies/pirates/delicious, delicious paint chips.)

  18. Stephen T. Stone (profile), 18 May 2018 @ 4:32pm

    Re: Re: Re:

    One of the reasons I like Mastodon is that it leaves federation mainly to the admins and moderators. Also, it is an open source protocol instead of a service, which means anyone can make their own Masto instance—even a single-user instance—and alter the software as they wish instead of using a service like Twitter that silos information, runs on outrage, and cares more about whether people use it than how they use it.

  19. Thad, 18 May 2018 @ 5:14pm

    Re: Re: Re: Re:

    I've never been into the whole microblogging thing (I think it continues a disconcerting trend of dumbing down complex issues into quotable soundbites, though it's useful for jokes and sharing simple information like "The meeting is at the following location and time"), but I find Mastodon's approach to it pretty interesting.

  20. Stephen T. Stone (profile), 18 May 2018 @ 5:46pm

    Re: Re: Re: Re: Re:

    “Protocols instead of platforms” is a very IndieWeb approach to social media, yes. Someone is already working on a federated Instagram-style protocol, too. A federated Tumblr-style protocol might not be too far behind.

  21. That Anonymous Coward (profile), 18 May 2018 @ 5:52pm

    ... but what do I know...
    I'm just an example faggot.

    :)

    The biggest problem is there is no perfect set of 'rules'.
    As we've learned faggot doesn't even give me the tiniest bit of pause, but someone else might be reduced to tears.
    There is no way to protect the crying person & my right to use a word.
    (One can also insert the N word & other things into this cloudy point)
    The platform should never be in the position to have to do a deep dive into the users to see if they qualify for merit badges that give them a pass for certain words/ideas.

    If the word offends you, it is possible to let users have their own list of verboten words that bother them & they never have to see them.

    This would improve over the current system where a word can be declared offensive & people can gang up and report an account for using it even if they aren't offended, they want to get the user dinged for some reason.

    If the person's ideas offend you, block them & move on. Mass reporting to "win" just keeps the war going back and forth as each side wants to be the winner... not noticing that even if they won, they destroyed what they were fighting over in the first place.
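
    (A minimal sketch of that kind of user-side filtering in Python -- invented names; each reader keeps their own mute list and block list, and nothing gets removed platform-wide:)

        # Hypothetical per-user filter: muted words and blocked accounts
        # apply only to this reader's own view of the timeline.
        class UserFilter:
            def __init__(self, muted_words=(), blocked_accounts=()):
                self.muted_words = {w.lower() for w in muted_words}
                self.blocked_accounts = set(blocked_accounts)

            def visible(self, author: str, message: str) -> bool:
                if author in self.blocked_accounts:
                    return False
                text = message.lower()
                return not any(w in text for w in self.muted_words)

        me = UserFilter(muted_words=["slur"], blocked_accounts=["@jerk"])
        print(me.visible("@friend", "nice day out"))    # True
        print(me.visible("@friend", "some slur here"))  # False (muted word)
        print(me.visible("@jerk", "nice day out"))      # False (blocked account)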

    But for several decades we've told everyone it's a zero sum game, everyone wins, everyone gets a ribbon, you are special, & it is never your fault.

    They got R Kelly off of Spotify... now they have presented their 2nd round of demands for artists to be removed from the service. It's a pity they seem to have forgotten that if they dislike an artist, they don't have to listen... but they aren't the center of the universe and shouldn't be allowed to decide for everyone else.

    But again... what do I know... I'm just a faggot.

  22. Thad, 18 May 2018 @ 5:57pm

    Re:

    it's a zero sum game, everyone wins

    ...if everyone wins, then it's not a zero sum game.

  23. Stephen T. Stone (profile), 18 May 2018 @ 7:45pm

    Re:

    If the word offends you, it is possible to let users have their own list of verboten words that bother them & they never have to see them. This would improve over the current system where a word can be declared offensive & people can gang up and report an account for using it even if they aren't offended, they want to get the user dinged for some reason.

    Herein lies the problem: Twitter already has a wordfilter feature. If your solution was as great as you think it to be, Twitter would not be the flaming hellbird it is today. (Spoilers: Twitter is a hellbird sitting atop a flaming tree in the middle of a lake of lava.)

    And what your idea fails to take into account is that while users do not have to see the words they have filtered, the users who are using those words are still saying them. This method of moderation—leaving it in the hands of users to “self-moderate” by way of user-side filters—makes posting on Twitter no better than posting on 4chan. Twitter needs to say “we don’t do that here” to people who break the rules, then boot those people if they keep breaking the rules. Without showing consequences for bad behavior on the platform, Twitter would lose—and has possibly already lost—control of what it deems “acceptable user behaviour”.

    Your idea also does not take aim at other potential forms of harassment, such as sockpuppet accounts and routing around wordfilters via screenshots and Unicode characters. Moderation tactics for smaller communities do not scale, and Twitter is the proof. Any set of moderation tactics for a social media service should have a section on harassment, including examples of potential harassing actions and ways to both punish and prevent those actions.
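
    (A quick hypothetical illustration of the Unicode point -- a Cyrillic lookalike slips past a naive filter, and standard normalization does not bridge scripts:)

        import unicodedata

        # "\u0430" is Cyrillic small a, visually identical to Latin "a".
        tweet = "you utter b\u0430dword"   # renders on screen as "you utter badword"

        print("badword" in tweet)          # False -- the filter sees different characters
        print(unicodedata.normalize("NFKC", tweet).find("badword"))  # -1, still missed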

    Moderating a community, regardless of size, is a thankless pain-in-the-ass job. (I speak from experience.) But it is also a responsibility that requires actual effort to do well. Telling users that they must moderate themselves will send an implicit message: “I don’t care about your experience here.” If mods refuse to care, the users will, too. And we have seen what happens when mods refuse to care—after all, 4chan has been around for more than a decade.

  24. That Anonymous Coward (profile), 18 May 2018 @ 8:23pm

    Re: Re:

    Twitter never made people use the damn feature.
    Being able to not see the word wasn't enough, they needed to run the evil people off the platform.

    Twitter gave in & screwed it all up.

    The people screaming how they were under attack during gamergate were really good at playing the victim card... but they were just as toxic as those they dogpiled.

    Leslie Jones (?) the comedian from SNL was called the N word & got people banned for doing it... funny she used it in a derogatory way towards people and never faced anything.

    Twitter gave in to the expectation that if enough of us whine, you have to ban them!!
    So what if the account never tweeted at me...
    So what if I never read the tweet in question...
    So what if 300 people suddenly report 1 tweet...
    My buddy said this person sucks & I need to report them!!!!

    If the first response was - we're sorry you were offended you can block offensive words & if you feel the person is toxic you can block the account.
    Instead Twitter gave in & gave birth to the tit-for-tat reporting and suspending of people. Some people could say way worse things than those they reported and never get punished, while the people they targeted were taken out over and over.
    The moderation makes no sense; it's not uniformly adhered to.
    The punishments are just fuel to the fire b/c you have SJW celebrating they got an Alt-Right account booted for a comment that had nothing to do with the SJW crowd they just didn't like them.

    I don't like reading the crap Blue spews on here; I'm perfectly happy for his crap to vanish into the void. My ass isn't chapped that he can still post here, and this is the giant flaw.
    If some jackass wants to scream Nazi over & over why does it matter if they are still on the platform?

    We have actual death threats & people doxed on Twitter... those people need bans...
    He made a comment about Transpeople I didn't like doesn't need a ban.
    I've had morons join in on conversations & come for me; after I hand them their ass, they then look for ways to inflict damage on my account & hey, you said faggot a year ago... get out.

    You can care about the user experience without trying to cater to every group's unique individual demands.

    Targeted harassment is 1 thing, but one needs to look beyond a single tweet without context... often you discover the person reporting stuck their dick in the hornet's nest & has fallen back to reporting to "win" & deleting tweets that were much more offensive, to play victim better.

    911 used to be for emergencies; then we had people complaining that fast food places were out of nuggets or didn't have enough pickles... idiots who do that get fined and punished. Perhaps Twitter needs to try to be more like 911.
    If you are reporting stupid shit, enjoy your own timeout.

    The current system is to lock the reported account so the 'victim' doesn't go to the media with how Twitter doesn't care about them (when, if you read the whole tweet exchange, they were telling off the banned guy, who ignored them & that pissed them off more).

    Twitter isn't a community, Twitter is a shitty video game where you score points getting people put on time out, silencing ideas you disagree with, and victimhood.

  25. That Anonymous Coward (profile), 18 May 2018 @ 8:25pm

    Re: Re:

    maybe I needed a different character there...
    maybe a ;

    Politics, for a very long time now, has been focused on the idea that it's a zero sum game where you have to have total victory.

    Mix that with teaching kids not to compete, everyone wins, no one has to feel bad...

    And people wonder why kids are fscked up these days.

  26. Stephen T. Stone (profile), 18 May 2018 @ 8:54pm

    Re: Re: Re:

    Twitter never made people use the damn feature.

    Twitter admins should not have to make people use wordfilters.

    Being able to not see the word wasn't enough, they needed to run the evil people off the platform.

    If you want to improve a social media service, getting rid of people who act like shitheads is a good place to start.

    The people screaming how they were under attack during gamergate were really good at playing the victim card... but they were just as toxic as those they dogpiled.

    Last time I checked, Anita Sarkeesian did not continually harass and threaten violence against every one of her critics on a daily basis. Unless you have some proof to the contrary, the Gators were more of a problem than their targets—most, if not all, of whom just wanted to use Twitter without worrying about a constant stream of abuse in their timelines.

    Leslie Jones (?) the comedian from SNL was called the N word & got people banned for doing it... funny she used it in a derogatory way towards people and never faced anything.

    Leslie Jones is a Black woman. She has far more right to use that word than the people who said it back at her.

    If the first response was - we're sorry you were offended you can block offensive words & if you feel the person is toxic you can block the account.

    Again: Blocking an account and filtering words do nothing to actually stop someone who breaks the rules. Those tactics only push the rulebreaker’s shitty behaviour out of sight, and that does no one any good.

    The moderation makes no sense; it's not uniformly adhered to.

    People abusing an easily abusable system tend to break it into nonsense. The inability of moderation tactics to scale alongside the service does not help, either.

    The punishments are just fuel to the fire b/c you have SJW celebrating they got an Alt-Right account booted for a comment that had nothing to do with the SJW crowd they just didn't like them.

    If someone gets the boot for breaking the rules, why should it matter who reported them and why they filed the report?

    If some jackass wants to scream Nazi over & over why does it matter if they are still on the platform?

    Silence is complicity. If Twitter refuses to ban Nazis and White supremacists even after they are reported, that refusal sends a message to those groups: “You are welcome here.” I do not know about you, but I would like my social media timelines to be as free of Nazis as possible.

    He made a comment about Transpeople I didn't like doesn't need a ban.

    Depends on the comment and the context. (And FYI, “trans people” is two words.)

    You can care about the user experience without trying to cater to every group's unique individual demands.

    You can also care about the user experience without forcing moderation upon a userbase that barely knows what they want from social media.

    Twitter needs to try to be more like 911. If you are reporting stupid shit, enjoy your own timeout.

    Retributive moderation for “false” or “annoying” reports, especially on a service as large as Twitter, would suck as much as the hands-off moderation you think the service should use. If I report a tweet that ends up deleted before Twitter can get to the report—what should happen to me because I filed a “frivolous” report?

    Twitter isn't a community

    If it is not a community as a whole, it is at least a service home to several unofficial sub-communities (Black Twitter, MAGA Twitter, Weird Twitter, Furry Twitter, Sports Twitter, Film Twitter…you get the point).

    Twitter is a shitty video game where you score points getting people put on time out, silencing ideas you disagree with, and victimhood.

    Why do you care so much if no one is forcing you to either pay attention to or play the game?

  27. bob, 18 May 2018 @ 9:57pm

    Re: some newer efforts.

    Check out the article about a talking banana on Twitch and its owner's efforts to stop people from making it say bad words.

    https://kotaku.com/racist-twitch-trolls-defeated-by-talking-banana-1826115980

  28. Anonymous Coward, 19 May 2018 @ 1:44am

    Re:

    If the person's ideas offend you, block them & move on.

    That assumes you are a broad minded person. A narrow minded person is offended by the idea that somebody could be saying something that is offensive, and some of those dedicate their lives to destroying that which offends them. Indeed, they go out of their way to find the offensive just so that they can act all offended.

    (Just look at how much time and effort blue puts into being offended by this site.)

  29. discordian_eris (profile), 19 May 2018 @ 3:50am

    Re: Re:

    If it doesn't offend somebody, it couldn't possibly interest anybody.

  30. Stephen T. Stone (profile), 19 May 2018 @ 6:20am

    Re: Re: Re:

    Good lord, we finally found the perfect motto for Twitter.

  31. Anonymous Coward, 19 May 2018 @ 8:26am

    There once was an out of the blue
    Who hated the process of due
    Each film that he'd paid
    Was DMCAed
    And shoved up his ass with a screw

  32. Toom1275 (profile), 19 May 2018 @ 1:24pm

    Re: Re: tl;dr

    Well if OOTB were to ever slip up and tell the truth, it'd damage his painstakingly cultivated reputation as a batshit-insane pathological liar.

  33. JEDIDIAH, 19 May 2018 @ 3:16pm

    You kind of have to want to solve the problem.

    Big Networks like Facebook go out of their way to promote trolling. They have no interest in promoting useful discussion. Although they are happy to censor people for utterly bizarre reasons.

    You kind of have to WANT to solve the problem to begin with.

    Other sites do much better, but then they are sincere about the problem, though it still requires ongoing effort and some vigilance.

  34. JEDIDIAH, 19 May 2018 @ 3:18pm

    Re: (was Re:)

    The original version is far too weak.

    If you aren't offending someone, you probably aren't saying anything significant or meaningful.

  35. Anonymous Coward, 20 May 2018 @ 12:25pm

    Re: Re:

    While this is generally true, there are cases in which the offending party crosses the line into what is considered assault. This may be continuous calling, texting, and/or other forms of harassment, which many times eventually lead to a physical assault.

  36. Wendy Cockcroft, 22 May 2018 @ 7:09am

    Re: Re: tl;dr

    Blue has no intention of starting a blog. Nobody would read it. I remember when I was convinced of his/her identity and outed it. The blog I read had sod-all comments on it, because nobody is interested in the rantings of a sociopath who puts corporations' rights above people's.

  37. Wendy Cockcroft, 22 May 2018 @ 7:11am

    Re: Re: Well, it's the plain ordinary bullets of brute force censorship,

    I dissent often enough. I'm not sure if any of my comments have ever been hidden, but if they were, I'm grown up enough to accept that at least five people considered them spam.

    Also, I don't make up lies about the staff here, push conspiracy theories, or slag people off for the sake of it.

    Stop your whining and shove off. We flag your posts because we don't want to see them, Blue.

  38. Wendy Cockcroft, 22 May 2018 @ 7:12am

    Re: Re: Re: Well, it's the plain ordinary bullets of brute force censorship,

    What legal right do you have to expect an answer?

    Something something common law, AC. ;P

  39. Wendy Cockcroft, 22 May 2018 @ 7:14am

    Re: Re: Re: Re: Re:

    There's Twitlonger for longer posts, you can thread your posts, or you can link to a blog post, etc.
