If You're Complaining About COVID-19 Misinformation Online AND About Section 230, You're Doing It Wrong

from the section-230-is-helping-quell-disinformation dept

I remain perplexed by people who insist that internet platforms "need to do more" to fight disinformation and at the same time insist that we need to "get rid of Section 230." This almost always comes from people who don't understand content moderation or Section 230 -- or who think that, because of Section 230's liability protections, sites have no incentive to moderate content on their platforms. Of course platforms have tons of incentive to moderate: much of it social pressure, but also the fact that if they're just filled with garbage they'll lose users (and advertisers).

But a key point in all of these debates about content moderation with regard to misinformation around COVID-19 is that for it to work in any way, there needs to be flexibility -- otherwise it's going to be a total mess. And what gives internet platforms that flexibility? Why, it's that very same Section 230. Because Section 230 makes it explicit that sites don't face liability for their moderation choices, they can ramp up efforts -- as they have -- to fight off misinformation without fear of facing liability for making the "wrong" choices.

Without Section 230, these businesses would have had to vet every single post’s truthfulness and legality. Not only would that have bogged down businesses’ response, it also would have been impossible — we knew little about coronavirus when it first hit and don’t know much more today.

Put simply, Section 230 helps make the internet safer, and that, in turn, has let us all rely on it to keep life moving, even while we’re stuck inside.

I'd argue it's even more stark than that article lays out. Not only did we know little about the coronavirus at the beginning, we still don't know very much, and many of the early messages from official sources turned out to be wrong. Indeed, one of the ways we've zeroed in on more accurate information is by being able to discuss ideas freely and converge on what makes the most sense.

This whole process involves experimentation on both sides of this market. The platform players get to experiment with different methods and ideas for content moderation, while users get to discuss and debate different ideas about COVID-19. But both of those only happen with the structural balance provided by Section 230. Platforms can experiment to figure out what works best to enable reasonable debate and move people towards more accurate analysis -- while minimizing the impact of blatantly wrong information, misinformation, and disinformation. And users get to discuss and debate ideas to get closer to the truth themselves. Without the balance of Section 230, you create massive structural problems that prevent most of that from happening.

Without 230, companies face the classic moderator's dilemma. Doing no moderation at all is one option -- but then that lets disinformation flow freely, and companies might face liability for that disinformation. Alternatively, they could moderate very thoroughly, and pull down lots of information. But that might actually include good and useful information. For example, the discussion over whether or not people should wear masks as the pandemic began was all over the place, with the WHO and the CDC initially urging people not to wear masks. However, in part because of widespread discussions and evidence presented on social media, the narrative shifted, and eventually the CDC and WHO came on board with the recommendation to wear masks.

Without 230, what would a platform do regarding the mask discussion? Someone at the company could unilaterally decide that masks are a good thing -- but then face outrage from those who supported the WHO and CDC position, who would argue that the platform is spreading dangerous misinformation that could lead to hoarding and fewer masks for medical professionals. And that alone might create lawsuits (in the absence of 230). Or they could follow what the WHO and CDC said initially... and then might feel obligated to silence and delete the conversations which argued, persuasively, why masks actually are valuable. And that would create all sorts of problems as well. At the same time, there is actual misinformation about what types of masks to wear and how to wear them -- and there are strong arguments for why platforms should be able to moderate that.

But all of that becomes much trickier, and much riskier, without Section 230 -- and the greatest likelihood is that platforms will seek to avoid liability, which will mean censoring plenty of good and important information (such as how to make or wear masks and why they're so important). It's Section 230 that has enabled both the platforms' ongoing adjustments to their moderation techniques and the important public discussions that allow people to share, debate, and discuss as we figure out what is going on and how best to deal with it.


Filed Under: cda 230, content moderation, content moderation at scale, covid-19, disinformation, misinformation, section 230


Reader Comments



  • Anonymous Coward, 1 May 2020 @ 1:52pm

    Those people who would remove Section 230 are not interested in discussion or objective truth, but rather in forcing their politics onto everybody else, while hiding their own actions from scrutiny.

    • Aaron Wolf (profile), 1 May 2020 @ 4:52pm

      Re: forcing politics

      It seems far more likely to me that the anti-230 folks are just ignorantly grasping at the idea of regulation.

      They may have their political views, but they are not so thoughtful about forcing their politics and hiding their actions etc. Rather, they just want magic. It's like that classic YouTube skit about "the expert".

      The anti-230 folks are just non-experts asking the experts to do magic.

      • Stephen T. Stone (profile), 1 May 2020 @ 5:21pm

        It seems far more likely to me that the anti-230 folks are just ignorantly grasping at the idea of regulation.

        Possible, but unlikely. The anti-230 side, from what I can tell, seems more focused on the idea of “fairness”. They want to know why “liberal” speech is allowed on Twitter, Facebook, etc. but “conservative” speech (allegedly) isn’t. Such fools think 230 is a barrier to “fairness”. But they don’t stop to think about what speech is being banned, why it’s being banned, and how 230 allows any platform to make that decision for itself.

        • Mike Masnick (profile), 2 May 2020 @ 12:12am

          Re:

          I think Aaron actually may have a good point. There are two groups of people attacking 230. He's describing one, and you're describing the other (of course, it's too simplistic to say there are just two motivations... but... for simplicity's sake we'll run with it).

      • Scary Devil Monastery (profile), 4 May 2020 @ 5:08am

        Re: Re: forcing politics

        "The anti-230 folks are just non-experts asking the experts to do magic."

        Not usually, no. The most vocal anti-230 crowd stands out as having one of two common denominators.

        The first one is the type who hold opinions which would have them banned from most platforms where a sizeable audience exists, and who are incensed that the platforms in question are free to deny them the use of the platform's soapbox. That crowd sees the death of 230 as the guarantee that Twitter will henceforth be forced to let them spew bile on whatever minority they've a hateboner for.

        The second type thinks getting rid of 230 will allow them to SLAPP any unfavorable mention of themselves out of existence - and here we find the by now rather plentiful industries of fraud and conmanship engaged in peddling whatever snake oil they can - whether that's copyright/patent trolls, ambulance chasers, or purveyors of silver tonics with the lamentable side effect of coloring people permanently a vivid hue of zombie smurf.

        Very few of the people today nagging about Section 230, encryption, or any other legal or technical enabler of mass communication actually lack an agenda.

    • Anonymous Coward, 2 May 2020 @ 2:55am

      Re:

      Really, complaints about Section 230 all mean the same thing: "How dare you not have my opinion! You should be punished for it!"

      • PaulT (profile), 2 May 2020 @ 3:10am

        Re: Re:

        As I often say, it's more that they have seen the amount of extra traffic (and thus revenue) they get by piggybacking on a general mainstream platform -- more than they could ever generate on their own. So, they want the platform to be forced to keep them there against its wishes.

        There will be some true believers who genuinely don't understand why their favoured politics are treated unequally (in the rare cases where that's even remotely true), but a lot of them are grifters who don't like their free ride on the gravy train coming to an end.

  • This comment has been flagged by the community.
    Anonymous Coward, 1 May 2020 @ 3:00pm

    YouTube

    YouTube can take down any video for any reason. Once a video is taken down, how do you prove whether it was the truth? YouTube is the judge.

    When a video shows reporters attacking doctors, methinks the motive is to avoid reporters looking bad in the face of science, not a violation of its own guidelines.

  • tz1 (profile), 1 May 2020 @ 6:09pm

    At what point does moderation become publisher editorial?

    The original reason for Sec. 230 was to protect PLATFORMS. If people post errant or fraudulent classified ads or such, it makes no sense to hold the printer liable. There can be UNIFORM moderation to remove offensive words (which should be listed - your post contains XXX which violates our ToS). The error which is growing is the Trust and Safety is now the Ministry of Truth picking not the most reasoned posts based on evidence with shown work, but the narrative. So they would censor lockdowns on 2/1/20, but censor anti-lockdown two months later. The truth didn't change. A "We Disagree (and/or disapprove)" note is different than outright censorship. If these companies wish to be editors for their publication, they are publishers (and have claimed 1st amendment protection!) and not platforms. There is a line, but they crossed it a long time ago and wish to have it both ways. Protections both as publishers and platforms depending on the context. They should be forced to choose one or the other.

    • Stephen T. Stone (profile), 1 May 2020 @ 6:58pm

      If these companies wish to be editors for their publication, they are publishers (and have claimed 1st amendment protection!) and not platforms.

      Insofar as it concerns 47 U.S.C. § 230, that distinction doesn’t matter. Any platform that moderates user-generated content is covered by 230. Twitter can ban basically any kind of speech it wants without legal consequence. So can any other platform.

      Let’s say you run a small Mastodon instance. You decide that you don’t want racial slurs used on that instance, under any circumstances. What would you do, then, if the government said “you must allow people to say those things”? Because 230 stops the government from doing exactly that.

      • This comment has been flagged by the community.
        Anonymous Coward, 1 May 2020 @ 9:38pm

        Re:

        A "platform" is just that: a "dumb pipe" for speech that is NOT moderated based on political views. A PUBLISHER alters content to suit its editorial method, and selective censorship is one such way.

        Masnick clearly censors postings that make sound attacks against Section 230, and each of his decisions is being logged for future reveal at a place he cannot censor.

        • Mike Masnick (profile), 3 May 2020 @ 5:22pm

          Re: Re:

          A "platform" is just that: a "dumb pipe" for speech that is NOT moderated based on political views.

          There's a lot going on in this statement, none of which is correct. The law does not refer to "platforms" so your legal distinction between platform and publisher is not only wrong, but meaningless.

          Second, if you actually meant an interactive computer service -- which is what the law talks about -- you're still totally wrong. The law makes no reference to "dumb pipes" and the history and intent of the law shows that the purpose is actually the exact opposite of what you claim. I mean, the law literally calls out that it's designed to encourage the "development and utilization of blocking and filtering technologies." So, at no point was it supposed to allow all things through. Indeed, the purpose behind the law was the reverse.

          Indeed, this is doubly reinforced by the parts of the law that specifically call out that an ICS may not be held liable for its moderation choices -- and courts have long held that to be quite broad. So your claims are simply silly.

          Masnick clearly censors postings that make sound attacks against Section 230, and each of his decisions is being logged for future reveal at a place he cannot censor.

          I do no such thing, but if I did it would be perfectly legal and allowed under 230. The fact that your dumb posts sometimes get flagged by the community and/or caught by the spam filter does not change any aspect of that. We do not "censor" anyone, nor could we, since you have every right to post elsewhere -- as you indicate you intend to do. And I eagerly await you revealing this "log" because it's not going to show anything nefarious in the slightest, because it literally cannot.

          Indeed, we're not only obviously well within what 230 protects, we are much more open and allowing of insanely ignorant takes such as yours -- which we do allow on the site and do not remove. Plenty of sites would delete such delusional comments outright -- again, with no legal consequences whatsoever.

        • Anonymous Coward, 3 May 2020 @ 6:21pm

          Re: Re:

          How's that mailing list Richard Liebowitz defense fund coming along, John Smith?

    • PaulT (profile), 1 May 2020 @ 11:56pm

      Re: At what point does moderation become publisher editorial?

      Your premise is wrong, and section 230 does not say what you think it says. But even if it did, the correct remedy is "use a competing service", not "continue to provide revenue for the service you dislike while whining that they're not bowing to your opinions".

    • Mike Masnick (profile), 2 May 2020 @ 12:18am

      Re: At what point does moderation become publisher editorial?

      The original reason for Sec. 230 was to protect PLATFORMS.

      No. The original reason for Section 230 was to protect free speech on the internet, and to enable internet services to feel free to create family friendly spaces. I mean, this history is pretty well known.

      There can be UNIFORM moderation to remove offensive words (which should be listed - your post contains XXX which violates our ToS).

      Spoken like someone who has never had to moderate a platform. If you had a list of forbidden words, people would immediately figure out how to write the same thing using slight modifications. This is an extremely naive take.

      The error which is growing is the Trust and Safety is now the Ministry of Truth picking not the most reasoned posts based on evidence with shown work, but the narrative.

      Oh, sorry. This is even more naive.

      So they would censor lockdowns on 2/1/20, but censor anti-lockdown two months later. The truth didn't change.

      Actually, it did. "Truth" in this context is our collective understanding of what is likely to make everyone safest. And in a world where information is constantly changing and we don't yet know what's accurate, the way this works is as more information comes out, we adjust our ideas and theories about it.

      That and, oh, no one ever censored news about lockdowns on 2/1/20. But whatever.

      If these companies wish to be editors for their publication, they are publishers (and have claimed 1st amendment protection!) and not platforms.

      First, there is no legal distinction, so this is a silly point. Second, yes, they are publishers for any content they create themselves, but they are not publishers of content they moderate.

      Protections both as publishers and platforms depending on the context. They should be forced to choose one or the other.

      Again, this is an incredibly naive take. Again, there is no legal distinction here. The issue is that some actions they take are as interactive computer services -- hosting content for others. Some actions they take are as publishers -- creating their own content. You seem to be confusing the two and assuming that one can't be the other. It depends on the specific action -- and moderating content is firmly within the arena of being an interactive service, not creating your own content.

      • PaulT (profile), 2 May 2020 @ 12:46am

        Re: Re: At what point does moderation become publisher editorial?

        Spoken like someone who has never had to moderate a platform. If you had a list of forbidden words, people would immediately figure out how to write the same thing using slight modifications. This is an extremely naive take.

        Also, you run into this issue: https://en.wikipedia.org/wiki/Scunthorpe_problem. A person would have to be very naive to think that such a filter has not already been tried, and must be pretty new to the internet to not have seen examples of these problems in the wild.

        Also, any attempt to react to the ways people try to get around the blocks would inevitably also block perfectly legitimate content, and correctly identifying context is very difficult even for human readers at times.

        Once again, someone who thinks that this is a simple problem to fix does not understand the issue, and that's even without the false conflation of publisher and platform.
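
        To make that concrete, here is a minimal sketch of a naive substring blocklist -- the word list and example posts are hypothetical, not any platform's actual filter -- failing in both directions at once:

            # A naive substring blocklist -- hypothetical word list and examples only.
            BLOCKED_WORDS = ["cunt", "viagra"]

            def naive_filter(post: str) -> bool:
                """Return True if the post should be blocked."""
                text = post.lower()
                return any(word in text for word in BLOCKED_WORDS)

            # False positive (the Scunthorpe problem): an innocent place name is
            # blocked because it happens to contain a banned string.
            print(naive_filter("I grew up in Scunthorpe"))    # True -- wrongly blocked

            # False negative: a trivial respelling sails straight past the list.
            print(naive_filter("Cheap v1agra, order now!"))   # False -- wrongly allowed

        And chasing the respellings with ever-longer lists just recreates the false-positive problem at a larger scale.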

        • Anonymous Coward, 2 May 2020 @ 4:19am

          Re: Re: Re: At what point does moderation become publisher edito

          On that topic, the TV Tropes page has a pretty hilarious example of how a boy got around the censors while he was testing it.

          One of the developers of Toontown Online, wanting to get around this problem while at the same time allowing players to interact, suggested using a list of approved words and sentence fragments that a user could string together to form full sentences. This idea was shot down by one of the other developers who had tried the approach in another game. The 14-year old boy who was testing the software was able to, within a minute, construct the following sentence: "I want to stick my long-necked Giraffe up your fluffy white bunny".

          • PaulT (profile), 2 May 2020 @ 4:38am

            Re: Re: Re: Re: At what point does moderation become publisher e

            Yeah, as the saying goes "For every complex problem there is an answer that is clear, simple, and wrong". Euphemisms are deadly ground for this kind of thinking, and if the subject is sex or money the user will have all the incentives they need to bypass any such filter.

        • That One Guy (profile), 2 May 2020 @ 1:01pm

          Re: Re: Re: At what point does moderation become publisher edito

          Also, any attempt to react to the ways people try to get around the blocks would inevitably also block perfectly legitimate content, and correctly identifying context is very difficult even for human readers at times.

          An example that comes immediately to mind regarding the problems with blocks/filters, and one that has been covered on TD before, is the anti-vaxxer lunatics vs. real science. Since both of them are going to be using a lot of the same words and terminology, a platform is going to have a hell of a time creating a filter to block the plague cultists without also hitting legitimate science and/or people pointing out how dangerously wrong they are.
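
          As a hypothetical illustration (made-up keyword list and posts, not anything a real platform uses), the same kind of keyword matching flags the misinformation and the debunking alike, because keywords alone carry no context:

              # Hypothetical keyword filter aimed at anti-vax misinformation --
              # purely illustrative, not any real platform's rule set.
              FLAGGED_TERMS = ["vaccine", "autism", "microchip"]

              def flag_post(post: str) -> bool:
                  """Flag any post that mentions a suspect keyword."""
                  text = post.lower()
                  return any(term in text for term in FLAGGED_TERMS)

              print(flag_post("Vaccines cause autism, wake up!"))                  # True
              print(flag_post("No, vaccines do not cause autism -- here's why."))  # True: the debunking gets hit too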

      • This comment has been flagged by the community.
        Koby (profile), 3 May 2020 @ 7:01am

        Re: Re: At what point does moderation become publisher editorial?

        In an excellent example, yesterday, conservative political commentator Candace Owens was suspended from Twitter. Basically for tweeting that she believes the governor of Michigan should allow people to go back to work.

        https://thehill.com/blogs/blog-briefing-room/news/495814-candace-owens-twitter-account-suspended

        There was nothing to moderate. There was no obscene language that needed to be toned down. This was pure political censorship. Every time one of the tech corporations censors a benign tweet like this, it is the corporations themselves who take a chip out of the Section 230 wall because of their willingness to engage in abuse.

        • That One Guy (profile), 3 May 2020 @ 7:59am

          'My conservative views were censored!' 'Which?' 'Uhh...'

          There was nothing to moderate.

          Other than someone posting a wildly dangerous idea during a pandemic?

          There was no obscene language that needed to be toned down.

          Obscenity isn't the only thing that gets moderated, so that point is moot.

          This was pure political censorship.

          No, it bloody well wasn't, but have fun clutching that persecution complex if it makes you feel better rather than facing that they got the boot not because they were conservative but because they were a dangerous idiot.

          If your 'politics' involve incredibly irresponsible and dangerous ideas, maybe take a look at your politics before you go complaining that someone else was 'mean' to you for presenting them.

          Every time one of the tech corporations censors a benign tweet like this,

          If 'people should do something that is incredibly irresponsible and has the potential to get people killed' counts as benign in your mind I'd hate to see what counts as bad.

          it is the corporations themselves who take a chip out of the Section 230 wall because of their willingness to engage in abuse.

          Actions have consequences, if you don't like a private platform's rules for what they do and do not allow then make your own gorram platform and stop whining that they won't let you say/do whatever you want on theirs.

          It's not 'abuse' for a platform to have rules that they enforce and/or choose who they let post simply because you don't like those rules.

          • PaulT (profile), 3 May 2020 @ 8:33am

            Re: 'My conservative views were censored!' 'Which?' 'Uhh...'

            "Actions have consequences"

            The world would be a much better place if these people accepted that, in all sorts of ways.

            "Obscenity isn't the only thing that gets moderated, so that point it moot."

            Yes, here's Twitter's current stated policy on the matter:

            Under this guidance, we will require people to remove tweets that include
            ...
            Specific claims around COVID-19 information that intends to manipulate people into certain behavior for the gain of a third party with a call to action within the claim
            ...
            Specific and unverified claims that incite people to action and cause widespread panic, social unrest or large-scale disorder

            The whined-about tweet clearly fits on multiple levels, and a lot of these loons will be making multiple violations of the ToS, hence the banhammer.

            Why are "leftists" not being banned? Possibly because they're not telling people to gather in large contagion vectors in order to specifically overwhelm the ability for the government to cope with it, and in Ms. Dumbass's tweet.

            • That One Guy (profile), 3 May 2020 @ 9:33am

              Re: Re: 'My conservative views were censored!' 'Which?' 'Uhh...'

              The world would be a much better place if these people accepted that, in all sorts of ways.

              If only there was some political party that touted itself as the 'party of personal responsibility'...

          • Anonymous Coward, 4 May 2020 @ 9:41am

            Re: 'My conservative views were censored!' 'Which?' 'Uhh...'

            "If your 'politics' involve incredibly irresponsible and dangerous ideas, maybe take a look at your politics before you go complaining that someone else was 'mean' to you for presenting them"

            Perhaps if the person you are following suggests random drugs they have heard about on right-wing conspiracy shows without any testing or vetting, or suggests things that are lethal, like drinking sanitizer or bleach, or perhaps putting yourself under or injecting "bright lights"... as a treatment option, you may want to re-evaluate your leadership candidate...

            Just the good ol' boys
            Never meanin' no harm
            Beats all you never saw
            Been in trouble with the law
            Since the day they was born
            Straightenin' the curves
            Flattenin' the coronavirus - hills
            Someday the Iranians might get 'em
            But the law never will
            Makin' his way
            The only way he knows how
            That's just a little bit more
            Than the CIA will allow
            Makin' their way
            The only way they know how yeah
            That's…

        • PaulT (profile), 3 May 2020 @ 8:24am

          Re: Re: Re: At what point does moderation become publisher edito

          "Basically for tweeting that she believes the governor of Michigan should allow people to go back to work."

          Yes, she was inciting people to take action against the current advice of the government, which is not only irresponsible and stands to get a lot of people needlessly infected or worse, but is DIRECTLY against Twitter's stated policy on the matter.

          Don't like the rules? Go somewhere else or stop breaking them.

          "This was pure political censorship."

          So? They don't owe you a platform no matter what politics you have.

          • That One Guy (profile), 3 May 2020 @ 9:40am

            Re: Re: Re: Re: At what point does moderation become publisher e

            So? They don't owe you a platform no matter what politics you have.

            While I agree that they don't owe any political party a platform, I feel it's a mistake to grant them even that much for the sake of the argument. Unless they want to argue that gross irresponsibility is a conservative position, it most certainly wasn't 'political censorship'.

            They were shown the door not because of their political leanings but because they were posting incredibly foolish and dangerous tweets in a time when that could very easily get people killed.

            • PaulT (profile), 3 May 2020 @ 10:34am

              Re: Re: Re: Re: Re: At what point does moderation become publish

              "Unless they want to argue that gross irresponsibility is a conservative position it most certainly wasn't 'political censorship'."

              Well, given recent evidence, they might not be far off?

              Either way, as usual for this type, he misrepresented the original tweet. It didn't read as "she believes the governor of Michigan should allow people to go back to work"; it read as "if enough of us ignore the orders at once, the government loses the ability to control the pandemic, so let's do that!".

              That may or may not be what she meant, but it's what she said, and it shouldn't take a genius to work out why this is unacceptable even if it weren't directly against Twitter's stated rules.

        • Stephen T. Stone (profile), 3 May 2020 @ 8:48am

          This was pure political censorship.

          Which Twitter employee told her “you can’t say that anywhere” and threatened to have her sued or arrested if she tried?

  • David, 2 May 2020 @ 12:55am

    Come again?

    Of course platforms have tons of incentive to moderate: much of it social pressure, but also the fact that if they're just filled with garbage they'll lose users (and advertisers).

    Uh, even before the Internet, the highest-circulation newspapers were not the ones known for quality journalism.

    Readers crave garbage.

    • TFG, 2 May 2020 @ 8:10am

      Re: Come again?

      I'm not sure the analogy is sound. For one, tabloids are a purchased product, while Twitter is free to view.

      Secondly, if you look at the actual social media platforms and compare moderated vs. unmoderated ... Twitter and Facebook have the largest imprint. Reddit has various moderation policies depending on the specific subreddit, but moderation does, in fact, happen - and those subreddits where it's superlax don't seem to have a large population.

      Meanwhile, 4chan, known for being a cesspool, is certainly famous but doesn't have nearly the same wide-spread cultural saturation. 8chan is small potatoes next to 4chan. Nobody really cares about Gab.

      Based on this, it seems that, yes, when the place gets filled with garbage, most people leave.

      • Anonymous Coward, 2 May 2020 @ 1:42pm

        Re: Re: Come again?

        Ah - that may be, however the place being full of garbage is not necessarily the direct result of less moderation.

        The number of pirates has decreased, must be due to global warming.

        • Anonymous Coward, 2 May 2020 @ 2:50pm

          Re: Re: Re: Come again?

          however the place being full of garbage is not necessarily the direct result of less moderation

          True, it is sometimes the result of moderation, created for and by those who get thrown off of more mainstream platforms.

        • PaulT (profile), 2 May 2020 @ 10:22pm

          Re: Re: Re: Come again?

          "the place being full of garbage is not necessarily the direct result of less moderation."

          Not on its own, but these places are usually filled up with the garbage that comes from other places. 8chan was originally created by people who found 4chan too restrictive. Gab's main user base is people who got kicked off Twitter. In both cases, it's the relative lack of moderation that attracted them.

          It may be possible that you can create a site with light moderation yet still be a useful community, or a heavily moderated place with a bad one, but the general trend is clearly that moderated sites at the moment are demonstrably better than those which are not moderated.

  • Anonymous Coward, 3 May 2020 @ 12:59pm

    Alternatively, they could moderate very thoroughly, and pull down lots of information. But that might actually include good and useful information.

    I would replace "thoroughly" with "extensively." Thoroughness implies a quality of work. In fact, I would argue that thoroughness is precisely the thing which cannot be done at scale.

    Apologies if this seems merely nit-picky.


