Americans Disagree On What Content Should Be Moderated, But They All Agree Social Media Companies Suck At Moderation

from the truly-a-land-of-contrasts dept

No one agrees on how tech companies should perform the impossible job of moderating their platforms. But almost everyone agrees platforms are doing it wrong.

Conservatives complain too many of their fellow social media users are being silenced by left-leaning tech companies. Those on the left seem to feel not enough is being done to silence people engaged in hate speech or other abusive actions. Both sides agree there's too much misinformation being spread, although they disagree greatly about which news sources are the "fakest."

Since it's impossible to please everyone, almost everyone is going to have complaints about moderation efforts. That's the key finding of a recently released poll [PDF] from Gallup and the Knight Foundation. When it comes to moderation, just about everyone agrees social media companies are handling it badly.

Americans do not trust social media companies much (44%) or at all (40%) to make the right decisions about what content should or should not be allowed on online platforms.

That's pretty much everybody. The perception that companies aren't making the right decisions flows directly from the disagreement between social media users on what content platforms should focus on moderating.

The level of concern about online foreign interference in U.S. elections varies sharply by political party. Whereas 80% of Democrats are very concerned about this issue, just 23% of Republicans are.

Similarly, when it comes to concerns about hate speech and other abusive online behaviors, Democrats are more likely to say they are very concerned about the issue (76%), compared to Republicans (38%) and independents (50%).

A smaller, although notable gap also can be seen in views on misinformation. Americans who identify as Democrats (84%) or independents (71%) are more likely than Republicans (65%) to say the spread of misinformation is very concerning.

Content moderation has become a partisan issue. The only bipartisan aspect of this is that both sides agree platforms are handling moderation poorly and that they wield too much power. But a majority of both parties agree allowing platforms to handle moderation without government interference is the least bad option.

Even though Americans distrust internet and technology companies to make the right decisions around what content appears on their sites, given a choice, they would still prefer the companies (55%) rather than the government (44%) make those decisions.

This is despite the fact that almost everyone agrees other users are getting away with stuff.

Most Americans do not think major internet companies apply the same standards in the same way to all people who use their apps and websites (78%). This includes 89% of Republicans, 73% of Democrats and 76% of independents.

So, what do we have? A fractured social media landscape mostly divided down party lines. Adding the government to this mix would only increase the perception of bias, if not actually insert bias where none may be currently present.

Social media companies are being asked to moderate millions of pieces of content every day. This would be nearly impossible even if moderation dealt only with clearly illegal content (like child porn) and obvious violations of terms of service. But they're also asked to determine what is hate speech, to target nebulous concepts like "terrorist content," to combat misinformation, and to deal with everything else users report as perceived violations.

This report shows a lot of perceived bias by tech companies is rooted in users' own political biases: much of what users claim tech companies are doing wrong depends on their party alignment. Fortunately, both sides agree the government would probably handle this even worse, though only by a slim majority of those polled.

The biases seen in this poll carry over to moderators themselves, who will never be free of their own biases. This includes the algorithms used to handle most of the moderation load. But bias at the moderation level isn't enough to shift an entire platform towards one side or the other. There's simply too much content in play at any given time to allow moderators to create an echo chamber.

Moderation efforts will never please everyone. What's being done now pleases almost no one. And that's the way it's going to be in perpetuity. And the more the government leans on tech companies to do "more" in response to whatever is the latest hot button topic, the less effectively it will be done. Moderation resources are finite. User-generated content isn't.


Filed Under: bias, content moderation, content moderation at scale, social media


Reader Comments



  • Anonymous Coward, 9 Jul 2020 @ 12:42pm

    Questionable conclusion

    What's being done now pleases almost no one. And that's the way it's going to be in perpetuity. [...] Moderation resources are finite. User-generated content isn't.

    It seems like you're jumping to this conclusion. User-generated content and moderation resources are both finite, because both the number of humans and their lifespans are finite.

    If moderating a post took half the time of writing it, then having a third of humanity's waking hours spent moderating would be more than sufficient. It's difficult to see how a platform could pay enough moderators to do that. But, in aggregate, the resources are theoretically there: people spend more time reading content than creating it. Some form of distributed moderation could therefore solve the labor problem.
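
    To put that arithmetic in concrete terms, here's a back-of-the-envelope version in Python. The halving ratio is the premise above; everything else is an arbitrary unit:

    ```python
    # Illustrative arithmetic only; the 0.5 ratio is the comment's premise.
    write_time = 1.0                   # time to write one post (arbitrary unit)
    moderate_time = 0.5 * write_time   # premise: moderating = half the writing time

    # Share of the total labor (writing + moderating) that moderation consumes:
    share = moderate_time / (write_time + moderate_time)
    print(f"moderation share of total effort: {share:.0%}")  # -> 33%
    ```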

    That doesn't necessarily solve the real problem, already stated in the story: that a single moderation decision can never please everyone. And here, I have to fundamentally disagree with your statement that "what's being done now [… is] the way it's going to be in perpetuity". If everyone thinks the status quo is problematic, it's extremely pessimistic to assume we'll never improve on it. We used to have people writing individual blogs, with no centralized moderation authority. Before that, there were all kinds of crazy magazines and newsletters on paper. Why do we suddenly need a central authority? If people want moderation, what, beyond sheer technical difficulty, stands in the way of one or more moderation systems based on reader preferences?

    (None of this means these are good ideas. Filter-bubble and "tyranny of the majority" effects are valid concerns.)

    I feel we're letting current technological architecture limit our imagination about what is possible. We've got a few central systems such as Facebook and Twitter that accept content from millions of users, store it, and distribute it. There are advantages to this model, but it's not the only possible one. Remember common carriers?

    • Anonymous Coward, 9 Jul 2020 @ 2:01pm

      Re: Questionable conclusion

      If moderating a post took half the time of writing it, then having a third of humanity's waking hours spent moderating would be more than sufficient.

      Sure, and a third of those people's time, to monitor the moderating, and a third of those people's time to validate the monitoring.... Zeno's moderating paradox.

      If everyone thinks the status quo is problematic, it's extremely pessimistic to assume we'll never improve on it.

      Do you account for mutually exclusive moderation values? For instance, "All women in photos must be veiled" (Muslim), "Women in photos must never be veiled" (surveillance).

      You might also look up Gödel’s Incompleteness Theorems. In context, any formal moderation system cannot prove itself completely consistent.

      • Anonymous Coward, 9 Jul 2020 @ 3:55pm

        Re: Re: Questionable conclusion

        Sure, and a third of those people's time, to monitor the moderating, and a third of those people's time to validate the monitoring....

        But it still works, right? If they're doing busy-work like that, they're not creating content. Until someone gets the bright idea to live-stream the moderation process itself...

        Do you account for mutually exclusive moderation values? For instance, "All women in photos must be veiled" (Muslim), "Women in photos must never be veiled" (surveillance).

        Sure. One who's offended by women without veils might choose to have their incoming data-stream moderated on that basis. It's not all that far off from halal food certifications, except that every reader could in theory be a moderator too—with results combined somehow, in a way that's useful to those who care but doesn't bother the rest of us.
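
        As a sketch of what such reader-preference moderation might look like (everything here is hypothetical: the labels, the names, all of it):

        ```python
        from dataclasses import dataclass, field

        # Hypothetical sketch: posts carry labels, readers pick labels to hide.
        @dataclass
        class Post:
            text: str
            labels: set = field(default_factory=set)  # applied by moderators/readers

        def visible(post, hidden_labels):
            """Show a post unless it carries a label the reader opted out of."""
            return not (post.labels & hidden_labels)

        feed = [Post("cat video"), Post("portrait", labels={"unveiled-face"})]
        strict = {"unveiled-face"}  # a reader who opts out of unveiled faces
        print([p.text for p in feed if visible(p, strict)])  # ['cat video']
        print([p.text for p in feed if visible(p, set())])   # both posts
        ```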

        • Ben (profile), 10 Jul 2020 @ 2:11am

          Re: Re: Re: Questionable conclusion

          The logical conclusion to your position would appear to be that content moderation needs to be done on a consumer-by-consumer basis, applying their own rules as to what's acceptable and what's not.
          Sounds like we should stop moderating at all and leave it to the consumer to decide for themselves what they want to see (by choice of site, perhaps; a bit too much of a libertarian position for me, I must say), or come up with some super-intelligent AI that knows all your personal rules and can judge the content 100% accurately before it reaches you.
          And we all know how much we're all looking forward to being completely nannied by the Guugle/MS-Beng-designed AI filters that won't take any 'commercial sponsorships' into account in their filtering.

          • Anonymous Coward, 10 Jul 2020 @ 4:22am

            Re: Re: Re: Re: Questionable conclusion

            Sounds like we should stop moderating at all and leave it to the consumer to decide for themselves what they want to see (by choice of site, perhaps; a bit too much of a libertarian position for me, I must say), or come up with some super-intelligent AI that knows all your personal rules and can judge the content 100% accurately before it reaches you.

            I don't see it as quite so different from the status quo. More of this is done by AI than one might think, and it's not super-intelligent. People flag rule-violations and the moderators review them. Checks like "unveiled face" can be mostly automated already, and "naughty words" could be done decades ago. Anyone who's used like/dislike buttons knows the system can't "100% accurately" figure out what they want, but as this story shows, human moderators can't either.
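
            For what it's worth, the "naughty words" kind of check really is decades-old technology. A minimal sketch, with an invented word list:

            ```python
            import re

            # Decades-old technique: a word-list filter as a cheap first pass.
            # The list is illustrative, not any platform's actual policy.
            BLOCKED = ["badword", "slur"]
            pattern = re.compile(r"\b(?:" + "|".join(map(re.escape, BLOCKED)) + r")\b",
                                 re.IGNORECASE)

            def needs_review(text):
                """Flag text containing a blocked word for human review."""
                return bool(pattern.search(text))

            print(needs_review("this contains a SLUR"))  # True
            print(needs_review("perfectly innocent"))    # False
            ```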

            And we all know how much we're all looking forward to being completely nannied by the Guugle/MS-Beng-designed AI filters that won't take any 'commercial sponsorships' into account in their filtering.

            They already do or don't, and I don't see why this would change their position. If anything, wouldn't the ability to switch between MS and Google moderation in one click encourage them to compete to do this better? Right now, Youtube users who disagree with Google's moderation can't do shit unless they want to move to an entirely new platform (which doesn't have the same videos).

            This idea, by the way, is little more than Mike's "protocols, not platforms" concept. Historical antecedents include newsletters, Fidonet, Usenet.

            • PaulT (profile), 10 Jul 2020 @ 5:30am

              Re: Re: Re: Re: Re: Questionable conclusion

              "I don't see it as quite so different from the status quo. More of this is done by AI than one might think, and it's not super-intelligent."

              More accurately - at the scale we're talking about with larger sites, it's literally impossible to do moderation without it.

              "People flag rule-violations and the moderators review it."

              Again in terms of larger sites, it's more accurate to say that they automate the response to flags, then they get reviewed if someone complains.

              "human moderators can't either"

              It's impossible to "accurately" moderate something on an international level. Cultural norms vary so widely that someone in country A might find the content perfectly benign, but country B's society finds it very offensive. Even within countries it's impossible to reach a real consensus, especially when religion gets involved. You could get different responses depending on where the moderators who pick up the complaint are located, or even between team members in the same country. Most of this stuff is subjective, after all.

              The ultimate answer might be to have individuals take responsibility for filtering, but in a wider context it's preferable for, say, YouTube to block all white supremacist propaganda rather than force every user to manually flag every Nazi account that pops up in their feed.

              "Right now, Youtube users who disagree with Google's moderation can't do shit unless they want to move to an entirely new platform (which doesn't have the same videos)."

              The ultimate issue here is that rather than build a community that allows participants to choose which site they prefer to use to participate, the content providers are saying "if you want to view anything we do, you have to use the site we chose". It takes a lot more work, but if the content providers stopped depending on a single platform and let users choose the best fit for their own needs, we'd all be better off. But so long as, say, YouTube or Twitch are the only places where the content providers choose to offer their content, people won't change their viewing habits and competitors won't get a real look in.

              I think what should happen is for some kind of aggregation service to be available where creators publish to multiple platforms, in the same way as I believe services like CD Baby operate - one venue for publishing, but the end user gets to choose where they get the music from. I'm not sure how feasible that is, but if a YouTuber used something like that to publish to 30 video sites and the viewer chose what to watch, rather than publishing only to YouTube and having a fit when YouTube says they violated their policies, the overall landscape would be much better.

              • Anonymous Coward, 10 Jul 2020 @ 7:07am

                Re: Re: Re: Re: Re: Re: Questionable conclusion

                rather than force every user to manually flag every Nazi account

                There's no reason "every user" would need to do it. If enough people mark it "Nazi" and there's no significant disagreement, the system can conclude they're a Nazi. People who primarily follow and repost "Nazi" content might be marked as such, without any manual flagging.
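
                A minimal sketch of that kind of threshold-plus-disagreement rule, with invented thresholds:

                ```python
                # Invented thresholds; the mechanism is the point, not the numbers.
                def community_label(flags, disputes, min_flags=10, max_dispute=0.2):
                    """Apply a label once enough readers flag it and dissent stays low."""
                    if flags < min_flags:
                        return False  # not enough signal yet
                    return disputes / (flags + disputes) <= max_dispute

                print(community_label(25, 2))   # True: broad agreement
                print(community_label(25, 20))  # False: significant disagreement
                print(community_label(3, 0))    # False: too few flags
                ```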

                The ultimate answer might be to have individuals take responsibility for filtering, but in a wider context it's preferable for, say, YouTube to block all white supremacist propaganda rather than force every user to manually flag every Nazi account that pops up in their feed.

                That's only really possible when centralized services distribute all the content. If we're in a protocol-not-platform world, people would be able to choose to view that stuff. Even with platforms, that option could be useful for research—pushing this shit underground won't stop it, and it's something that should be monitored and understood.

                Regardless, it would be entirely possible for platforms/apps/aggregators to have a default set of "reasonable" filters.

                • PaulT (profile), 11 Jul 2020 @ 1:24am

                  Re: Re: Re: Re: Re: Re: Re: Questionable conclusion

                  "There's no reason "every user" would need to do it. If enough people mark it "Nazi" and there's no significant disagreement, the system can conclude they're a Nazi. People who primary follow and repost "Nazi" content might be marked as such, without any manual flagging."

                  So, you're saying you support AI and automatic blocking, but only if you force an unnecessary first step where it's programmed by a random group of people first? Weird.

                  "Regardless, it would be entirely possible for platforms/apps/aggregators to have a default set of "reasonable" filters."

                  No, it's not, because you will never get 2 sets of people to totally agree on what "reasonable" means. It's best to have the non-Nazis have YouTube and tell the Nazis to fuck off to Gab and Stormfront.

                • Anonymous Coward, 11 Jul 2020 @ 4:51am

                  Re: Re: Re: Re: Re: Re: Re: Questionable conclusion

                  There's no reason "every user" would need to do it.

                  And just how many users have to flag a video of some terrorist shooting prisoners before it gets banned? Do you now see the problem with leaving moderation up to users, some of whom are children?

                  • Scary Devil Monastery (profile), 16 Jul 2020 @ 2:06am

                    Re: Re: Re: Re: Re: Re: Re: Re: Questionable conclusion

                    "And just how many users have to flag a video of some terrorist shooting prisoners before it gets banned?"

                    You know, not too long ago a picture such as that of an Asian child, naked and obviously starved, fleeing a napalm bombardment was the subject of Pulitzer prizes - even if other children could see it and hopefully grow up with the knowledge that the world needs to be made a better place.

                    Today the trend appears to be that we'd prefer that we and our children not know the terrors and fears that may be our own future if we choose to remain unaware.

                    I see no problem at all if children become aware that the world outside of the safety of the first world preserve holds dangers and great evil. In fact I think there is greater danger in children who grow up in privilege and entitlement only to then at some point learn the VERY hard way what is actually out there.

                    • PaulT (profile), 16 Jul 2020 @ 2:27am

                      Re: Re: Re: Re: Re: Re: Re: Re: Re: Questionable conclusion

                      The problem here is that you're making false comparisons between a carefully made editing decision to publish one of thousands of photos taken that day in order to bolster a real act of journalism, even if a child might see the photo in context, and whether or not random people can post snuff movies with no context (often for the purposes of recruitment) on any site a child (or adult) might visit.

                      Remember, no reader was exposed to that photo until editors had approved its publication, and nobody outside of the editor and his staff ever saw it or the other photos taken until that decision was made. Since this is not possible online, what you're demanding is that hundreds, if not thousands, of people are exposed to the footage before any decision is made. For every video ever uploaded. That's not good.

            • Anonymous Coward, 10 Jul 2020 @ 7:22am

              Re: Re: Re: Re: Re: Questionable conclusion

              Youtube users who disagree with Google's moderation can't do shit unless they want to move to an entirely new platform (which doesn't have the same videos).

              A user can get their videos from more than one platform, and a content creator can post videos to more than one platform, and even post a video on the most popular platform to notify people of a video on another platform.

              Nobody is limited to using or posting on only one platform, and nothing stops them from announcing on many platforms that a new video is available on 'x'.

              • Anonymous Coward, 10 Jul 2020 @ 8:25am

                Re: Re: Re: Re: Re: Re: Questionable conclusion

                A user can get their videos from more than one platform, and a content creator can post videos to more than one platform

                Regardless of whether they can, video creators who move off of Youtube are going to lose viewers; and people who avoid Youtube when watching stuff are going to miss a bunch.

                and even post a video on the most popular platform to notify people of a video on another platform.

                You think Youtube won't have a problem with people advertising videos that violate Youtube policies? I suspect they might.

                • Anonymous Coward, 10 Jul 2020 @ 9:44am

                  Re: Re: Re: Re: Re: Re: Re: Questionable conclusion

                  and people who avoid Youtube when watching stuff are going to miss a bunch.

                  That is their problem, and not YouTube's or yours.

                  You think Youtube won't have a problem with people advertising videos that violate

                  All that is needed is a note that a new video has been released on 'x', with no mention of content. They can link to a home page, rather than the video, as well.

                  If the external video or channel is brought to YouTube's attention by people complaining (the algorithms won't see it), and the person or channel is thrown off YouTube, well, that is a consequence of expressing really objectionable opinions. Free speech can have consequences.

                • PaulT (profile), 11 Jul 2020 @ 1:31am

                  Re: Re: Re: Re: Re: Re: Re: Questionable conclusion

                  "Regardless of whether they can, video creators who move off of Youtube are going to lose viewers"

                  You may notice that he suggested posting to multiple platforms at once, not removing themselves from YouTube.

                  "You think Youtube won't have a problem with people advertising videos that violate Youtube policies?"

                  Several channels I watch host recorded streams of their Twitch live shows on YouTube without issue.

    • Anonymous Coward, 10 Jul 2020 @ 9:57am

      Re: Questionable conclusion

      then having a third of humanity's waking hours spent moderating would be more than sufficient.

      Only if you can get everyone to agree on what should be moderated, otherwise you have started a flame war about moderation.

    • nasch (profile), 14 Jul 2020 @ 2:01pm

      Re: Questionable conclusion

      If moderating a post took half the time of writing it, then having a third of humanity's waking hours spent moderating would be more than sufficient. It's difficult to see how a platform could pay enough moderators to do that. But, in aggregate, the resources are theoretically there:

      Except that, fortunately, the worldwide unemployment rate is not 33%. There are not 2.5 billion employable people available to do this work, even if there were some way to pay them.

      • PaulT (profile), 14 Jul 2020 @ 11:04pm

        Re: Re: Questionable conclusion

        Plus, even if you could employ that many people, you could not get such a huge number of people from such different ethnic, social and religious backgrounds to do consistent work on decisions that are fundamentally subjective.

        You'd literally end up with a less efficient and accurate system than the current automated model.

        • nasch (profile), 15 Jul 2020 @ 7:32am

          Re: Re: Re: Questionable conclusion

          Right, as evidenced by that experiment that asked something like a dozen people (all English-speaking Americans, if I'm not mistaken) to make some moderation decisions, and even that small a group couldn't agree.

  • Koby (profile), 9 Jul 2020 @ 1:09pm (flagged by the community)

    Happier Times

    I suggest that everyone was more content with the moderation levels of things available on the internet prior to social media, therefore we ought to outlaw social media. Eliminate the digital poison.

    • Anonymous Coward, 9 Jul 2020 @ 1:25pm

      Re: Happier Times

      As there were far fewer people on the Internet before social media, there were fewer people to get upset about what was on the Internet. The problem is not so much social media, as it is impossible to get everyone to agree to the same moderation rules.

    • Anonymous Coward, 9 Jul 2020 @ 1:32pm

      Re: Happier Times

      Second point: Usenet has its moderation problems, and troll problems, so your comment is not based on reality.

      • PaulT (profile), 10 Jul 2020 @ 12:29am

        Re: Re: Happier Times

        "Usenet has its moderation problems, and troll problems, so your comment is not based on reality."

        This also needs to be stressed. Koby seems to be basing his opinions on a rose-coloured fantasy of what used to happen.

        It's also worth noting that the main reason why Usenet is no longer used by the vast majority of people is that once a certain number of people started using it, it became a cesspool of abuse and spam. He would do well to look at the state of Usenet and then consider that this is the state he's demanding everything else descend into.

        • Scary Devil Monastery (profile), 16 Jul 2020 @ 2:27am

          Re: Re: Re: Happier Times

          "Koby seems to be basing his opinions on a rose-coloured fantasy of what used to happen."

          More like his opinions on what used to not happen. The one and only difference between then and now is that a lot more people are using the internet, and that what was once an open sewer on Usenet is now easily accessible to everyone.
          A lot of Usenet root folders contained little other than outright trolling mixed with misogyny, naked racism, and all the stuff which has since become frowned upon, moderated by default, or outright illegal.

    • Rocky, 9 Jul 2020 @ 1:34pm

      Re: Happier Times

      I'm certain that we can find the same reasoning regarding human and technological progress when we peruse history; all the people espousing that particular viewpoint were people who hated change and thought everything was hunky dory.

      "What?! Women voting?! How preposterous!! They should bear children, cook food and in general be obedient wives."
      "What?! Translating the Bible to English? Blasphemy I say! A commoner has no understanding of theology and should get the word of god from a man of the cloth!"
      "What?! Television won't be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night."
      "What?! There is no reason anyone would want a computer in their home."

    • Anonymous Coward, 9 Jul 2020 @ 3:03pm

      Re: Happier Times

      outlaw social media

      That's easy enough. You just need a technical definition of "social media" that is narrow enough that it would apply to sites like Facebook, Twitter, etc., but not blogging platforms like Wordpress, Blogger, or Medium. Or do you want to get rid of blogs, too? In which case, you'd need a definition of "social media" that would apply to both Facebook and Medium but not a magazine like Slate or Breitbart. Or, if you're fine with outlawing those, you need one that would apply to Facebook and Breitbart, but not CNN or Fox News. I'm not going to go any further along that exercise, because freedom of the press is pretty firmly established in the US.

      So, let me know what your definition is for "social media," and which of the above categories it is supposed to apply to, and I'll start taking your "outlaw social media" idea seriously if your definition is workable.

    • That One Guy (profile), 9 Jul 2020 @ 11:08pm

      Said the person on an open platform...

      Bah, why half ass it? People are much happier when they don't have to do any of that pesky 'thinking' for themselves and the only content they deal with has been carefully curated for their own good to ensure they only think the right things, therefore non-authorized content should be eliminated and only the correct people should be allowed to share their ideas and content.

    • PaulT (profile), 10 Jul 2020 @ 12:27am

      Re: Happier Times

      "I suggest that everyone was more content with the moderation levels of things available on the internet prior to social media"

      Yeah, it was nicer when there were a lot fewer users online and the racists, bigots and scam artists were relegated to their own little caves rather than attempting to infect everywhere else.

      But, the answer to this is not to prevent everyone from communicating.

      "Eliminate the digital poison."

      We are. You just whine when you're informed that the people you hang out with are the poisonous ones.

  • avideogameplayer, 9 Jul 2020 @ 1:36pm

    This is what happens when everyone has a soapbox. You can't agree on anything.

    Even if the 1st Amendment did apply to social media, it'd be a clusterfuck for everyone involved.

    Damned if you do...

  • Anonymous Coward, 10 Jul 2020 @ 3:48am

    Help ... I'm being repressed!!!

  • bobob, 10 Jul 2020 @ 11:28am

    Moderation resources may be finite, but like every other product, if you cannot target an audience you can afford to serve, your product is not viable. This bullshit about moderation at scale being impossible is just that - bullshit, because it's the scale that's the problem. Throwing up your hands and declaring moderation is just impossible is laying the problem in the wrong place. You can't be all things to all people unless you can afford to appease everyone. Evidently, platforms like Twitter and Facebook that try to be all things to all people would be unable to remain viable if they had to meet the expectations of all of the customers they want to capture. Better to have lots of choices of cars that attract different types of people than a world full of only Yugos that annoy everyone.

    • Anonymous Coward, 10 Jul 2020 @ 11:37am

      Re:

      because it's the scale that's the problem.

      Bullshit, because to reduce the size of the moderation problem, you have to kick most people offline. Spreading the posts over many more companies does not solve the moderation problem in general, as the same number of posts need to be moderated.

      Getting agreement on moderation is the same problem as getting agreement on politics; it is an endless source of argument.

      • bobob, 10 Jul 2020 @ 11:58am

        Re: Re:

        Bullshit. Reducing single large platforms to many small platforms does solve the moderation problem. It reduces misinformation by subjecting such posts to many different sets of posting rules, which are much harder to circumvent than the rules of a single platform.

        • Anonymous Coward, 10 Jul 2020 @ 12:19pm

          Re: Re: Re:

          Bullshit. Reducing single large platforms to many small platforms does solve the moderation problem.

          Assuming the same number of posts, spreading them over many small platforms does not help, because most of those platforms will not have enough income to afford the servers needed for automated moderation. Nor would they be able to afford manual moderation.

          The moderation problem is not due to the size of the platform, but rather the number of people posting online.

          Also many small platforms do not solve the misinformation problem, unless you can get them all to moderate to the same standard. Visit Gab, or BitChute to see what is possible with small platforms.

  • Anonymous Coward, 11 Jul 2020 @ 7:15am

    "I don't trust you to do this job right, but I want you do it anyway."

    • That One Guy (profile), 11 Jul 2020 @ 4:51pm

      Re:

      I think it's more along the lines of 'I don't trust you to do the job right, but I trust the alternatives even less so you'll have to do.'


