The Impossibility Of Content Moderation Extends To The People Tasked With Doing Content Moderation

from the maybe-we-should-stop-demanding-more-be-done dept

For years now, we've been writing about the general impossibility of moderating content at scale on the internet. And yet, lots of people keep demanding that various internet platforms "do more." Often those demands to do more come from politicians and regulators who are threatening much stricter regulations or even fines if companies fail to wave a magic wand and make "bad" content disappear. The big companies have felt compelled to staff up to show the world that they're taking this issue seriously. It's not difficult to find the headlines: "Facebook pledges to double its 10,000-person safety and security staff" and "Google to hire thousands of moderators after outcry over YouTube abuse videos."

Most of the demands for more content moderation come from people who claim to be well-meaning, hoping to protect innocent viewers (often "think of the children!") from awful, awful content. But, of course, it also means making these thousands of employees continuously look at highly questionable, offensive, horrific or incomprehensible content for hours on end. Over the last few years, there has been quite reasonable and growing concern about the lives of all of those content moderators. Last fall, I briefly mentioned a wonderful documentary, called The Cleaners, focused on a bunch of Facebook's contract content moderators working out of the Philippines. The film is quite powerful in showing not just how impossible a job content moderation can be, but the human impact on the individuals who do it.

Of course, there have been lots of other people raising this issue in the past as well, including articles in Inc. and Wired and Gizmodo among other places. And these are not new issues. Those last two articles are from 2014. Academics have been exploring this issue as well, led by Professor Sarah Roberts at UCLA (who even posted a piece on this issue here at Techdirt). Last year, there was another paper at Harvard by Andrew Arsht and Daniel Etcovitch on the Human Cost of Online Content Moderation. In short, none of this is a new issue.

That said, it's still somewhat shocking to read through a big report by Casey Newton at The Verge about the "secret lives" of Facebook content moderators. Some of the stories are pretty upsetting.

The moderators told me it’s a place where the conspiracy videos and memes that they see each day gradually lead them to embrace fringe views. One auditor walks the floor promoting the idea that the Earth is flat. A former employee told me he has begun to question certain aspects of the Holocaust. Another former employee, who told me he has mapped every escape route out of his house and sleeps with a gun at his side, said: “I no longer believe 9/11 was a terrorist attack.”

There may be some reasonable questions about what kind of training is being done here -- and about hiring practices that might put people who are susceptible to the internet's garbage into a job reviewing it. But, still...

Part of the problem is that too many people are looking to the big internet companies -- mainly Google and Facebook -- to solve all the world's ills. There are a lot of crazy people out there who believe a lot of crazy things. Facebook and YouTube and a few other sites are often just a reflection of humanity. And humanity is often not pretty. But we should be a bit concerned when we're asking Facebook and Google to magically solve the problems of humanity that have plagued humans through eternity... and to do so just by hiring tens of thousands of low-wage workers to click through all the awful stuff.

And, of course, the very same day that Casey's article came out, Bloomberg reported that the thousands of moderators on Facebook's growing roster are increasingly upset about their working conditions, and Facebook's own employees are getting annoyed about it as well -- noting that for all of the company's claims about how "important" this is, it's weird that it's outsourcing content moderation to third parties... and then treating them poorly:

The company’s decision to outsource these operations has been a persistent concern for some full-time employees. After a group of content reviewers working at an Accenture facility in Austin, Texas complained in February about not being allowed to leave the building for breaks or answer personal phone calls at work, a wave of criticism broke out on internal messaging boards. “Why do we contract out work that’s obviously vital to the health of this company and the products we build,” wrote one Facebook employee.

Of course, it's not clear that hiring the content moderators directly would solve very much at all. As stated at the very top of this article: there is no easy solution to this, and every solution you think up has negative consequences. On that front, I recommend reading Matt Haughey's take on this, aptly titled Content moderation has no easy answers. And that's coming from someone who ran a very successful online community (MetaFilter) for years (for a related discussion, you can listen to the podcast I did last year with Josh Millard, who took over MetaFilter from Haughey a few years ago):

People often say to me that Twitter or Facebook should be more like MetaFilter, but there’s no way the numbers work out. We had 6 people combing through hundreds of reported postings each day. On a scale many orders of magnitude larger, you can’t employ enough moderators to make sure everything gets a check. You can work off just reported stuff and that cuts down your workload, but it’s still a deluge when you’re talking about millions of things per day. How many moderators could even work at Google? Ten thousand? A hundred thousand? A million?

YouTube itself presents a special problem with no easy solution. Every minute of every day, hundreds of hours of video are uploaded to the service. That’s physically impossible for humans to watch it even if you had thousands of content mods working for YT full time around the world.

Content moderation for smaller communities can work. At scale, however, it presents an impossible problem, and that's part of the reason why it's so frustrating to watch so many people -- especially politicians -- demanding that companies "do something" without recognizing that anything they do isn't going to work very well and is going to create other serious problems. Of course, it seems unlikely that they'll realize that, and instead will somehow insist that the problems of content moderation can also be blamed on the companies.

Again, as I've said elsewhere this week: until we recognize that these sites are reflecting humanity back at us, we're going to keep pushing bad solutions. But tech companies can't magically snap their fingers and make humanity fix itself. And demanding that they do so just shoves the problem down into some pretty dark places.


Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: content moderation, content moderators


Reader Comments

    Anonymous Coward, 28 Feb 2019 @ 9:56am

    But we should be a bit concerned when we're asking Facebook and Google to magically solve the problems of humanity that have plagued humans through eternity... and to do so just by hiring tens of thousands of low-wage workers to click through all the awful stuff.

    By my calculations, posted here, it would require well over 100,000 people to review YouTube content alone. Probably an equal number for Facebook and another massive payroll for Twitter. And those are just the big 3.

    This is a (non?) problem without a solution.

    Anonymous Coward, 28 Feb 2019 @ 10:09am

    Read that their solution would be to crowd source moderation.

    So instead of removing questionable content through their own failures they will facilitate more dead souls among the masses.

    Perhaps they should look to other solutions that don't include creating more casualties...

    My best opinion on the topic would be to shut down Facebook globally until regulations and moderation concerns are at least up to par with the problems this company creates.

    This isn't about free speech, this is about harm in moderation and viewing on a platform that doesn't have a clue how to handle harm.

      Anonymous Coward, 28 Feb 2019 @ 10:23am

      Re:

      And this is why we don't let you run anything, much less governments.

      This isn't about free speech

      No, it really is. Your solution is to infringe on a bunch of people's rights, just because you don't like it. Too bad, it doesn't work that way.

      this is about harm in moderation and viewing on a platform that doesn't have a clue how to handle harm.

      You obviously didn't read the article then, otherwise you would have seen that it's not about how to handle harm, it's that it's impossible to do that kind of moderation on a global scale. Even China, with all its firewalls and censorship techniques, still can't completely stop the stuff it doesn't want from getting through.

        Anonymous Coward, 28 Feb 2019 @ 10:36am

        Re: Re:

        re: it's that it's impossible to do that kind of moderation on a global scale.

        ... and thus Facebook can't do the job, there are not enough moderators to do the job, machine learning can't do the job - so my question: why let Facebook continue to operate?

        To me this ISN'T about free speech, it's about a public company that can't effectively manage the platform it created, that has no realistic solutions to the problems they facilitated (news feeds, push, content profiles, etc.) that cause REAL world harm which has led to DEATHS on more than one occasion, which has impacted elections through the company's inability or unwillingness to tackle the problems.

        I look at it akin to coal power, sure we get electricity but what about the issues of storing the coal ash (ask the Tennessee Valley Authority), is the platform worth the harm it causes and my answer is unequivocally NO.

          Stephen T. Stone (profile), 28 Feb 2019 @ 10:55am

          why let Facebook continue to operate?

          Because to give any government that power — the power to unilaterally and without due process shut down a platform for speech because “it got too big to moderate” — is to give that government a power that can and will be exploited for the personal comfort/gain of those who would wield that power.

            Anonymous Coward, 28 Feb 2019 @ 11:20am

            why let Facebook continue to operate? [was Re: ]

                      why let Facebook continue to operate?

            Because to give any government that power — the power to unilaterally and without due process…

            A coalition then. A coalition formed of, say, sixty or eighty or a hundred of the world's leading military powers and advanced industrial economies, all banding together against one puny corporation. That's not unilateral.

            As for ‘due process’, the question is always how much process is due in a particular situation? How much process is needed to declare war? Before a nation's government can shoot someone for the common good?

              Stephen T. Stone (profile), 28 Feb 2019 @ 11:40am

              A coalition formed of, say, sixty or eighty or a hundred of the world's leading military powers and advanced industrial economies, all banding together against one puny corporation. That's not unilateral.

              But it is still dangerous. If you let this coalition have the power to shut down Facebook for an arbitrary reason such as “it got too big”, what would stop them from using that power against the next platform it deems “too big”, such as YouTube or Twitter? What arbitrary standard of size or cultural influence or some other factor will it use to justify shutting down not-quite-as-big platforms after the biggest ones are all dead and buried? How far would this coalition need to go before you would consider it a “bad idea”?

              cpt kangarooski, 1 Mar 2019 @ 5:27am

              Re: why let Facebook continue to operate? [was Re: ]

              As for ‘due process’, the question is always how much process is due in a particular situation?

              That’s a question of procedural due process. There is also the separate issue of substantive due process; finding the line where governments are prohibited from acting because the rights of individuals and/or the people at large are more important than whatever the government is trying to do.

              So you must also ask why some government action is being done, who the government will or could harm if it takes this action, is it more important to protect the victim of government action or let the government act in this case, and what sorts of side effects or sloppy targeting exists such that either the government action will affect too many people (causing undue harm) or too few people (causing it to be pointless, and thus rendering harm done undue because of the futility of it) or sometimes both.

            Mason Wheeler (profile), 28 Feb 2019 @ 11:21am

            Re:

            We've all heard the controversial notion that "too big to fail is too big to exist." It seems like it should be a bit less controversial to state that too big to succeed is too big to exist.

              Anonymous Coward, 28 Feb 2019 @ 12:17pm

              Re: Re:

              Well, the thing is that if it really is too big to succeed, it is a self-solving problem.

          Thad (profile), 28 Feb 2019 @ 11:19am

          Re: Re: Re:

          To me this ISN'T about free speech

          It is, though. You having a wrong opinion doesn't make that opinion right. I can say "to me, black is white," but that doesn't make it so.

          Anonymous Coward, 28 Feb 2019 @ 12:52pm

          Re: Re: Re:

          and thus Facebook can't do the job, there are not enough moderators to do the job, machine learning can't do the job - so my question: why let Facebook continue to operate?

          Because there is no requirement that they be able to moderate content on their platform to such a degree that no one ever sees anything harmful or offensive. That's what it means to have freedom of speech. You get the good with the bad. You can't have one or the other and still have freedom of speech.

          Whether Facebook can do the job of successful content moderation is irrelevant. That's not what they are in the business of, that's not what their platform is for, and there's no legal or moral requirement for them to do it at all, much less successfully.

          To me this ISN'T about free speech

          You are entitled to your opinion, that doesn't make you right. This very much IS about free speech because you have a government, or a group of governments, telling a platform, and by extension users of that platform, what they can and cannot say on said platform. That is the literal definition of government censorship and the First Amendment says they can't do that. So it really doesn't matter what you think it's about, it is about free speech.

          it's about a public company that can't effectively manage the platform it created

          You do know what an open platform means right? It means the exact opposite of managing what users say and do not say on it. Besides that, there's nothing morally wrong or illegal about this.

          that has no realistic solutions to the problems they facilitated (news feeds, push, content profiles, etc.)

          In and of themselves these are technical tools and not problems.

          that cause REAL world harm which has led to DEATHS on more than one occasion, which has impacted elections

          You know what? So have people talking to each other in person, writing articles in old school newspapers, books, having telephone conversations, town hall meetings, hell any time one person is allowed to communicate with another person, or a group of people, it has the potential to cause real world harm and lead to deaths, and actually HAS on MANY occasions. Just look at history before the rise of the internet. How many genocides and mass instances of slavery were caused by people talking and spreading their ideas by word of mouth and print? All of them. What you're saying is the equivalent of saying that governments should tell the people of the world what they can and cannot say because their words can cause harm and death. The medium is irrelevant, and in this case, Facebook is just the medium.

          through the company's inability or unwillingness to tackle the problems

          Well, they are actually trying, it's just impossible to do at the scale you are talking about. Literally impossible for anyone. Not just Facebook, NO ONE can successfully do what you are wanting them to do. It's never been done in the history of the world, and when it's been tried, it's ended badly for everyone.

          I look at it akin to coal power, sure we get electricity but what about the issues of storing the coal ash

          False equivalence. One is about the rights of human beings, the other is about an inanimate object that has no rights.

          is the platform worth the harm it causes and my answer is unequivocally NO.

          Because coal can be scientifically proven to be harmful to the environment, and there are better ways to generate electricity. Therefore the cost-benefit analysis tips towards no.

          But that's not the case here. Here, the solution you propose is to take rights away from everybody (massive harm) because a platform that has revolutionized communication and allowed people all over the world to communicate at an unprecedented level (massive amount of good) allows people to be people, good and bad. Right, wrong, or indifferent, Facebook has a legal right to exist and run their business as they see fit within the law. Not moderating what people say on their platform is within that law. And that law is the First Amendment.

          Anonymous Coward, 1 Mar 2019 @ 3:58am

          Re: Re: Re:

          YouTube has acted: [More updates on our actions related to the safety of minors on YouTube](https://youtube-creators.googleblog.com/2019/02/more-updates-on-our-actions-related-to.html).

          If your video features minors, no comments will be allowed, except for a chosen few heavily moderated channels.

          So once again, the bad behaviour of a few people brings punishment down on everybody else under the guise of protecting minors.

      cpt kangarooski, 28 Feb 2019 @ 10:57am

      Re:

      Read that their solution would be to crowd source moderation.

      So instead of removing questionable content through their own failures they will facilitate more dead souls among the masses

      Oh, it’s so much worse than that.

      It will open the sites using that up to the risk of lawsuits for emotional distress and it will allow unvetted and malicious individuals to manipulate the filtering. (And they will try; think about the weirdos that make those creepy videos for kids on YouTube for whatever reason)

      Anonymous Coward, 28 Feb 2019 @ 4:06pm

      Re:

      My best opinion on the topic would be to shut down Facebook globally... blah blah blah ....

      Nah. The best solution would be to just kick people like you off the internet. Problem solved.

    Not.You, 28 Feb 2019 @ 10:29am

    Youtube

    As a parent of a kid just getting to the age where youtube music videos are of interest I had given some thought to the youtube issue at any rate. Youtube could have two versions, one where all content has been 100% vetted, and then the normal version they have now, and as long as they were explicit as to which version you were browsing then at least people would have the option of being in the guaranteed "safe" zone.

    In general though I recognize that the internet is full of morons and trolls and scammers and bigots and assholes of all varieties (as is the world at large) and I parent appropriately. Meaning that mostly up until now it has all been PBSKIDS and Netflix kids without parental supervision. I don't let my kid entirely loose on the internet just yet although those days are not far off.

    I don't use the facepages so I have no suggestions there, but overall I am inclined to suspect that AI can be made better at this than it is now, and if anyone has the resources to develop an AI that can do a pretty decent job of this it would be Google and Facebook. Building appropriate options that recognize that AI will never be 100% effective is necessary too, I think.

    In the end I would rather see too much inappropriate content than too much moderation though if I have a choice between the two. As long as it is clear when I am browsing that I am in a less than fully moderated environment I can deal with it accordingly.

      Anonymous Coward, 28 Feb 2019 @ 10:49am

      Re: Youtube

      Youtube could have two versions, one where all content has been 100% vetted

      There is 100% vetted content out there, and more being planned. It is called Netflix, Amazon Prime, Disney, etc. Just don't expect such vetting to become part of an open publishing platform.

        Not.You, 28 Feb 2019 @ 11:26am

        Re: Re: Youtube

        My post wasn't about what I think should be done, it was about a possible response for youtube to the calls for them to clean up their platform. Your response suggests that youtube should tell people to go to Netflix or Amazon when they complain about the lack of moderation on youtube.

          Anonymous Coward, 28 Feb 2019 @ 11:39am

          Re: Re: Re: Youtube

          I would consider that a reasonable response, as YouTube is built on being an open platform, and that does raise the question as to why should it become a moderated platform, with all that entails.

            Anonymous Coward, 28 Feb 2019 @ 1:07pm

            Re: Re: Re: Re: Youtube

            People who put pressure on platforms to change or modify, censor and/or restrict comments or content should get a real brainstorm, stop inflicting their puritanical hypocrisy on the world, and start their own platform for whatever they like. TAKE A HIKE

        • This comment has been flagged by the community.
          Anonymous Coward, 28 Feb 2019 @ 1:12pm

          Re: Re: Re: Youtube

          American companies should tell them all to fuck off. The Land of the Free should not be subject to fascists and draconian perverts trying to tell everyone what they should do.

            Anonymous Coward, 28 Feb 2019 @ 1:15pm

            Re: Re: Re: Re: Youtube

            Nobody on this planet has the moral high ground to be telling a free people what to watch or who to see or where to do anything as long as it is all lawful.

              Anonymous Coward, 28 Feb 2019 @ 1:23pm

              Re: Re: Re: Re: Re: Youtube

              That is a problem with corporations relinquishing freedom to sustain their bottom lines. That is why they cannot be trusted to ensure sovereignty doesn't take a back seat to greed. They need to be held in check by a government that is governed by sovereign people. Otherwise you get monsters hovering over nations' welfares ready to strike.

      Mike Masnick (profile), 28 Feb 2019 @ 3:10pm

      Re: Youtube

      As a parent of a kid just getting to the age where youtube music videos are of interest I had given some thought to the youtube issue at any rate. Youtube could have two versions, one where all content has been 100% vetted, and then the normal version they have now, and as long as they were explicit as to which version you were browsing then at least people would have the option of being in the guaranteed "safe" zone.

      YouTube does have a YouTube Kids version, which I agree they probably should set up the way you've described. So far, it's more that they use some unknown process to approve channels... but they don't review each video. It does seem like YouTube Kids would be a lot better if it did involve reviewing the videos that get on there.

      While that's an impossible ask for the fully open YouTube, it does seem much more reasonable for a more limited version, such as YouTube kids, to be a more curated version of YouTube that has all the videos reviewed.

      In general though I recognize that the internet is full of morons and trolls and scammers and bigots and assholes of all varieties (as is the world at large) and I parent appropriately. Meaning that mostly up until now it has all been PBSKIDS and Netflix kids without parental supervision. I don't let my kid entirely loose on the internet just yet although those days are not far off.

      Yup. I think the trick here is not to watch over every little thing your kids do, but to train them to have the tools to deal with bad behavior, and to recognize that such bad behavior exists. I've actually been surprised and impressed at the training the local schools provide kids here about internet safety. It is reasonable, thoughtful, and not based on moral panics.

    Anonymous Coward, 28 Feb 2019 @ 11:03am

    I think that the best solution is to give people the tools to control their own experience. The problem with that approach is that there are those people out there who think that if they don't like it, nobody else should be able to see it, and such tools will never satisfy them, because they cannot impose their morals on other people.

    Also, moderated platforms, and the software on which to build a moderated platform, already exist, so any individual or group could build their own platforms and federate with individuals and groups with a similar outlook. Any church group could set up its own social media site and federate with other churches. Don't like the big sites? Club together and start building a network of like-minded sites.

      Thad (profile), 28 Feb 2019 @ 11:25am

      Re:

      I think that the best solution is to give people the tools to control their own experience. The problem with that approach is that there are those people out there who think that if they don't like it, nobody else should be able to see it, and such tools will never satisfy them, because they cannot impose their morals on other people.

      That's not the only problem with it.

      I agree with the stance that platforms should give people the tools to control their own experience -- hell, I'm browsing this site right now with a script I wrote to block some trolls who were adversely affecting my experience.

      But that only gets you so far.

      How do you protect against targeted harassment campaigns? Or true threats?

      There's no simple answer to those questions, because to start with, it's extremely difficult to even agree on a definition of the former, and while the latter may have a legal definition, it varies by jurisdiction. And even assuming the platforms could define those terms in a satisfactory way, there's still the matter of accurately identifying threats and harassment (telling them apart from jokes, or descriptions of threats and harassment, etc.).

        Anonymous Coward, 28 Feb 2019 @ 11:44am

        Re: Re:

        How do you protect against targeted harassment campaigns?

        The person being harassed can disengage by not following or friending those people harassing them, just like people do in real life.

        Or true threats?

        Those along with harassment to the point of stalking are a matter for the police. Expecting the social media sites to deal with them is simply a means of driving the problem underground, and all too often that leads to police involvement after the threats have been realised.

          Thad (profile), 28 Feb 2019 @ 11:55am

          Re: Re: Re:

          The person being harassed can disengage by not following or friending those people harassing them

          I don't use Twitter, but that's not my understanding of how it works. If a thousand people start directing abuse @ your account, you're going to see it.

          just like people do in real life.

          ...that's...a completely baffling statement. People in real life only get harassed by people they friend or follow? What the fuck are you talking about?

          Those along with harassment to the point of stalking are a matter for the police.

          "Hello, police? Somebody named charizard69 on Twitter said he's going to murder my family. Can you help me?"

          Expecting the social media sites to deal with them is simply a means of driving the problem underground, and all too often that leads to police involvement after the threats have been realised.

          But leaving threatening messages up and visible until law enforcement has the time and resources to investigate them doesn't?

            Anonymous Coward, 28 Feb 2019 @ 1:39pm

            Re: Re: Re: Re:

            All a big social media site can do is hide a problem; they cannot solve the problem, so why should they be forced to give people a false sense of security by hiding problems?

              Thad (profile), 28 Feb 2019 @ 1:59pm

              Re: Re: Re: Re: Re:

              If the problem is that somebody's harassing me on Twitter, then Twitter mods fucking-well can solve that problem. A lot more quickly than the police can.

              You're being absurd.

                Anonymous Coward, 28 Feb 2019 @ 2:31pm

                Re: Re: Re: Re: Re: Re:

                If the problem is that somebody's harassing me on Twitter, then Twitter mods fucking-well can solve that problem.

                How, kill their account? That only works if a person wants to use a specific name, otherwise they will soon be back, and with an increased desire to harass you.

                  Anonymous Coward, 28 Feb 2019 @ 2:35pm

                  Re: Re: Re: Re: Re: Re: Re:

                  Just tell Twitter they voted for Trump. They'll do the rest.

                    Thad (profile), 28 Feb 2019 @ 3:15pm

                    Re: Re: Re: Re: Re: Re: Re: Re:

                    Ah yes, Twitter is well-known for banning anyone who voted for Donald Trump. That's why its single best-known user is...Donald Trump.

                  Thad (profile), 28 Feb 2019 @ 3:14pm

                  Re: Re: Re: Re: Re: Re: Re:

                  How, kill their account? That only works if a person wants to use a specific name, otherwise they will soon be back, and with an increased desire to harass you.

                  I mean, if your reasoning is "a sufficiently dedicated harasser can sign up under another account," then a sufficiently dedicated harasser can follow me to other websites if I quit Twitter, too. Again, your reasoning is absurd; you're starting at your conclusion and backfilling justifications for it, with no regard toward whether the justification you're giving now is logically consistent with the one you gave two posts ago.

                    Anonymous Coward, 28 Feb 2019 @ 3:23pm

                    Re: Re: Re: Re: Re: Re: Re: Re:

                    If somebody is prepared to track down where you have moved your social media activity to, you need to involve the police, because there is nothing any social media site can do to keep them away. So I ask again, what is a social media site meant to do to solve your problem?

                      Anonymous Coward, 28 Feb 2019 @ 4:20pm

                      Re: Re: Re: Re: Re: Re: Re: Re: Re:

                      I've had mixed results with reporting to police. Sometimes the harasser gets a knock on their door (at which point they always stop), and sometimes they don't.

                      Thad (profile), 1 Mar 2019 @ 7:56am

                      Re: Re: Re: Re: Re: Re: Re: Re: Re:

                      If somebody is prepared to track down where you have moved your social media activity to, you need to involve the police

                      Okay. What crime do I report?

                      "Hello, officer? Somebody named charizard69 Googled my name, signed up for accounts on every messageboard I use, and every time I post he responds by saying rude things about my mother. There's a law against that, right?"

                      So I ask again, what is a social media site meant to do to solve your problem?

                      That's not "again", that's literally the first time you've asked that question. Though nice try pretending that I'm the guy not answering questions here. Say, have you answered a single one of mine? For example, the one about what you mean by "The person being harassed can disengage by not following or friending those people harassing them, just like people do in real life"? You never explained that one. People are only harassed by people they follow or friend in real life? Huh?

                      What can moderators do to solve problems? They can moderate. Respond to reports of abuse; investigate them; temporarily or permanently ban the associated accounts and IPs if appropriate.

                      Now, that's a simple response that hides how complex the issue actually is. What is abuse? When is it appropriate to ban someone? Those are, as I noted upthread, difficult questions, and, especially on a larger network, it's impossible to answer them to everyone's satisfaction.

                      But your "solution" -- never moderate anything, by anyone, at any time or for any reason -- is juvenile and reductive. And I think you know that, because you keep changing your justifications and moving your goalposts. Your argument is absurd; it's a variation on the old "you can't have 100% success, so you shouldn't even try" routine. Plus a healthy dose of victim-blaming, and a deep misunderstanding of what happens when people report online harassment to the police.

                      You're arguing in bad faith, and you're wasting my time. You can keep yammering if you want, but I'm done here.

        Anonymous Coward, 28 Feb 2019 @ 12:25pm

        Re: Re:

        I'm browsing this site right now with a script I wrote to block some trolls who were adversely affecting my experience.

        Out of technical curiosity, how do you do that? And if I'm asking a dumb question, please excuse me. As someone who in the past dabbled with coding and HTML, and who currently works in IT, this intrigues me.

        I mean, I get how adblockers work, is this similar? A plugin in your browser running a script that searches for the name of said trolls, identifies the div or frame the text is found in and blocks that div or frame from loading in your browser? It just doesn't make sense to me since instead of being served from a third party location, the comments are more or less embedded in this page and thus a part of the page itself. How can a script block that?

          Stephen T. Stone (profile), 28 Feb 2019 @ 1:46pm

          They’re called “userscripts”. This particular script runs browser-side and blocks certain users based on…uh…shit I have no idea because I don’t run it, but it apparently does the trick for Thad.

          Thad (profile), 28 Feb 2019 @ 2:04pm

          Re: Re: Re:

          I mean, I get how adblockers work, is this similar? A plugin in your browser running a script that searches for the name of said trolls, identifies the div or frame the text is found in and blocks that div or frame from loading in your browser?

          More or less, though it's not a plugin itself; it's a userscript that you can use through another plugin like Greasemonkey or Tampermonkey. You can take a look at the source code if you'd like; it's on my website. (Used to link it from every post I wrote, but that seems to have broken at some point during the recent site update, and since nobody but me ever used the damn thing anyway, I wasn't really sweating it.)

          At any rate, it can block comments from specific usernames; I've also added optional whitelist functionality, and the ability to hide replies to hidden posts.

          It just doesn't make sense to me since instead of being served from a third party location, the comments are more or less embedded in this page and thus a part of the page itself. How can a script block that?

          Technically it doesn't really block the comments in question, it just hides them. The page loads; all the content on it loads; then the script runs, examines the DOM, and removes content based on specified criteria.
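
          For the curious, a stripped-down sketch of that approach looks something like this (a TypeScript sketch that would be compiled to plain JavaScript before loading into Greasemonkey/Tampermonkey; the class names, usernames, and match URL are made-up placeholders for illustration, not this site's actual markup or the real script):

          // ==UserScript==
          // @name   Hide selected commenters (illustrative sketch)
          // @match  https://www.techdirt.com/*
          // ==/UserScript==

          // Hypothetical usernames and selectors -- adjust to the real page structure.
          const BLOCKED_USERS: string[] = ["example_troll"];

          function hideBlockedComments(): void {
            // The page and all comments have already loaded; this only hides matches.
            document.querySelectorAll<HTMLElement>(".comment").forEach((comment) => {
              const author = comment.querySelector(".comment-author")?.textContent?.trim() ?? "";
              if (BLOCKED_USERS.includes(author)) {
                comment.style.display = "none"; // hide rather than delete: the content still loaded
              }
            });
          }

          window.addEventListener("DOMContentLoaded", hideBlockedComments);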

            Anonymous Coward, 28 Feb 2019 @ 2:07pm

            Re: Re: Re: Re:

            That's pretty cool. Thank you for taking the time to explain that. I'll definitely be looking into that further now.

    Anonymous Coward, 28 Feb 2019 @ 12:58pm

    It's not hard to understand the toll that viewing the worst of the worst trash humans can come up with (sometimes posted live) takes, if you look at the toll it takes on police and combat veterans. There is a mindshift that takes place. You must fight violence with violence, and the governments know this very well. They multiply force times 100 domestically and in war multiply force times 1000. People are never the same after conflict. If you survive it, it will be hard to remove it from your subconscious mind. Good luck.

    Uriel-238 (profile), 28 Feb 2019 @ 1:44pm

    Overheard on a Facebook hangar bay...

    Moff Jerjerrod: Lord Vader, this is an unexpected pleasure. We are honored by your presence...
    Darth Vader: You may dispense with the pleasantries, Commander. I'm here to put you back on schedule.
    Moff Jerjerrod: I assure you, Lord Vader. My men are working as fast as they can.
    Darth Vader: Perhaps I can find new ways to motivate them
    Moff Jerjerrod: I tell you that this station will be operational as planned.
    Darth Vader: The Emperor does not share your optimistic appraisal of the situation.
    Moff Jerjerrod: But he asks the impossible. I need more men.
    Darth Vader: Then perhaps you can tell him when he arrives.
    Moff Jerjerrod: ...The Emperor's coming here?
    Darth Vader: That is correct, Commander. And, he is most displeased with your apparent lack of progress.
    Moff Jerjerrod: We shall double our efforts.
    Darth Vader: I hope so, Commander, for your sake. The Emperor is not as forgiving as I am.

    Anonymous Coward, 28 Feb 2019 @ 2:23pm

    "The impossibility of cost-efficient automobile safety makes it impossible for us to produce cars."

    Human moderation is quite possible, though it might be too expensive for a business model built on using bots to filter content.

      Anonymous Coward, 28 Feb 2019 @ 2:46pm

      Re:

      Human moderation is quite possible

      Please explain how you plan to moderate 300 hours of video uploaded to Youtube every minute. That's 432,000 hours of video uploaded every day. You would need, at a minimum, 18,000 people watching video 24/7 with no breaks. No bathroom breaks, food breaks, sleeping breaks; doing nothing but watching video. Oh, and definitely not actually taking the time to flag or approve the content, since that takes time during which you wouldn't be watching video.

      Now, say you want to hire enough people to review that content in 8 hour shifts. If I've done my math right, that's 54,000 people that you would need doing nothing but watching video. Again, no breaks, just straight video watching.

      Now add in lunch breaks, smoke breaks, time to actually flag or approve those videos, and the day-to-day administrative tasks every company requires of its employees, such as checking email, clocking time, etc... You've just increased the number of people you would need by likely another 10 - 20 thousand.

      So far we've only been talking about the people needed to actually watch the videos, but now you've got somewhere in the neighborhood of 70,000 employees. Who manages them? Now you need team leads, managers, supervisors, additional IT people to support them, HR people. At this point you're probably looking at far more than 100,000 people.

      But wait, there's more: the amount of video getting uploaded is probably only going to increase for the foreseeable future (yes, at some point it will hit a level-off point, but who knows where that is), so you're going to have to hire EVEN MORE people to watch videos, and even more people to manage and support them.

      And none of this takes into account the fact that humans WILL STILL GET IT WRONG.

      If your definition of "quite possible" is so horrendously expensive and a logistics and management nightmare that it really isn't all that possible because they are still going to get it wrong, then yes, you are "quite correct". And by that I mean you're insane and don't understand what you are talking about because no, it's not possible.
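
      A quick sketch of that arithmetic; the 300-hours-per-minute upload rate is the estimate above, not an official YouTube figure, and it counts only raw watching time:

      // Minimum head counts implied by a 300-hours-per-minute upload rate.
      const hoursUploadedPerDay = 300 * 60 * 24;         // 432,000 hours of video per day
      const watchingNonStop = hoursUploadedPerDay / 24;  // 18,000 people watching 24/7
      const watchingInShifts = hoursUploadedPerDay / 8;  // 54,000 people on 8-hour shifts
      console.log(hoursUploadedPerDay, watchingNonStop, watchingInShifts);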

      too expensive for a business model built on using bots to filter content.

      How does that make any sense? If I have to pay people to do what I could get a bot to do for free, how is using bots more expensive? Ignoring the fact that bots are really bad at content moderation and context.

        Anonymous Coward, 28 Feb 2019 @ 3:06pm

        Re: Re:

        If the bots can't do the job, humans can, just too expensively for the internet company to profit.

        "Auto safety is too expensive!"

          Anonymous Coward, 28 Feb 2019 @ 8:19pm

          Re: Re: Re:

          False equivalence. The auto industry is not a platform for people to communicate and exercise their First Amendment rights to free speech on.

          It's also "too expensive" to have all humans answer customer support lines for major businesses, and bots aren't nearly as good at it as humans, given the amount of complaints about the automated answering systems. Yet companies do that too.

          Your argument is invalid.

            Anonymous Coward, 28 Feb 2019 @ 11:30pm

            Re: Re: Re: Re:

            "Moderating CHILD PORN is too expensive!"

              Uriel-238 (profile), 1 Mar 2019 @ 12:22am

              Child porn

              That's one we're ultimately going to have to concede, given that the filtering software used by Google and YouTube to filter porn (or child porn) is extremely susceptible to adversarial data.

              Child porn already leaks through (though for the moment Bing has the reputation for the go-to image search site for child porn, rather than Google.)

              Fortunately we're in an era in which renders of human-shaped three-dimensional models are fast approaching photo-realism, and legalizing those can create a strong incentive for pornographers to use those instead of exploiting actual children in making their porn.

              Of course, if we don't decriminalize digitally rendered child porn (because child porn is gross and toxic to political careers) then digital kids will continue to be just as illegal as photos of real kids, and the latter will stay way easier to produce.

              Either way, your child porn (or your war gore or terrorist manifesto or bomb-building plan) can be superimposed with a transparent panda-mesh so that Google thinks it's looking at a panda even though human eyes see something completely different. Sure, Google can screen for the panda mesh, but then the pornographers will switch to a giraffe mesh, and there will be new moles to whack each week.

              PaulT (profile), 1 Mar 2019 @ 2:22am

              Re: Re: Re: Re: Re:

              I'm not sure what's more worrying - your obsession with child porn or your inability to understand the basic facts of reality that make the comparison invalid.

              Anonymous Coward, 1 Mar 2019 @ 3:04am

              Re: Re: Re: Re: Re:

              Most child porn is not posted to public sites because that will bring the police down on the poster's head in a hurry. There is some content published which some consider child porn, but which is just parents sharing videos and pictures of kids being kids. The intent is not to arouse people, but some people find the images arousing; why should that be a reason to block content with innocent intent and purpose?

        Anonymous Coward, 28 Feb 2019 @ 3:15pm

        Re: Re:

        Double your estimates at least, as the last figure I can find with a quick search is 500 hours a minute in November 2015. Also, your 54,000-people figure is hours of video a day divided by 8. A better base estimate is hours a day * 7 divided by the weekly hours of an employee, and for a 40-hour week that equates to 75,600 people.

        Now let's call it 600 hours of video a minute, with people working 40-hour weeks, and then, ignoring holiday and sickness cover, you need 151,200 people just to watch every video. Add in holiday and sickness cover, management structures, and administrative staff for the personnel department, plus legal experts to deal with edge cases, and 200 thousand would not be too many. And all that is before adding enough people in the customer service department to deal with user complaints and challenges.
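
        For anyone who wants to check that arithmetic, here is a quick sketch; the upload figures are the estimates in this thread, not official YouTube numbers, and it assumes exactly one reviewer-hour per uploaded video-hour with no breaks, appeals, or re-reviews:

        // Back-of-envelope reviewer headcount from the upload-rate estimates above.
        function reviewersNeeded(hoursUploadedPerMinute: number, workWeekHours: number): number {
          const hoursUploadedPerWeek = hoursUploadedPerMinute * 60 * 24 * 7;
          return hoursUploadedPerWeek / workWeekHours; // one reviewer-hour per video-hour
        }

        console.log(reviewersNeeded(300, 40)); // 75,600 people at 300 hours/minute
        console.log(reviewersNeeded(600, 40)); // 151,200 people at 600 hours/minute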

          Anonymous Coward, 28 Feb 2019 @ 6:15pm

          Re: Re: Re:

          Or they could enforce their indemnification clauses.

          This debate boils down to whether or not moderation is a required element of running a UGC platform.

            Anonymous Coward, 28 Feb 2019 @ 8:07pm

            Re: Re: Re: Re:

            This debate boils down to whether or not moderation is a required element of running a UGC platform.

            It is not. Debate over.

              Anonymous Coward, 28 Feb 2019 @ 11:29pm

              Re: Re: Re: Re: Re:

              Not over to the governments.

                Anonymous Coward, 1 Mar 2019 @ 1:58am

                Re: Re: Re: Re: Re: Re:

                Right, the governments who permit the idiocy of Viacom suing YouTube for content Viacom uploaded.

                You want moderation, do it yourself. And pay for it.

                Anonymous Coward, 1 Mar 2019 @ 6:35am

                Re: Re: Re: Re: Re: Re:

                Governments are not debating this. They are stating this is how they want things to be and, lalalalala, they aren't willing to listen to reason and reality. That's not reality, and they are literally tilting at windmills by continuing to pursue this magical form of moderation that will solve all the world's online and offline problems.

                There is no debate. It's just a bunch of people who are technologically illiterate running around like chickens with their heads cut off doing things just to do things.

            Anonymous Coward, 1 Mar 2019 @ 3:10am

            Re: Re: Re: Re:

            That does not help when the other party cannot pay the indemnification.

    ECA (profile), 28 Feb 2019 @ 2:28pm

    Humans the ultimate assholes...

    Ever wonder why they use Automation for this??
    You are an editor/scanner/... and you see all this BS floating along your computer that you have to evaluate.
    Then you get this GREAT idea...take the Emails and send a strange little notice..
    "WOW, we have noticed what you are watching and doing, wouldnt your BOSS/SPOUSE/WORKER/everyone love to know what you are watching...and what your left hand is doing.."
    "Pay us this amount and we wont send this info to Everyone on your email list, and in your city"

    Yep, I got one of those letters...

      Anonymous Coward, 28 Feb 2019 @ 2:34pm

      Re: Humans the ultimate assholes...

      Some of them use your passwords that they hacked from some online company to make it seem like they have your password.

    Anonymous Coward, 28 Feb 2019 @ 3:30pm

    Thank goodness for Dissenter for providing the option of a sane comment system disconnected from the open moral panic.

    dontwant2getfired (profile), 1 Mar 2019 @ 6:35pm

    Do you know that these secondary content moderators are not given paid holidays? And if they work on federal holidays they are only paid 20% extra, meaning if a person is getting paid $10 an hour, the extra 20% is only $2, so on federal holidays observed by Accenture/Cognizant, a content moderator only gets paid $12 an hour. Do you know that the people who work content moderation jobs have lots of health issues, from sitting down for long stretches and eating the junk food that is provided for free, and are not allowed to express themselves even though the hiring company said 'be your true self'? Do you know that when content moderators initially start training, they are told that if they are not comfortable with the content they can request a change, and now when they do, they are told to either suck it up or resign? Do you know that content moderation jobs go to unskilled workers who do not have a college degree, or who simply have no other way to get a job but this one?
