Content Moderation is Broken. Let Us Count the Ways.

from the it's-not-as-simple-as-you-think dept

Social media platforms regularly engage in "content moderation"—the depublication, downranking, and sometimes outright censorship of information and/or user accounts from social media and other digital platforms, usually based on an alleged violation of a platform's "community standards" policy. In recent years, this practice has become a matter of intense public interest. Not coincidentally, thanks to growing pressure from governments and some segments of the public to restrict various types of speech, it has also become more pervasive and aggressive, as companies struggle to self-regulate in the hope of avoiding legal mandates.

Many of us view content moderation as a given, an integral component of modern social media. But the specific contours of the system were hardly foregone conclusions. In the early days of social media, decisions about what to allow and what not to were often made by small teams or even individuals, and often on the fly. And those decisions continue to shape our social media experience today.

Roz Bowden—who spoke about her experience at UCLA's All Things in Moderation conference in 2017—ran the graveyard shift at MySpace from 2005 to 2008, training content moderators and devising rules as they went along. Last year, Bowden told the BBC:

We had to come up with the rules. Watching porn and asking whether wearing a tiny spaghetti-strap bikini was nudity? Asking how much sex is too much sex for MySpace? Making up the rules as we went along. Should we allow someone to cut someone's head off in a video? No, but what if it is a cartoon? Is it OK for Tom and Jerry to do it?

Similarly, in the early days of Google, then-deputy general counsel Nicole Wong was internally known as "The Decider" as a result of the tough calls she and her team had to make about controversial speech and other expression. In a 2008 New York Times profile of Wong and Google's policy team, Jeffrey Rosen wrote that as a result of Google's market share and moderation model, "Wong and her colleagues arguably have more influence over the contours of online expression than anyone else on the planet."

Built piecemeal over the years by a number of different actors passing through Silicon Valley's revolving doors, content moderation was never meant to operate at the scale of billions of users. The engineers who designed the platforms we use on a daily basis failed to imagine that one day they would be used by activists to spread word of an uprising...or by state actors to call for genocide. And as pressure from lawmakers and the public to restrict various types of speech—from terrorism to fake news—grows, companies are desperately looking for ways to moderate content at scale.

They won't succeed—at least if they care about protecting online expression even half as much as they care about their bottom line.

The Content Moderation System Is Fundamentally Broken. Let Us Count the Ways:

1. Content Moderation Is a Dangerous Job—But We Can't Look to Robots to Do It Instead

As a practice, content moderation relies on people in far-flung (and almost always economically less well-off) locales to cleanse our online spaces of the worst that humanity has to offer so that we don't have to see it. Most major platforms outsource the work to companies abroad, where some workers are reportedly paid as little as $6 a day and others report traumatic working conditions. Over the past few years, researchers such as EFF Pioneer Award winner Sarah T. Roberts have exposed just how harmful a job it can be to workers.

Companies have also tried replacing human moderators with AI, thereby solving at least one problem (the psychological impact that comes from viewing gory images all day), but potentially replacing it with another: an even more secretive process in which false positives may never see the light of day.
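
To see where those hidden false positives come from, here is a minimal, purely illustrative sketch of a fully automated pipeline; the Post structure, scores, and threshold are invented for this example and do not describe any real platform's system:

```python
# Purely illustrative: a toy automated-removal pipeline. The fields, scores,
# and threshold are invented assumptions, not any platform's actual design.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    violation_score: float  # hypothetical classifier confidence, 0.0-1.0

REMOVAL_THRESHOLD = 0.8  # assumed cutoff: above this, content is removed automatically

def moderate(posts):
    for post in posts:
        if post.violation_score >= REMOVAL_THRESHOLD:
            # No human ever looks at this decision. If the classifier is wrong,
            # the mistake stays invisible unless the user notices and appeals.
            print(f"post {post.post_id}: auto-removed (score {post.violation_score:.2f})")
        else:
            print(f"post {post.post_id}: left up (score {post.violation_score:.2f})")

moderate([
    Post(1, "content that genuinely violates the rules", 0.93),
    Post(2, "counterspeech quoting a slur in order to condemn it", 0.85),  # silent false positive
    Post(3, "ordinary conversation", 0.10),
])
```

The specific numbers don't matter; the structure does. Once a threshold replaces human review, the same step that removes content also buries its own mistakes.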

2. Content Moderation Is Inconsistent and Confusing

For starters, let's talk about resources. Companies like Facebook and YouTube expend significant resources on content moderation, employing thousands of workers and utilizing sophisticated automation tools to flag or remove undesirable content. But one thing is abundantly clear: The resources allocated to content moderation aren't distributed evenly. Policing copyright is a top priority, and because automation can detect nipples better than it can recognize hate speech, users often complain that more attention is given to policing women's bodies than to speech that might actually be harmful.

But the system of moderation is also inherently inconsistent. Because it relies largely on community policing—that is, on people reporting other people for real or perceived violations of community standards—some users are bound to be more heavily impacted than others. A person with a public profile and a lot of followers is mathematically more likely to be reported than a less popular user. And when a public figure is removed by one company, it can create a domino effect whereby other companies follow their lead.
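
The arithmetic behind that point is simple. As a back-of-the-envelope sketch (the reporting rate below is an invented assumption, not a measured figure), if each viewer independently reports a post with probability p, the chance of drawing at least one report rises sharply with audience size:

```python
# Illustrative arithmetic only: p is an assumed reporting rate. With n
# independent viewers, P(at least one report) = 1 - (1 - p)^n.

def p_at_least_one_report(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.0001  # assume 1 in 10,000 viewers reports the post
for n in (100, 10_000, 1_000_000):
    print(f"audience {n:>9,}: chance of at least one report ≈ {p_at_least_one_report(p, n):.0%}")

# audience       100: chance of at least one report ≈ 1%
# audience    10,000: chance of at least one report ≈ 63%
# audience 1,000,000: chance of at least one report ≈ 100%
```

Under those assumptions, an account with a million viewers is all but guaranteed to be reported, while a small account rarely is.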

Problematically, companies' community standards also often feature exceptions for public figures: That's why the president of the United States can tweet hateful things with impunity, but an ordinary user can't. While there's some sense to such policies—people should know what their politicians are saying—certain speech obviously carries more weight when spoken by someone in a position of authority.

Finally, when public pressure forces companies to react quickly to new "threats," they tend to overreact. For example, after the passing of FOSTA—a law purportedly designed to stop sex trafficking but which, as a result of sweepingly broad language, has resulted in confusion and overbroad censorship by companies—Facebook implemented a policy on sexual solicitation that was essentially a honeypot for trolls. In responding to ongoing violence in Myanmar, the company created an internal manual that contained elements of misinformation. And it's clear that some actors have greater ability to influence companies than others: A call from Congress or the European Parliament carries a lot more weight in Silicon Valley than one that originates from a country in Africa or Asia. By reacting to the media, governments, or other powerful actors, companies reinforce the power that such groups already have.

3. Content Moderation Decisions Can Cause Real-World Harms to Users as Well as Workers

Companies' attempts to moderate what they deem undesirable content have all too often had a disproportionate effect on already-marginalized groups. Take, for example, the attempt by companies to eradicate homophobic and transphobic speech. While that sounds like a worthy goal, these policies have resulted in LGBTQ users being censored for engaging in counterspeech or for using reclaimed terms like "dyke".

Similarly, Facebook's efforts to remove hate speech have impacted individuals who have tried to use the platform to call out racism by sharing the content of hateful messages they've received. As an article in the Washington Post explained, "Compounding their pain, Facebook will often go from censoring posts to locking users out of their accounts for 24 hours or more, without explanation — a punishment known among activists as ‘Facebook jail.’"

Content moderation can also harm businesses. Small and large businesses alike increasingly rely on social media advertising, but strict content rules disproportionately impact certain types of businesses. Facebook bans ads that it deems "overly suggestive or sexually provocative", a practice that has had a chilling effect on women's health startups, bra companies, a book whose title contains the word "uterus", and even the National Campaign to Prevent Teen and Unplanned Pregnancy.

4. Appeals Are Broken, and Transparency Is Minimal

For many years, users who wished to appeal a moderation decision had no feasible path for doing so...unless of course they had access to someone at a company. As a result, public figures and others with access to digital rights groups or the media were able to get their content reinstated, while others were left in the dark.

In recent years, some companies have made great strides in improving due process: Facebook, for example, expanded its appeals process last year. Still, users of various platforms complain that appeals go unanswered or yield no result, and the introduction of more subtle enforcement mechanisms by some companies has meant that some moderation decisions come with no means of appeal at all.

Last year, we joined several organizations and academics in creating the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of minimum standards that companies should implement to ensure that their users have access to due process and receive notification when their content is restricted, and to provide transparency to the public about what expression is being restricted and how.
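
The Santa Clara Principles themselves don't prescribe a data model, but as a minimal sketch (the field names and categories below are assumptions for illustration), a platform could keep a record like this for each moderation action, which is enough to drive user notice, an appeal, and aggregate transparency reporting:

```python
# A minimal sketch with invented field names; the Santa Clara Principles
# describe standards for notice, appeal, and transparency, not a schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ModerationAction:
    content_id: str
    rule_cited: str              # which community-standards rule was applied
    action: str                  # e.g. "removed", "downranked", "account suspended"
    flag_source: str             # e.g. "user report", "automation", "government request"
    user_notified: bool = False
    appeal_status: str = "none"  # "none", "pending", "upheld", "reversed"

def transparency_report(actions: list[ModerationAction]) -> dict:
    """Aggregate numbers a platform could publish periodically."""
    return {
        "actions_by_rule": dict(Counter(a.rule_cited for a in actions)),
        "actions_by_source": dict(Counter(a.flag_source for a in actions)),
        "appeals_reversed": sum(a.appeal_status == "reversed" for a in actions),
    }
```

A record like this is what makes meaningful notice, appeal tracking, and public reporting possible at all.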

In the current system of content moderation, these are necessary measures that every company must take. But they are just a start.  

No More Magical Thinking

We shouldn't look to Silicon Valley, or anyone else, to be international speech police for practical as much as political reasons. Content moderation is extremely difficult to get right, and at the scale at which some companies are operating, it may be impossible. As with any system of censorship, mistakes are inevitable.  As companies increasingly use artificial intelligence to flag or moderate content—another form of harm reduction, as it protects workers—we're inevitably going to see more errors. And although the ability to appeal is an important measure of harm reduction, it's not an adequate remedy.

Advocates, companies, policymakers, and users have a choice: try to prop up and reinforce a broken system—or remake it. If we choose the latter, which we should, here are some preliminary recommendations:

  • Censorship must be rare and well-justified, particularly by tech giants. At a minimum, that means: (1) before banning a category of speech, policymakers and companies must explain what makes that category so exceptional, and the rules to define its boundaries must be clear and predictable. Any restrictions on speech should be both necessary and proportionate. Emergency takedowns, such as those that followed the recent attack in New Zealand, must be well-defined and reserved for true emergencies. And (2) when content is flagged as violating community standards, absent exigent circumstances companies must notify the user and give them an opportunity to appeal before the content is taken down. If they choose to appeal, the content should stay up until the question is resolved. But (3) smaller platforms dedicated to serving specific communities may want to take a more aggressive approach. That's fine, as long as Internet users have a range of meaningful options with which to engage.
  • Consistency. Companies should align their policies with human rights norms. In a paper published last year, David Kaye—the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression—recommends that companies adopt policies that allow users to "develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law." We agree, and we're joined in that opinion by a growing coalition of civil liberties and human rights organizations.
  • Tools. Not everyone will be happy with every type of content, so users should be provided with more individualized tools to control what they see. For example, rather than banning consensual adult nudity outright, a platform could allow users to turn the option to see it on or off in their settings. Users could also have the option to share their settings with their community to apply to their own feeds (a rough sketch of this idea follows the list).
  • Evidence-based policymaking. Policymakers should tread carefully when operating without facts, and not fall victim to political pressure. For example, while we know that disinformation spreads rapidly on social media, many of the policies created by companies in the wake of pressure appear to have had little effect. Companies should work with researchers and experts to respond more appropriately to issues.
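
Here is the rough sketch referenced in the "Tools" item above: a hypothetical model of per-user (or shared, per-community) content settings. The label names, fields, and functions are invented for illustration and don't describe any existing platform's design.

```python
# A rough sketch of per-user content controls. Labels, fields, and settings
# model are invented assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    labels: set  # e.g. {"adult_nudity"}, applied by the platform or the poster

@dataclass
class UserSettings:
    hidden_labels: set  # labels this user has chosen not to see

    def allows(self, post: Post) -> bool:
        return not (post.labels & self.hidden_labels)

def build_feed(posts, settings: UserSettings):
    """Filter a feed against one user's (or one community's shared) settings."""
    return [p for p in posts if settings.allows(p)]

# One user opts out of consensual adult nudity rather than the platform banning it outright;
# a community can adopt the same settings by copying them.
my_settings = UserSettings(hidden_labels={"adult_nudity"})
community_settings = UserSettings(hidden_labels=set(my_settings.hidden_labels))
```

The design choice is the point: the restriction lives in the user's (or community's) settings rather than in a platform-wide ban.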

Recognizing that something needs to be done is easy. Looking to AI to help do that thing is also easy. Actually doing content moderation well is very, very difficult, and you should be suspicious of any claim to the contrary.

Republished from the EFF's Deeplinks Blog.


Filed Under: ai, algorithms, community standards, content moderation, social media, trust and safety


Reader Comments



  1. Jess Nawgonna Worryboutit, 2 May 2019 @ 10:05am (flagged by the community)

    Quit dithering over whether beheading is allowable. It isn't.

    This piece is almost interesting because admits to "outright censorship" in first sentence.

    Of course then goes on to wring hands "it's SO hard", justify, and excuse.

    I'll omit for the moment political area and my view that "conservatives" are largely the target of "liberal" excuse of hate speech.

    Key problem is that "moderation" is ATTEMPTING TOO MANY EDGE CASES AS IF SOME VALUE WILL BE LOST. That assumption is false, just on volume.

    When that "moderation" is NOT attempting fine distinctions is widely known, will be fewer objectionable attempted. (Proven in practice. Several comments have been made here over the years by alleged moderators to effect that they gleefully used the dictator's meat ax approach.)

    As always in practice "liberals" just aren't so fair and tolerant as they claim.

    WHAT conservative site has this problem? Name one. A glance at say, Infowars, way back found it was not truly not "moderated" nor censored. Green Glenwald in his prior columns (I don't know now) on whatever British paper never had this hand-wringing problem.

    So it's actually liberals / globalists attempting to hide their agenda while justifying and ramping it up, of which this piece is part.


  2. Jess Nawgonna Worryboutit, 2 May 2019 @ 10:06am (flagged by the community)

    Re: Quit dithering over whether beheading is allowable. It isn't

    But fine distinction isn't the attitude here at Techdirt! No, right here, there have been calls by fanboys to just flat remove my comments, and certainly that nothing is lost by the censoring / editorial comment that they mis-term "hiding".

    I've always been for CLEAR RULES AND IMPARTIAL TREATMENT. But in fact certain viewpoints here even at "free speech" Techdirt, no matter how mild-mannered, are discriminated against.


  3. Anonymous Coward, 2 May 2019 @ 10:12am

    I can understand the Think of the Children thing, so don't let your children have unfettered access to the internet.

    The remainder of the complaints seem to revolve around the desire to control what others are doing. Not your business is it? If it is a crime, report it and move on. Having a hissy fit does no one any good.


  4. Anonymous Coward, 2 May 2019 @ 10:17am

    The perfect is the enemy of the better

    As Voltaire used to say. Yes, all these flaws and hazards exist with moderation. Yet without moderation, most comment sites turn into 4chan-like monstrosities. Perfect moderation cannot be done, but better moderation can. It's needed because there are way too many cases where the answer to harmful comments isn't more speech.


  5. Stephen T. Stone (profile), 2 May 2019 @ 10:21am

    Can you write in actual, understandable English sentences? Most of your post is incomprehensible due to a lack of proper sentence structure, and the rest due to your generalized idiocy.


  6. Stephen T. Stone (profile), 2 May 2019 @ 10:23am

    “Viewpoint discrimination” is not illegal. Section 230 does not require it; neither does the First Amendment.


  7. Stephen T. Stone (profile), 2 May 2019 @ 10:28am

    The remainder of the complaints seem to revolve around the desire to control what others are doing. Not your business is it?

    If I own/operate the platform? Yes, it is my business.


  8. Gary (profile), 2 May 2019 @ 10:30am

    Re: Re: Quit Trolling

    I've always been for CLEAR RULES AND IMPARTIAL TREATMENT. But in fact certain viewpoints here even at "free speech" Techdirt, no matter how mild-mannered, are discriminated against.

    No, you have always been about complaining that the jew conspiracy is stopping you from telling us about the saucer people.

    Please reply in plain english:
    1) How does your made up definition of Common Law (aka "Cabbage Law") differ from the accepted standard as defined by Wikipedia and everyone else.
    2) Please show us Your website where you exemplify the ideals you tirelessly claim to champion. If you believed in free speech like you say, you'd have your own forum to host our speech.

    Thank you.


  9. Stephen T. Stone (profile), 2 May 2019 @ 10:30am

    Minor fix:

    Section 230 does not require viewpoint neutrality

    I hate when I miss a minor mistake like that.


  10. Anonymous Coward, 2 May 2019 @ 10:31am

    Content moderation is broken. Let me count the ways -- Fuck the EU and the tyrannical authoritarian horse they ride in on.

    Authoritarian tyrants always go after what the population can and can't say. Protecting the ability to call out an authoritarian tyrant was high priority at the founding of the United States exactly for the reasons we see today.

    Moderate content with respect to those guiding principles. Social media is the town square of today. Content creators are the population airing their opinions and grievances. Putting an extremist in a corner, cut off from the world, gives that extremist one option to communicate and be heard -- lashing out. Hence the reduction in violence observed in the years after the internet first rolled out: suddenly people had an outlet for the cathartic effect of shitposting and airing opinions to get it off their chest. We've curtailed this to appease the authoritarian tyrannical sensibilities of the EU.

    Fuck the EU.

    Moderate with respect to American law.

    The EU can go wall themselves off from the world. Route around them. If they want to oppress the peoples of the world, they can do it on their own time. Slap that over-eager thumb of oppression. It belongs nowhere on the shores of America.


  11. Anonymous Coward, 2 May 2019 @ 10:32am

    Re:

    Today there is more edgy content on South Park and Family Guy than the type of comments held up and demonized in social media platforms.

    It's tyrannical morality policing.

    Get it out of here.


  12. Anonymous Coward, 2 May 2019 @ 10:38am

    I used to work for a website hosting provider with about 3 million customers and helped come up with a number of their content moderation policies. We didn't allow "adult content" so we had to define what counted, in response to lots of "well what about this" questions from our customers. For example, female breasts could be shown but not nipples. Bare buttocks weren't allowed, but a thong was. We also didn't allow firearm sales, but then a customer who sold gun parts asked what he could sell without violating the policy. After much internal discussion, we told him that he could sell anything other than the receiver. We allowed a customer to sell instructions on how to create a device that was illegal (I don't remember what the device actually did - had something to do with EM jamming I think), but we wouldn't allow him to sell kits containing the parts to build the device. Could a company based in Colorado sell marijuana through a website we hosted? Could they sell CBD oil? How about someone selling Kratom? It isn't a controlled substance. We didn't allow weapons - does selling a kitchen knife violate that policy?

    In some cases I wasn't happy with the lines we drew, but in others I felt our distinctions were reasonable. There are fine lines and edge cases all over the place, and they happen all the time.


  13. Mike Masnick (profile), 2 May 2019 @ 10:53am

    Re: Quit dithering over whether beheading is allowable. It isn't

    Key problem is that "moderation" is ATTEMPTING TOO MANY EDGE CASES AS IF SOME VALUE WILL BE LOST. That assumption is false, just on volume.

    I know that you're just trolling, but the above sentence literally makes no sense at all. And, because I'm a glutton for punishment, I'm going to try to engage with you as if you're actually being intellectually honest.

    Thus: can you explain what you mean by the above sentence? What does "attempting too many edge cases as if some value will be lost" even mean? What edge cases? Attempting what? And who's determining "if some value may be lost"? Are you arguing for MORE moderation or less? And how does that compare to the arguments you've made previously arguing against moderation?


  14. Anonymous Coward, 2 May 2019 @ 11:31am

    Re:

    Yes, good point. The comment was meant for those trying to tell ISPs what they can do.


  15. Anonymous Coward, 2 May 2019 @ 11:31am

    Re: Re:

    and platforms


  16. Bamboo Harvester (profile), 2 May 2019 @ 12:03pm

    Re: Re: Quit dithering over whether beheading is allowable. It i

    I read it as the "edge cases" being cases of dubious merit, and the "value lost" being that if enough highly questionable cases are found "true", the meaning of the law when it comes to more "mainstream" cases will be diluted to the point that it's useless, or, even worse, becomes so far-reaching that it's inimical to society.


  17. Gary (profile), 2 May 2019 @ 12:03pm

    Re:

    So other than you hate the EU (don't move there then?), what does that have to do with this article?


  18. Anonymous Coward, 2 May 2019 @ 1:25pm (flagged by the community)

    I bet I can play an entire game of football on this astroturf.


  19. UNTechCLEAN, 2 May 2019 @ 4:50pm

    LIFELOG aka facebook

    In 2004, the Pentagon abandoned project LIFELOG.

    LIFELOG was a programme destined to record every interaction of every human being on the planet, recording what they say, do, think, the pictures they take, who they connect with, relate to, work with, etc...

    The same year, facebook was created.

    The people who wanted to log all of our lives had just modified the appearance of LIFELOG.

    Following facebook, there was linkedin (logging and spying on professionals), then twitter (logging and spying on our ideas), and so on.

    Today facebook is like the gestapo, stasi and any other secret police of any totalitarian regime: not only spying, monitoring and logging every aspect of the lives of those who still use it, but also dictating to them what to think, who to follow, what to like and for whom to vote.

    Worst of all, they have succeeded in making the slaves like their servitude and ironically be oblivious of the evil done to them by their masters.

    There is a way out: it's called disconnect and live the real life...


  20. Gary (profile), 2 May 2019 @ 6:08pm

    Re: LIFELOG aka facebook

    Interesting, since you are a slave to TD and submit to our downvotes.

    Blue Balls is a slave to TD. Remember that. All his comments belong to TD - forever. his Copyright, and IP belong to someone else. That's slavery - if you are a Sod Cit like Blue Balls.


  21. Christenson, 2 May 2019 @ 6:59pm

    More moderation principles....

    Mike has said it before, but centralizing moderation really screws up the works in many ways....
    Because there is too much content, and too many people to moderate for with incompatible ideas about what should be moderated... you can just do the combinatorial explosion... every person is generating content, somewhere, and has their own peculiar idea of what shouldn't be allowed.

    My own personal ideas about what should or should not be allowed flip flop depending on context -- Mr Trump should not be allowed to encourage violence, but Mr Masnick is quite welcome to quote Mr Trump and remind us about it and make him face consequences for it, and you can do that for any bad content you like.

    So what to do????
    Well, seems to me Mr Stone, unpaid, does a decent job of helping moderate Techdirt, for free.... well, not completely free, as he gets some positive social payback for his efforts.

    Seems to me Techdirt works pretty well, too...I think Mr Stone and I and many of you agree that its focus, technology and misbehavior thereabouts, is interesting. It has its limits, of course...some of us are interested in other things, too.

    And Mike Masnick's twitter feed works reasonably well, too.


    Each of these things has in common a relatively small scale, many people (compared to the scale) helping out, and lots of alternatives if for any reason or no reason, the decisions being made don't suit. There's also reasonable transparency. A persistent identity is valued. And there is consistency -- no attempt to "target" the site according to your identity -- troll OOTB/blue/blue balls sees the same site the rest of us do.


    OK, proposal time for how moderation might work on a large scale platform:
    Define a user community as a group of users with a defined common focal point.
    Define a content community as a group of content with a common defined focal point.

    A large scale platform has many communities of both types.

    Some kind of smallness happens to every community...

    Anyone can flag content as objectionable. The value of that objection is ranked (see page rank algorithm) according to many criteria.

    Some members of the community (with some kind of persistent identity) are caretakers who determine whether the flagged content is or is not within the guidelines of the content community where it is flagged and determines disposition according to the community rules. These caretakers are rewarded socially.

    In a system as large as facebook, not all moderators will be anywhere near as fair as Techdirt; power corrupts, and absolute power corrupts absolutely. All counterexamples involve non-corruption being required to support externalities such as reputation. The platform will provide mechanisms to grow alternate moderation, so if our troll blue feels like it, he has a way of indicating his moderation decisions and everyone who cares to can follow them instead of Mike Masnick's.


  22. nasch (profile), 2 May 2019 @ 7:55pm

    Re: LIFELOG aka facebook

    LIFELOG was a programme destined to record every interaction of every human being on the planet

    From what I can see it was designed to target individuals. Even the giant data center in Utah couldn't even begin to store the kind of data needed to track all that for over 7 billion people (ignoring the fact that not everyone is even on the internet, making it an impossible goal anyway).

    The same year, facebook was created. The people who wanted to log all of our lives had just modified the appearance of LIFELOG.

    Are you actually saying Zuckerberg was working for DARPA and created Facebook at their behest?

    Today facebook is like the gestapo, stasi and any other secret police of any totalitarian regime: not only spying, monitoring and logging every aspect of the lives of those who still use it, but also dictating to them what to think, who to follow, what to like and for whom to vote.

    You've never actually seen what Facebook is like have you?


  23. Anonymous Coward, 2 May 2019 @ 10:35pm

    Re: Re: Re:

    Actually, aren't Net Neutrality rules about telling the ISPs what they can (or can't) do? And a lot of people here support them.

    Still, I don't get one thing: we complain about a law that says that terrorism content is forbidden (or porn), but then we don't mind a company doing so because "it's their business".

    The result is the same in both cases: you have been censored.

    And I'll go further. It's not about "set your own platform if you don't like it". Sorry, but that's the same as telling people to change their country if they don't like the laws (or judges censoring speech).

    Effectively, Facebook or Twitter banning speech amounts to a worldwide ban of that speech, because every-fucking-one uses Facebook in many countries.

    We complain about judges' edicts reaching other countries? Twitter's whims reach a lot of countries nowadays. Same for Facebook or any other platform. They censor and it's worldwide, no buts, ands or ifs. Worldwide.

    Even if you had the money and resources to set another Facebook (and no, most people don't), next thing would be convincing people to join that platform, something that is even harder to happen.

    Freedom of speech isn't only about the ability to talk about whatever you want in a desert, but about the possibility of it reaching those who you want to hear about it.

    At that point, it's their prerogative to ignore you, but that option has to exist in the first place.

    Remember when the UN declared the internet a fundamental right? Not sure it rises to the point of being fundamental, but for sure it's an accessory to a lot of other rights (like freedom of speech and freedom of information).

    Sure, you still keep your freedom of speech without the internet, just that instead of being able to reach thousands or millions, you'll reach dozens, at most. Talk about limited in 21st century...

    But hey, you can still use a loudspeaker!


  24. Anonymous Coward, 2 May 2019 @ 10:59pm

    Re: Re: Re: Re:

    Btw, there is another point I'd like to add.

    Engaging in Content Moderation is the stupidest thing to do from the business standpoint.

    Once you start censoring speech you don't like, you show:

    • The ability to check your content.
    • The willingness to moderate it.
    • The capability to do so.

    At that point, any laws excepting you from liability upon notification (like DMCA) aren't valid. Or at least, that's what those suing you will push for.

    If a platform engages in content moderation, there are grounds to make it liable for any kind of illegal or illicit content in it. It can't use the "oh, I didn't know I had copyrighted/terrorist/hate/pick your choice content here".

    And no, no excuses are valid at that point. Now you have to moderate not only the content you don't like, but any content your (or even other) government might deem illegal.

    Copyright maximalists are rubbing their hands over content moderation. Good way of shooting themselves in the foot.


  25. Scary Devil Monastery (profile), 3 May 2019 @ 3:05am

    Re: Re: Re: Quit dithering over whether beheading is allowable.

    "I read it as the "edge cases" being cases of dubious merit, and the "value lost" being that if enough highly questionable cases are found "true", the meaning of the law when it comes to more "mainstream" cases will be diluted to the point that it's useless..."

    You DO realize it's a waste of time analyzing the random word salads Baghdad Bob keeps using in his attempts to win an argument? He still thinks by pouring a sufficiently large assortment of polysyllabic terms he doesn't know the meaning of at a text box in random order he will eventually convince someone he actually has a clue what he's talking about.


  26. Scary Devil Monastery (profile), 3 May 2019 @ 3:12am

    Re: Re: Re: Re:

    "Still, I don't get one thing: we complain about a law that says that terrorism content is forbidden (or porn), but then we don't mind a company doing so because "it's their business". The result is the same in both cases: you have been censored."

    Sort of, but not quite.

    When a platform, no matter how large, moderates according to their own rules that's still a private actor saying "My house, My rules".

    When a government makes a law about what may or may not be said it becomes "My house, their rules".

    There is a problem when there's a platform which monopolizes much of a given market but the solution to that is, and has always been, that there must be legal room for someone else to start and run a platform for the opposite view.

    "Freedom of speech isn't only about the ability to talk about whatever you want in a desert, but about the possibility of it reaching those who you want to hear about it. At that point, it's their prerogative to ignore you, but that option has to exist in the first time."

    This is true...but consider the fact that the one and only reason Facebook runs by its current moderation rules is due to legal compliance. Private corporations are notorious for acting only on law. That, in the end, is where FB's self-censorship comes from.


  27. Bamboo Harvester (profile), 3 May 2019 @ 6:35am

    Re: Re: Re: Re: Quit dithering over whether beheading is allowab

    My bad, shouldn't feed the trolls.

    But even that blind squirrel finds a nut once in a while..


  28. Gerald Robinson (profile), 3 May 2019 @ 9:56am

    Ban content moderation

    It has become obvious to the trivial observer that non-adversarial moderation is inherently unfair. Applying due process simply makes things less fair; anyone who has dealt with or is aware of courts should realize this. All that it does is make things expensive and so slow as to not matter. Consider a takedown that takes 90 days; that's no takedown!
    So long as we have vague terms:
    HATE Speech dfn: Speech which I hate!
    fair content moderation is impossible. Further, given that:
    Defamation dfn: anything that I feel may be harmful to me,
    There is no way to fix the problem except-
    BAN ALL CONTENT MODERATION!


  29. Gary (profile), 3 May 2019 @ 12:34pm

    Re: Ban content moderation

    There is no way to fix the problem except-
    BAN ALL CONTENT MODERATION!

    An interesting proposal. Can you show a site where this actually works?


  30. Gerald Robinson (profile), 3 May 2019 @ 3:09pm

    Re: Re: Ban content moderation

    Unfortunately there are no sights. This is currently against EU & US law!


  31. Gary (profile), 3 May 2019 @ 3:39pm

    Re: Re: Re: Ban content moderation

    Unfortunately there are no sights.<sic> This is currently against EU & US law!

    Section 230 says otherwise. Please explain how US law is stopping you from running an unmoderated website right now. Aside from letting users post kiddie porn, you can let the boards run wild.

    Unmoderated boards are quickly filled with spam, nazi's, porn and off-topic posts. How does this promote conversations? (And before you answer - I'd like to let you know that your PC is crawling with infections, which can be cleaned for the low low price of $199.)


  32. GERALD L ROBINSON, 3 May 2019 @ 4:00pm

    Ever hear of FOSTA? The EU has several different laws which apply, starting with 'The Right to Be Forgotten'!


  33. Stephen T. Stone (profile), 3 May 2019 @ 9:18pm

    aren't Net Neutrality rules about telling the ISPs what they can (or can't) do?

    Yes, and for good reason: Unlike Facebook or Twitter, ISPs control access to the entire Internet.

    Effectively, Facebook or Twitter banning speech amounts to a worldwide ban of that speech, because every-fucking-one uses Facebook in many countries.

    This would be true if Facebook was either the only social interaction network in the world or the only website on which people could speak their mind. It is neither of those.

    Freedom of speech isn't only about the ability to talk about whatever you want in a desert, but about the possibility of it reaching those who you want to hear about it.

    You are not entitled to an audience for your speech. Facebook, Twitter, etc. do not owe you an audience.

    Once you start censoring speech you don't like … any laws excepting you from liability upon notification (like DMCA) aren't valid.

    Please cite a single court decision in which this proposition was both accepted by the court and successfully used to nullify immunity from legal liability.

    If a platform engages in content moderation, there are grounds to make it liable for any kind of illegal or illicit content in it.

    Those grounds are exceptionally narrow. You cannot hold Twitter liable for someone posting child porn on Twitter only because the service deleted Alex Jones’s bullshit.

    It can't use the "oh, I didn't know I had copyrighted/terrorist/hate/pick your choice content here".

    It can, though. Facebook knowing people can post illegal content is not the same as Facebook knowing people have posted it. Moderation is not a guarantee of catching all future instances of illegal content; even the best algorithms can miss what they should have caught and catch what they should have missed.


  34. Gerald Robinson (profile), 4 May 2019 @ 11:40am

    Ban content moderation

    Basically content moderation is required by a bunch of bad laws which are either anti-American-company (the EU "snippet tax") or nasty censorship laws like those of China and Myanmar. These governments have been playing whack-a-mole with covert commentary for years with no great success, but they have silenced most of the opposition. Reasonable (GOOD) content moderation has been proved not only impossible but detrimental.


