We Need To Shine A Light On Private Online Censorship

from the transparency-reporting-on-content-moderation dept

On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one.

In the wake of ongoing concerns about online harassment and harmful content, continued terrorist threats, changing hate speech laws, and the ever-growing user bases of major social media platforms, tech companies are under more pressure than ever before with respect to how they treat content on their platforms, and often that pressure comes from different directions. Governments and many users are pushing companies hard to moderate more aggressively, to remove more content and to remove it faster, yet companies also consistently come under fire for taking down too much content or lacking adequate transparency and accountability around their censorship measures. Some on the right, like Steve Bannon and FCC Chairman Ajit Pai, have complained that social media platforms are pushing a liberal agenda via their content moderation efforts, while others on the left are calling for those same platforms to take down more extremist speech. Meanwhile, free expression advocates are deeply concerned that companies' content rules are so broad as to impact legitimate, valuable speech, or that overzealous attempts to enforce those rules are accidentally causing collateral damage to wholly unobjectionable speech.

Meanwhile, there is a lot of confusion about what exactly the companies are doing with respect to content moderation. The few publicly available insights into these processes, mostly from leaked internal documents, reveal bizarrely idiosyncratic rule sets that could benefit from greater transparency and scrutiny, especially to guard against discriminatory impacts on oft-marginalized communities. The question of how to address that need for transparency, however, is difficult. There is a clear need for hard data about specific company practices and policies on content moderation, but what does that look like? What qualitative and quantitative data would be most valuable? What numbers should be reported? And what is the most accessible and meaningful way to report this information?

Part of the answer to these questions can be found by looking to the growing field of transparency reporting by internet companies. The most common kind of transparency report that companies voluntarily publish gives detailed numbers about government demands for information about the companies' users—showing, for example, how many requests were received, from what countries or jurisdictions, what kind of data was requested, and whether the company complied. As reflected in this history of the practice published by our organization, New America's Open Technology Institute (OTI), transparency reporting about government demands for data has exploded over the past few years, so much so that projects like the Transparency Reporting Toolkit by OTI and Harvard's Berkman Klein Center for Internet & Society have emerged to try to define consistent standards and best practices for such reporting. Meanwhile, a decent number of companies have also started publishing reports about the legal demands they receive for the takedown of content, whether copyright-based or otherwise.

However, almost no one is publishing data about what we're talking about here: voluntary takedowns of content by companies based on their own terms of service (TOS). Yet especially now, as private censorship grows more aggressive, the need for transparency increases with it. This need has led to calls from a variety of corners for companies to report on content moderation. For example, a working group of the Freedom Online Coalition, composed of representatives from industry, civil society, academia, and government, called for meaningful transparency about companies' content takedown efforts, complaining that "there is very little transparency" around TOS enforcement mechanisms. The 2015 Ranking Digital Rights Corporate Accountability Index found that every company surveyed received a failing grade with respect to reporting on TOS-based takedowns; companies fared only slightly better in the 2017 Index. Finally, David Kaye, the United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, called for companies to "disclose their policies and actions that implicate freedom of expression." Specifically, he observed that "there are … gaps in corporate disclosure of statistics concerning volume, frequency and types of request for content removals and user data, whether because of State-imposed restrictions or internal policy decisions."

The benefits to companies issuing such transparency reports around their content moderation activities would be significant: For those companies under pressure to "do something" about problematic speech online, this is an opportunity to outline the lengths to which they have gone to do just that; for companies under fire for "not doing enough," a transparency report would help them convey the size and complexity of the problems they are addressing, and explain that there is no magic artificial intelligence wand they can wave to make online extremism and harassment disappear; and finally, public disclosure about content moderation and terms of service practices will go a long way toward building trust with users—a trust that has crumbled in recent years. Putting aside the benefit to companies, though, there is the even more significant need of policymakers and the public. Before we can have an intelligent conversation about hate speech, terrorist propaganda, or other worrisome content online, or formulate fact-based policies about how to address that content, we need hard data about the breadth and depth of those problems, and about the platforms' current efforts to solve them.

While there have been calls for publication of such information, there has been little specificity with respect to what exactly should be published. No doubt this is due, in great part, to the opacity of individual companies' content moderation policies and processes: It is difficult to identify specific data that would be useful without knowing what data is available in the first place. Anecdotes and snippets of information from companies like Automattic and Twitter offer a starting point for considering what information would be most meaningful and valuable. Facebook has said it is entering a new era of transparency for the platform. Twitter has published some data about content removed for violating its TOS, Google has followed suit for some of the content removed from YouTube, and Microsoft has published data on "revenge porn" removals. While each of these examples is a step in the right direction, what we need is a consistent push across the sector for clear and comprehensive reporting on TOS-based takedowns.

Looking to the example of existing reports about legally mandated takedowns, data that shows the scope and volume of content removals, account removals, and other forms of account or content interference/flagging would be a logical starting point. Information about content that has been flagged for removal by a government actor—such as the U.K.'s Counter Terrorism Internet Referral Unit, which was granted "super flagger" status on YouTube, allowing the agency to flag content in bulk—should also be included, to guard against undue government pressure to censor. More granular information, such as the number of takedowns in particular categories of content (whether sexual content, harassment, extremist speech, etc.), or specification of the particular term of service violated by each piece of taken-down content, would provide even more meaningful transparency. This kind of quantitative data (i.e., numbers and percentages) would be valuable on its own, but would be even more helpful if paired with qualitative data that sheds more light on the platforms' opaque content moderation practices and tells users a clear story about how those processes actually work, using compelling anecdotes and examples.
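What might a single entry in such a report contain? As a purely hypothetical sketch, assuming field names and categories of our own invention rather than any company's actual schema, the quantitative side could be structured as simply as this:

```python
# A hypothetical transparency-report record for a single TOS-based takedown.
# All names here are illustrative assumptions, not any platform's real schema.
from dataclasses import dataclass
from enum import Enum
from collections import Counter

class FlagSource(Enum):
    USER_REPORT = "user report"
    AUTOMATED_DETECTION = "automated detection"
    GOVERNMENT_REFERRAL = "government referral"  # e.g., a "super flagger" agency

class Action(Enum):
    CONTENT_REMOVED = "content removed"
    ACCOUNT_SUSPENDED = "account suspended"
    CONTENT_FLAGGED = "content flagged or labeled"

@dataclass
class TakedownRecord:
    category: str        # e.g., "harassment", "extremist speech", "sexual content"
    tos_provision: str   # the specific term of service the content violated
    flag_source: FlagSource
    action: Action
    appealed: bool = False
    reinstated: bool = False

def per_category_counts(records: list[TakedownRecord]) -> Counter:
    """The simplest quantitative aggregate such a report could publish."""
    return Counter(r.category for r in records)

if __name__ == "__main__":
    sample = [
        TakedownRecord("harassment", "TOS sec. 4", FlagSource.USER_REPORT,
                       Action.CONTENT_REMOVED, appealed=True, reinstated=True),
        TakedownRecord("extremist speech", "TOS sec. 7",
                       FlagSource.GOVERNMENT_REFERRAL, Action.ACCOUNT_SUSPENDED),
    ]
    print(per_category_counts(sample))
```

Even a toy structure like this makes plain why recording the flag source matters: it is the field that would surface government "super flagger" referrals of the kind described above.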

As has often happened with existing transparency reports, this data will help hold companies accountable. Few companies will want to demonstrably be the most or least aggressive censor, and anomalous data, such as huge spikes around particular types of content, will be called out and questioned by one stakeholder group or another. It will also help ensure that overreaching government pressure to take down more content is recognized and pushed back on, just as current reporting has helped identify and put pressure on countries making outsized demands for users' information. And most importantly, it will help drive policy proposals that are based on facts and figures rather than on emotional pleas or irrational fears—policies that hopefully will help make the internet a safer space for a range of communities while also better protecting free expression.

Unquestionably, the major platforms have become our biggest online gatekeepers when it comes to what we can and cannot say. Whether we want them to have that power or not, and whether we want them to use more or less of that power in regard to this or that type of speech, are questions we simply cannot answer until we have a complete picture of how they are using that power. Transparency reporting is our first and best tool for gaining that insight.

Kevin Bankston is the Director of the Open Technology Institute at New America. Liz Woolery is Senior Policy Analyst at the Open Technology Institute at New America.


Filed Under: censorship, content moderation, due process, filters, platforms, transparency


Reader Comments




  1. Christenson, 31 Jan 2018 @ 1:37pm

    Et Tu, Techdirt?

    So can I look forward to Techdirt's moderation/transparency report shortly, then?


  2. Anonymous Coward, 31 Jan 2018 @ 2:00pm

    Re: Et Tu, Techdirt?

    The author of this piece raises several good points, but a distributed moderation model (like TD comment flagging or most moderation on StackOverflow/StackExchange) doesn't lend itself so well to the "transparency report" concept -- much of the data isn't kept in a very granular way in these systems, anyway. (For instance, TD probably just keeps a count of flags on each post, although if Mike and co. wish to chime in with more details, they're certainly welcome to do so!)


  3. orbitalinsertion (profile), 31 Jan 2018 @ 2:14pm

    Re: Et Tu, Techdirt?

    Is the entire commissariat supposed to report what it flags and why? There is merely an annoyance flag which some people use. (And some of them tell you right then and there.) If you are one of those clowns who disbelieve this, you won't believe a transparency report, either. That game is fully transparent already.


  4. Anonymous Coward, 31 Jan 2018 @ 2:19pm

    Re: Et Tu, Techdirt?

    Why? You can see which comments the community votes to hide, and even read their content.


  5. Rich Kulawiec, 31 Jan 2018 @ 2:21pm

    Fundamental errors in architecture can't be fixed

    An example: if you start with a 1974 Ford Pinto and try to build a tank, you'll fail. Oh, you can beef up the frame and drop in a big engine and bolt on armor plate and so on, but no matter what you do, no matter how much money and effort you spend, you will never have a tank. And if you try to pretend that you do, and use it as one, then you're going to fail in a big way.

    The mechanisms (and processes) of abuse control need to be accounted for at the whiteboard stage of design. If they're not designed in early on, then retrofitting them later is almost certainly not going to work, as we see all day, every day with Google and Facebook and Twitter and others. They didn't learn from their predecessors' successes and failures; instead, in their arrogance and naivete, they blundered ahead and built enormous operations *that they do not know how to run*.

    Thus all the flailing that we see as they try one thing and then another, none of which works particularly well and some of which have adverse side effects. This is all an attempt to patch the problem in the field and thus avoid admitting that they took a wrong step years ago -- and that it might be unfixable.

    Facebook has publicly admitted that there are 200M fake profiles, which means that the number they know about is higher, and that in turn means that the real number is still higher. Twitter is hilariously lowballing their estimates of bot numbers, as if we should believe that the same people who dropped a couple hundred million fakes on Facebook couldn't do exactly the same thing to Twitter. And so on...to the point where I think it's reasonable to ask if these companies are actually in effective control of their own operations.

    So as you discuss all the points above, please keep in mind that some (but not all) of what's happening is due to incompetence and hubris: they were so busy asking how they could that they never stopped to ask if they should.


  6. Christenson, 31 Jan 2018 @ 2:24pm

    Re: Re: Et Tu, Techdirt?

    Interestingly, I seem to trust Techdirt even without a transparency report. Something to do with consistency...

    Having said that, some basic statistics (maybe along with the weekly "best of the week"?) would be helpful. (How much content *is* moderated out? How often is that more than weeding out my stupid keying mistakes, of which I've made another one today? Robots? Human spammers?)

    There is also the problem of "at scale", and I just don't see how we can possibly get a good idea of what should happen "at scale" if we don't look at smaller sites.


  7. Anonymous Coward, 31 Jan 2018 @ 2:25pm (this comment has been flagged by the community)

    Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    I just repeat from yesterday:

    Nearly all websites EXCEPT Techdirt have WRITTEN RULES on words and attitudes allowed. But Techdirt tries to do it the sneaky way, first with the "hiding" which is falsely claimed to be just "community standard" and not to involve a moderator who makes the decision to hide comments. Then there's the fanboy / dissenter distinction: NOT ONE fanboy has ever had a comment hidden here, ONLY those who dissent, and for NO articulable reason. Then there's the un-admitted blocking of home IP address, which was done to me.

    You cannot even get an answer here as to whether there IS a moderator or not! Works by "magic".

    And add that the "hiding" of comments which are okay under common law goes on as of today, just read a couple pieces back.


  8. Anonymous Coward, 31 Jan 2018 @ 2:32pm

    Re: Fundamental errors in architecture can't be fixed

    The biggest problem those sites have is that software cannot deal with abuse problems in a reliable way. That goes for postings and sign-ups both. Also, most of the social media sites allow users to decide which accounts they will follow, but users would rather chase large numbers of followers and followed than curate their use of the system to that which they are interested in, and which meets their standards of acceptability.

    Too many people, when they find objectionable material, not only want others to protect them from that material, but also to protect others from the same material; hence the pressure for censorship.


  9. Anonymous Coward, 31 Jan 2018 @ 2:32pm

    Re: Fundamental errors in architecture can't be fixed

    You seem to have a benign view of Facebook, Google, and Twitter.

    Those all have same leftist, corporatist, globalist agenda -- in which chaos and "pushing the limits" is used as a tool -- but still for a while have to be sneaky about doing it.

    Again as I've reported here in this little island of corporatism: Google removed advertising income from Infowars and Antiwar.com, and its Youtube has "de-monetized" many conservatives. -- You just don't hear about those because They control the message!


  10. Stephen T. Stone (profile), 31 Jan 2018 @ 2:34pm

    Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    Just for the record, Mr. SovCit, I have had at least a couple of comments flagged in the past. One got both a flagging and a Funny badge!

    And even if there is a moderator, at worst, they get rid of spam comments that any other comments section on any other blog would send to the digital dumpster. I have seen no reason to believe a flesh-and-blood moderator is stopping you, me, or anyone else from saying what they want.

    (Oh, and one more thing: Per usual, “common law” is not a magic phrase that ends discussion and prevents rebuttals, especially if you cannot define what it means and in what context you use it. Try another trick.)


  11. Christenson, 31 Jan 2018 @ 2:34pm

    Underestimating replaceability

    The pinto analogy isn't carried far enough.... I can build a tank from a pinto, but when I'm done and have a good battle tank, I don't think I'll have any parts of the original pinto left! But don't come whining about how that cost 3 times as much as if I'd started from scratch, lol.

    Similarly for the social media platforms...of which Techdirt itself is, in some degree, one small example. The entire architecture under Techdirt, I believe, has been replaced a few times, but keeping the fundamentals...postings we call a blog, a way for the world to respond to that blog, and a way for Techdirt to moderate those responses.


  12. Anonymous Coward, 31 Jan 2018 @ 2:35pm

    Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    While you have the right to speak on this forum, you do not have a corresponding right to be heard, so quit trying to claim the latter right. Anybody who wants to can read the hidden contents, and will do so; while those who trust the community's judgment will ignore them.


  13. Stephen T. Stone (profile), 31 Jan 2018 @ 2:40pm

    Re: Re: Fundamental errors in architecture can't be fixed

    Google removed advertising income from Infowars and Antiwar.com

    Why should Google be forced to host speech from Infowars in the form of advertisements, regardless of how anyone at Google personally feels about that site?

    its Youtube has "de-monetized" many conservatives.

    That sucks for them. They ain’t the only ones who got dinged by the Adpocalypse, though.

    You just don't hear about those because They control the message!

    If anything, we hear about it from the people who got dinged by Google/YouTube moderation because they refuse to shut up about how their getting dinged is some anti-conservative conspiracy funded and run by whatever boogeyman is popular this week.


  14. Anonymous Coward, 31 Jan 2018 @ 2:51pm

    credit where due

    This seems a good opportunity to congratulate Techdirt on what is objectively the best comment system on the net.

    No scripts, no captchas, anon allowed, and you're even good about VPN users. I don't know how you guys do it, and I suspect it's a lot of work, which makes it all the more impressive.

    cheers fellows!


  15. Roger Strong (profile), 31 Jan 2018 @ 2:54pm

    Re: Re: Fundamental errors in architecture can't be fixed

    Those all have same leftist, corporatist, globalist agenda

    As opposed to the more common rightist corporatist, globalist agenda?

    in which chaos and "pushing the limits" is used as a tool

    That would be alt-rightist.


  16. Anonymous Coward, 31 Jan 2018 @ 3:02pm

    Re: Re: Fundamental errors in architecture can't be fixed

    You should totally head to infowars and let them know! Hurry don’t look back. They need you there from here on out! It’s too late for us, save yourself!


  17. Richard (profile), 31 Jan 2018 @ 3:04pm

    Re: Re: Re: Fundamental errors in architecture can't be fixed

    There is a certain irony here.

    Currently the right is claiming that they are being silenced by large corporations whose agenda they dislike. They may well be correct in this observation. BUT - which philosophy is it that says it is OK, even laudable, for corporations to use the free market and grow into de-facto monopolies, and that for the state to interfere would be "liberal/socialist/communist"?

    Of course if the federal state were to nationalise Google/youtube/twitter/facebook - the effect of which would be to force the corporations to follow the first amendment (which is what they seem to want) - then the right would cry COMMUNISM!!! (at least that is what they ought to cry...)


  18. Christenson, 31 Jan 2018 @ 3:05pm

    Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    Hey, I've had a few comments deleted,too...and flagged them myself for deletion due to fairly obvious technical problems.

    The humans (Mike Masnick and helpers) behind Techdirt obviously moderate...where do you suppose weekly "editor's choice" awards come from? How do you suppose those "flag" choices get converted to hidden comments, especially in the presence of ill-behaved visitors who flag at random? Make accidental clicks?

    Just where did you cop such an "attitude", by the way? You don't suppose your brick-ignorance and inability to figure out commonly-unspoken rules might have pissed off the management, who notices that you are using more of their limited time than is available for no benefit to anyone?


  19. Anonymous Coward, 31 Jan 2018 @ 3:08pm

    Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    Only corporate fictions have the right to free speech under common law.


  20. Anonymous Anonymous Coward (profile), 31 Jan 2018 @ 3:10pm

    The Limits of the First

    What I wonder about is at what point does Government pushing companies to 'moderate' their sites become Government censorship?


  21. Thad, 31 Jan 2018 @ 4:04pm

    Re: Re: Re: Fundamental errors in architecture can't be fixed

    As opposed to the more common rightist corporatist, globalist agenda?

    No; it's a dog-whistle. The word he means but isn't using is "Jewish".


  22. JEDIDIAH, 31 Jan 2018 @ 4:09pm

    Re: Re: Fundamental errors in architecture can't be fixed

    No. The biggest problem with sites like Facebook is that they promote the nonsense. This is by design. So a feed that's prone to trolling quickly becomes pointless as the trolls get all the "mod points".

    That's not even getting into the stupid things you can get banned for on FB.


  23. JEDIDIAH, 31 Jan 2018 @ 4:10pm

    Re: Re: Re: Fundamental errors in architecture can't be fixed

    > Why should Google be forced to host speech from Infowars in the form of advertisements, regardless of how anyone at Google personally feels about that site?

    They seek to be unavoidable.


  24. Anonymous Coward, 31 Jan 2018 @ 4:16pm

    Re: Re: Re: Fundamental errors in architecture can't be fixed

    Doesn't Facebook allow its users to decide who to befriend, and therefore see posts from? That is a means of dealing with trolls, unless you are of the opinion that somebody else should protect you from them.


  25. Anonymous Coward, 31 Jan 2018 @ 4:40pm

    Re: Re: Et Tu, Techdirt?

    a distributed moderation model (like TD comment flagging or most moderation on StackOverflow/StackExchange)

    These are only semi-distributed, because ultimately it's one site collecting data and deciding whether to show/hide/delete comments based on it. In a fully distributed model, we'd get the comments from somewhere other than the site posting the story, and we'd decide what to show/hide.


  26. Anonymous Coward, 31 Jan 2018 @ 4:46pm

    Re: Re: Fundamental errors in architecture can't be fixed

    The biggest problem those sites have is that software cannot deal with abuse problems in a reliable way.

    The biggest design problem is that they have to. Why should Techdirt, your local newspaper, or anyone else posting stuff to a website have to be involved in people's discussions of it? That just happened to be the easiest way to do things in the early days of the web, and worked "well enough" to avoid getting replaced.

    Moving this to Facebook or Disqus doesn't solve the problem, because there's still some centralized authority deciding which conversations people can have. A real distributed discussion system could solve it, once we figure out how to do that.


  27. Anonymous Coward, 31 Jan 2018 @ 4:54pm

    Re: Re: Re: Fundamental errors in architecture can't be fixed

    A real distributed discussion system could solve it,

    One exists, it is called Usenet, but conversations are slower because it takes time for postings to spread to all servers, and can be a bit disjointed, because they are seen in different orders on different servers. Sometimes a central system is better for human interactions.


  28. Anonymous Coward, 31 Jan 2018 @ 5:23pm

    Re:

    out_of_the_blue just hates it when due process is enforced.


  29. Anonymous Coward, 31 Jan 2018 @ 6:43pm

    Unquestionably incorrect

    >Unquestionably, the major platforms have become our biggest online gatekeepers when it comes to what we can and cannot say.

    If only "on their platforms" was added to the end of this sentence it would be correct.

    What we need is technology that empowers people to create and control their own platforms for speech, not government regulation.


  30. Anonymous Coward, 31 Jan 2018 @ 8:03pm

    Flat but threaded comments solves this problem

    Assholes still say things, but people can respond. Karma systems like Slashdot's or arbitrary systems like Fark's don't really help anyone. Let people speak, and if that is messy and even violent, that is the price of everywhere being a global forum.

    You remember the Idea of a forum.

    (much as Rome never lived up to anything ,but as an ideal, it is aspirational)


  31. Anonymous Coward, 31 Jan 2018 @ 8:42pm

    Re: Unquestionably incorrect

    What, like any of the hundreds or thousands of programs designed to help you make a website?


  32. Mike Masnick (profile), 31 Jan 2018 @ 9:56pm

    Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    The humans (Mike Masnick and helpers) behind Techdirt obviously moderate...where do you suppose weekly "editor's choice" awards come from? How do you suppose those "flag" choices get converted to hidden comments, especially in the presence of ill-behaved visitors who flag at random? Make accidental clicks?

    We don't have moderators. There are three things that happen, and that's it. First, we have fairly sophisticated (yet still imperfect) spam filters that deal with comments pre-posting (and which we review regularly to let through comments that were incorrectly flagged as spam). Second, there's the voting system.

    The third thing is almost never used, but in the RARE cases when a 100%, obviously total spam comment (totally off topic, pushing a website/product) shows up and we see it, we will delete those, and only those comments. The rest we leave up to the community to handle via the voting system.


  33. Anonymous Coward, 31 Jan 2018 @ 11:04pm

    software

    The biggest problem those sites have is that software cannot deal with abuse problems in a reliable way. That goes for postings and sign-ups both. Also, most of the social media sites allow users to decide which accounts they will follow, but users would rather chase large numbers of followers and followed than curate their use of the system to that which they are interested in, and which meets their standards of acceptability.


  34. Bennot (profile), 31 Jan 2018 @ 11:09pm

    Re: software

    You may check out [this](https://freedomhouse.org/report/freedom-net-2015/freedom-net-2015-privatizing-censorship-eroding-privacy) article; I'm not so sure about "software cannot deal with abuse problems"..


  35. Anonymous Coward, 1 Feb 2018 @ 2:01am

    Re: Re: Fundamental errors in architecture can't be fixed

    Ah. I see. The poor widdle Nazis at Infowars are butthurt.


  36. Anonymous Coward, 1 Feb 2018 @ 2:09am

    Re: Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    There may be room for putting some numbers on the spam handling:
    - How many do you review manually?
    - How many are corrected?
    - Maybe some light on the system's numbers, but that is not as important.
    - Number of the RARE cases each year, or something.

    These things can also improve your understanding of how effective it is and potentially inspire you towards improving it.


  37. Anonymous Coward, 1 Feb 2018 @ 3:31am

    Censorship is a double-edged sword.


  38. Mike Masnick (profile), 1 Feb 2018 @ 5:25am

    Re: Re: Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    That's an interesting idea, and I'll look into doing it.


  39. Wendy Cockcroft, 1 Feb 2018 @ 6:02am

    Re: Re: Re: Et Tu, Techdirt?

    Keying errors don't get moderated, Christenson. Comments are hidden by the likes of me clicking on the red "Report" button at the side of any post I don't like; if five people click it the post gets hidden (if memory serves).

    Spam such as ads for sunglasses, etc., gets caught and binned most of the time or we'd see more of it.

    Sometimes my posts are held for moderation, probably because I tripped a keyword, and never get posted. Okay, fine, that's not the end of the world.


  40. Richard (profile), 1 Feb 2018 @ 6:03am

    Re: Re: Re: Fundamental errors in architecture can't be fixed

    _As opposed to the more common rightist corporatist, globalist agenda?

    in which chaos and "pushing the limits" is used as a tool

    That would be alt-rightist._

    As opposed to the Totalitarian rightist corporatist, globalist agenda?

    which is of course Control alt rightist.

    and the Totalitarian corporatist, globalist agenda that will happily drift into a nuclear war (presumably initially with N Korea)

    which is Control Alt delete-ist


  41. Christenson, 1 Feb 2018 @ 6:57am

    Re: Re: Re: Re: Et Tu, Techdirt?

    Umm, I'm thankful my accidental posts with blank bodies (happens due to autocomplete interactions and habits) get "held for moderation" and largely disappear. I'm thankful there are no ads for sunglasses, and all of that *is* moderation, even if (see Mike Masnick's comments below) it is largely automated and crowdsourced to good people like you.


  42. Anonymous Coward, 1 Feb 2018 @ 6:59am

    Re: The Limits of the First

    The First clearly has limits, but the question of how hard you can push back against sites with very hardcore language and some threatening positioning is always interesting. Ultimately, pressure causes counter-pressure, etc. We may end up with the solution being part of the problem.

    That is also why measuring emotion is becoming such an important area in analysing the internet: I imagine radicalisation may be correlated with certain emotional combinations.


  43. Christenson, 1 Feb 2018 @ 7:11am

    Re: Re: Re: Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    Thanks Mike and helpers.

    I started "Et Tu, Techdirt" with exactly this idea in mind.

    My only further request is that you consider "Moderation" broadly as a system -- to my mind, it includes that automated spam filter and its human reviewers, whatever it is that 'holds comments for moderation', the voting crowd, *and* the humans who sometimes intervene.


  44. Wendy Cockcroft, 1 Feb 2018 @ 7:12am

    Re: Re: Re: Re: Re: Et Tu, Techdirt?

    Indeed. I am disappointed if a comment I've posted gets held and never shows up but that might be due to a glitch; I'm pretty certain there's no mad conspiracy to stop me posting comments on TD.


  45. Mike Masnick (profile), 1 Feb 2018 @ 7:30am

    Re: Re: Re: Re: Et Tu, Techdirt?

    Sometimes my posts are held for moderation, probably because I tripped a keyword, and never get posted.

    I would actually be fairly surprised if those comments are not posted. It is, of course, possible that we miss some false positives in our review, but our process makes that... pretty difficult. I'm fairly confident that we can catch most false positives and get them posted to the site.


  46. Anonymous Coward, 1 Feb 2018 @ 7:30am

    Re: Re: Re: Re: Fundamental errors in architecture can't be fixed

    Usenet doesn't have a distributed moderation system though.


  47. Anonymous Coward, 1 Feb 2018 @ 7:32am

    Re: Re: Re: Re: Re: Fundamental errors in architecture can't be fixed

    Well, excepting the groups for which one specific moderator is defined. I can't filter an arbitrary Usenet group based on my friends' opinions for example.


  48. Mike Masnick (profile), 1 Feb 2018 @ 7:33am

    Re: Re: Re: Re: Re: Et Tu, Techdirt?

    Umm, I'm thankful my accidental posts with blank bodies (happens due to autocomplete interactions and habits) get "held for moderation" and largely disappear.

    Yes, I should note two features of our system: it flags "blank body" or "empty comment" posts as requiring review, and it also flags "duplicate posts" that are done immediately after one another (sometimes comments accidentally get submitted twice with a double click). Those comments we generally won't release from the filter, because it's fairly clear that they were errors, and not intended to be posted.


  49. Anonymous Coward, 1 Feb 2018 @ 7:37am

    Re: Re: Re: Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    I'd like to see numbers regarding moderation delays. Sometimes it's hours before my comments appear and the conversation has already moved on.


  50. Anonymous Coward, 1 Feb 2018 @ 7:40am

    Re: Re: Re: Re: Re: Et Tu, Techdirt?

    Comments held for moderation on a Friday sometimes remain held for days, which is understandable but could seem like "never got posted" from a commenter's point of view (commenting is nearly "done" after a weekend).


  51. Anonymous Coward, 1 Feb 2018 @ 10:08am

    Re: The Limits of the First

    Even without government "pushing", could certain types of moderation/banning be illegal? Many of these companies are in California, where companies cannot arbitrarily limit speech on nominally-private property that's accessible to the public. The California constitution is stronger than the US constitution in this regard. Also see citation 9 "Extending Speech Rights Into Virtual Worlds" on that page.


  52. JMT (profile), 1 Feb 2018 @ 3:53pm

    Re: Re: Well, this is the place to learn about sneaky tricks and unwritten rules -- but only applied to dissenters! VILE AD HOM IS OKAY IF YOU'RE A FANBOY!

    "The humans (Mike Masnick and helpers) behind Techdirt obviously moderate...where do you suppose weekly "editor's choice" awards come from?"

    WTF? That's not moderation. That's not even vaguely related to moderation. The editor's choices are simply comments selected from those with high vote counts.

    "How do you suppose those "flag" choices get converted to hidden comments, especially in the presence of ill-behaved visitors who flag at random? Make accidental clicks?"

    The process of hiding comments that receive a certain number of flags is easily automated. And any flag, accidental or not, can be undone by clicking again.

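A minimal sketch, in Python, of the sort of flag-threshold automation JMT describes: the threshold of five is taken from Wendy's "if five people click it" recollection upthread, and everything here is an assumption about the mechanism, not Techdirt's actual code.

```python
# Guesswork sketch of automated flag-threshold hiding -- not Techdirt's real code.
# The threshold of 5 is Wendy's "if five people click it" guess from upthread.
HIDE_THRESHOLD = 5

class Comment:
    def __init__(self, text: str):
        self.text = text
        self.flaggers: set[str] = set()  # track who flagged, so a re-click can undo it

    def toggle_flag(self, user_id: str) -> None:
        """Clicking the flag again undoes an accidental flag, as JMT notes."""
        if user_id in self.flaggers:
            self.flaggers.discard(user_id)
        else:
            self.flaggers.add(user_id)

    @property
    def hidden(self) -> bool:
        # No human in the loop: hiding is a pure function of the flag count.
        return len(self.flaggers) >= HIDE_THRESHOLD

# Five distinct readers flag a comment and it hides itself automatically.
c = Comment("an unpopular comment")
for reader in ("a", "b", "c", "d", "e"):
    c.toggle_flag(reader)
assert c.hidden
```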

  53. Anonymous Anonymous Coward (profile), 1 Feb 2018 @ 4:56pm

    Re: Re: Re: Re: Re: Re: Et Tu, Techdirt?

    Duplicate posts also happen when submit is clicked once and the next page fails (for whatever reason; it could be my connection rather than your servers), and one clicks submit again (the first click seems to be recorded somehow, and followed through on). I figured this out some time ago, and no longer click submit again.


  54. John Smith, 1 Feb 2018 @ 6:20pm

    no need

    Just don't buy anything from sponsors of websites which censor, or try USENET. No one cared about USENET obviously.


  55. Wendy Cockcroft, 2 Feb 2018 @ 2:22am

    Re: Re: Re: Re: Re: Et Tu, Techdirt?

    Eh, no biggie. Thank you for responding.


  56. Thad, 2 Feb 2018 @ 9:31am

    Re: Re: Re: Well, this is the place. I can't tell one from another; did I find you, or you find me? There was a time before we were born; if someone asks, this where I'll be, where I'll be. Hi yo! we drift in and out. Hi yo! sing into my mouth.

    Jesus Christ, is Blue still having trouble with the concept that if five different people click on the flag icon, it hides his post?

    ...or is he still refusing to acknowledge that there could possibly be five people who don't like him?

    He does seem pretty bad at counting.


  57. christenson, 2 Feb 2018 @ 1:53pm

    Broader definition of moderation

    On the contrary, don't limit moderation to removing "bad" comments by people!
    "Editor's choice" awards certainly do encourage people to try to write what most of us consider "excellent" comments. It's not what we usually think of as moderation, but it is moderation nonetheless.
    And clearly, someone decided that a certain number of flags meant the comment got hidden. That's very much moderation.


