Moderation Is The Commodity

from the not-a-bandage dept

Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written essays about the questions discussed there, which we are publishing here. This one is excerpted from Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, forthcoming from Yale University Press, May 2018.

Content moderation is such a complex and laborious undertaking that, all things considered, it's amazing that it works at all, and as well as it does. Moderation is hard. This should be obvious, but it is easily forgotten. It is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. And we are partly to blame for having put platforms in this impossible situation, by asking way too much of them. We sometimes decry the intrusion of platform moderation, and sometimes decry its absence. Users probably should not expect platforms to be hands-off and expect them to solve problems perfectly and expect them to get with the times and expect them to be impartial and automatic.

Even so, as a society we have once again handed over to private companies the power to set and enforce the boundaries of appropriate public speech for us. That is an enormous cultural power, held by a few deeply invested stakeholders, and it is exercised behind closed doors, making it difficult for anyone else to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations—in fact, given the enormity of the undertaking, most platforms' own definition of success includes failing users on a regular basis.

The companies that have profited most from our commitment to platforms have done so by selling back to us the promises of the web and participatory culture. But as those promises have begun to sour, and the reality of their impact on public life has become more obvious and more complicated, these companies are now grappling with how best to be stewards of public culture, a responsibility that was not evident to them at the start.

It is time for the discussion about content moderation to shift, away from a focus on the harms users face and the missteps platforms sometimes make in response, to a more expansive examination of the responsibilities of platforms. For more than a decade, social media platforms have presented themselves as mere conduits, obscuring and disavowing the content moderation they do. Their instinct has been to dodge, dissemble, or deny every time it becomes clear that, in fact, they produce specific kinds of public discourse. The tools matter, and our public culture is in important ways a product of their design and oversight. While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies: weaponized and coordinated harassment; misrepresentation and propaganda buoyed by their algorithmically calculated popularity; polarization as a side effect of personalization; bots speaking as humans, humans speaking as bots; public participation emphatically figured as individual self-promotion; the tactical gaming of platforms in order to simulate genuine cultural participation and value. In all of these ways, and others, platforms invoke and amplify particular forms of discourse, and they moderate away others, all in the name of being impartial conduits of open participation. The controversies around content moderation over the last half decade have helped spur this slow recognition: that platforms now constitute powerful infrastructure for knowledge, participation, and public expression.

~ ~ ~

All this means that our thinking about platforms must change. It is not just that all platforms moderate, or that they have to moderate, or that they tend to disavow it while doing so. It is that moderation, far from being occasional or ancillary, is in fact an essential, constant, and definitional part of what platforms do. I mean this literally: moderation is the essence of platforms; it is the commodity they offer.

First, moderation is a surprisingly large part of what they do, in a practical, day-to-day sense, and in terms of the time, resources, and number of employees they devote to it. Thousands of people, from software engineers to corporate lawyers to temporary clickworkers scattered across the globe, all work to remove content, suspend users, craft the rules, and respond to complaints. Social media platforms have built a complex apparatus, with innovative workflows and problematic labor conditions, just to manage this—nearly all of it invisible to users. Moreover, moderation shapes how platforms conceive of their users—and not just the ones who break the rules or seek their help. By shifting some of the labor of moderation back to us, through flagging, platforms deputize users as amateur editors and police. From that moment, platform managers must in part think of, address, and manage users as such. This adds another layer to how users are conceived of, along with seeing them as customers, producers, free labor, and commodity. And it would not be this way if moderation were handled differently.

But in an even more fundamental way, content moderation is precisely what platforms offer. Anyone could make a website on which any user could post anything he pleased, without rules or guidelines. Such a website would, in all likelihood, quickly become a cesspool of hate and porn, and then be abandoned. But it would not be difficult to build, requiring little in the way of skill or financial backing. To produce and sustain an appealing platform requires moderation of some form. Content moderation is an elemental part of what makes social media platforms different, what distinguishes them from the open web. It is hiding inside every promise social media platforms make to their users, from the earliest invitations to "join a thriving community" or "broadcast yourself," to Mark Zuckerberg's promise to make Facebook "the social infrastructure to give people the power to build a global community that works for all of us."

Content moderation is part of how platforms shape user participation into a deliverable experience. Platforms moderate (removal, filtering, suspension), they recommend (news feeds, trending lists, personalized suggestions), and they curate (featured content, front page offerings). Platforms use these three levers together to actively and dynamically tune the participation of users in order to produce the "right" feed for each user, the "right" social exchanges, the "right" kind of community. ("Right" here may mean ethical, legal, and healthy; but it also means whatever will promote engagement, increase ad revenue, and facilitate data collection.)

Too often, social media platforms discuss content moderation as a problem to be solved, and solved privately and reactively. In this "customer service" mindset, platform managers understand their responsibility primarily as protecting users from the offense or harm they are experiencing. But platforms now find that they must also answer to users who are implicated in and troubled by a system that facilitates the reprehensible—even if they never see it. Whether I ever saw, clicked on, or "liked" a fake news item posted by Russian operatives, I am still worried that others have; I am troubled by the very fact of it and concerned for the sanctity of the political process as a result. Protecting users is no longer enough: the offense and harm in question are not just to individuals, but to the public itself, and to the institutions on which it depends. This, according to John Dewey, is the very nature of a public: "The public consists of all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for." What makes something of concern to the public is the potential need for its inhibition.

So, despite the safe harbor provided by U.S. law and the indemnity that, as private actors, they enshrine in their terms of service contracts, social media platforms now inhabit a new position of responsibility—not only to individual users, but to the public they powerfully affect. When an intermediary grows this large, this entwined with the institutions of public discourse, this crucial, it has an implicit contract with the public that, whether platform management likes it or not, may be quite different from the contract it required users to click through. The primary and secondary effects these platforms have on essential aspects of public life, as those effects become apparent, now lie at the platforms' doorstep.

~ ~ ~

If content moderation is the commodity, if it is the essence of what platforms do, then it makes no sense for us to treat it as a bandage to be applied or a mess to be swept up. Rethinking content moderation might begin with this recognition: that content moderation is part of how platforms tune the public discourse they purport to host. Platforms could be held responsible, at least in part, for how they tend to that public discourse, and to what ends. The easy version of such an obligation would be to require platforms to moderate more, or more quickly, or more aggressively, or more thoughtfully, or to some accepted minimum standard. But I believe the answer is something more. Their implicit contract with the public requires that platforms share this responsibility with the public—not just the work of moderating, but the judgment as well. Social media platforms must be custodians, not in the sense of quietly sweeping up the mess, but in the sense of being responsible guardians of their own collective and public care.

Tarleton Gillespie is a Principal Researcher at Microsoft Research and an Adjunct Associate Professor in the Department of Communication at Cornell University.



Filed Under: content moderation, filtering, internet, moderation, platforms


Reader Comments

  1. This comment has been flagged by the community.
    Anonymous Coward, 6 Feb 2018 @ 12:09pm

    Cannot even get answer here at Techdirt whether there IS a Moderator!

    " It is not just that all platforms moderate, or that they have to moderate, or that they tend to disavow it while doing so."

    without rules or guidelines. Such a website would, in all likelihood, quickly become a cesspool of hate and porn, and then be abandoned.

    Gee, this sounds familiar.


  2. This comment has been flagged by the community.
    Anonymous Coward, 6 Feb 2018 @ 12:10pm

    Even common law comes up! I don't need to write nothing in this!

    Their implicit contract with the public requires that platforms share this responsibility with the public—not just the work of moderating, but the judgment as well.


  3. Anonymous Coward, 6 Feb 2018 @ 1:04pm

    Re: Even common law comes up! I don't need to write nothing in this!

    It is the public that downvotes the trolls. That sounds like "sharing moderation" with the staff to me.


  4. Anonymous Coward, 6 Feb 2018 @ 1:21pm

    Re: Cannot even get answer here at Techdirt whether there IS a Moderator!


  5. Anonymous Coward, 6 Feb 2018 @ 2:35pm

    Re: Even common law comes up! I don't need to write nothing in this!

    The public has an implicit contract to point and laugh and your complete and total lack of understanding of basic jurisprudence, civics and civility.


  6. Anonymous Coward, 6 Feb 2018 @ 2:37pm

    Re: Re: Even common law comes up! I don't need to write nothing in this!

    at*


  7. bhull242 (profile), 6 Feb 2018 @ 3:45pm

    I disagree that anyone could provide a platform for everyone to post whatever content they want, moderated or not. You need many large servers just to be able to handle all that data.

    I also think that it’s not necessarily the case that no moderation => cesspool of hate and porn. There’s more to it than that.

    Finally, it’s hard to tell whether they’re saying that social media platforms should or shouldn’t be held liable/responsible for content that slips through their moderation, but I’ve never really felt they should. They’re not janitors; they’re more like delivery men or a loudspeaker manufacturer. (Plus, I personally think many people expect too much from janitors as it is.)

    I’m all for reasonable, fair moderation, but I don’t think that it’s as straightforward as this article implies.


  8. Christenson, 6 Feb 2018 @ 4:31pm

    Re:

    Let me disagree here and agree with the main thrust of the article: Moderation, and to some extent, editorial selection (which with Techdirt tips might just be almost the same thing) *IS* the product. And without *some* system of moderation, I'm sorry, there isn't time in the day for me to read *all* of the comments, even if they *are* good, so moderation happens.

    And let me disagree with your reading, too: Good moderation, for your favorite value of "good", itself a complicated question, is full of complicated decisions that may or may not scale -- who in the crowd do you trust? where does criticism like I write here end and harassment begin? What is abuse?


  9. Anonymous Coward, 6 Feb 2018 @ 5:38pm

    Re: Even common law comes up! I don't need to write nothing in this!

    You don't need to write nothing... and yet you do.

    Flee, like you insist everyone does. Physician heal thyself!


  10. This comment has been flagged by the community.
    Anonymous Coward, 6 Feb 2018 @ 10:11pm

    Re: Re: Cannot even get answer here at Techdirt whether there IS a Moderator!

    Oh, I see the Techdirt system now! Instead of a "Guidelines" page easily found, we're all supposed to READ every comment in every topic! How are new users to know this?

    "The rest we leave up to the community to handle via the voting system."

    But what I ask it how does that "system" work! It's no answer to say "system"!

    IS THERE SOME PERSON WITH ADMINISTRATOR ACTION WHO OKAYS THE HIDING? That's SAME as "Moderator", then.

    What guidelines does "the community" go by? Make it up as go along?

    Who is this "community"? Where do I go to complain to them, then?

    Is there ANY appeal from this "system", or is it Soviet style: The People Have Spoken?

    To have any input means allowing Techdirt / Google to run javascript, so THERE'S A PRICE TO PAY.

    To EVEN SEE the hidden comments means allowing Techdirt / Google to run javascript, so THERE'S A PRICE TO PAY.

    How many clicks required out of how many readers?

    How do readers who do NOT click have any effect on the system so they don't have to waste time see the "hidden" comments?

    This "system" may be only one fanboy, then. -- And surely an administrator, because I'm again getting browser sessions poisoned after making one comment. -- In my theory, the random delays mentioned show that an "administrator" hasn't yet taken action.


  11. Anonymous Coward, 6 Feb 2018 @ 10:49pm

    Re: Re: Re: Cannot even get answer here at Techdirt whether there IS a Moderator!

    Most online communities generally believe that if the same poster spams the same comment and topic over and over, it merits reporting.

    That you refuse to understand this is hilarious.

    There being a price to pay is something you've wanted. After all, you can't compete with free, so potentially getting something for nothing is bad bad bad by RIAA standards.


  12. Stephen T. Stone (profile), 6 Feb 2018 @ 10:55pm

    Re: Re: Re: Cannot even get answer here at Techdirt whether there IS a Moderator!

    Well, in regards to the voting system, this is what I gather from Mike’s post:

    1. The administrators can see the raw numbers on voted/reported posts, but they have no direct ability to fudge those numbers or manually mark/flag a post. (Side note: This is how the admins determine the winners for the weekly “Most Insightful and Funny Comments” posts.)

    2. While the administrators can fiddle with the threshold for a marker/flagging, they most likely leave it where it is right now.

    3. The administrators only step in for manual comment moderation when a comment requires deletion or a legitimate post gets caught in the spamfilter.

    Nothing to my knowledge, nor any of your complaints, has proven these statements to be untrue.

    As far as community standards go, the community values intelligent discussion and a good joke; often ignores posts that offer nothing worthwhile but do not actively offend; and dislikes spam, trolls, and comments that aim to offend. If a comment is flagged, the community considers that comment unworthy of its attention (despite a few stalwart trollhunters [coughcough] digging into those comments anyway).

    I doubt you can appeal a flagging, given that the community itself made the decision to flag the comment. Never hurts to ask why your posts keep getting flagged, though. (Hint: It might have to do, at least in part, with your antagonistic attitude toward Techdirt writers and commenters. Just a hunch.)

    Readers who do not click have no effect on the system. They reap the benefits of the community’s ability to separate the wheat from the chaff, so to speak. Only those readers who click the vote/flag buttons have any effect. Feel free to appeal to their better nature—and try not to insult them while you do.

    Oh, and one more thing: If your browser or your connection or some other tool you use makes posting here harder or gets you caught in the spamfilter, the problem is most likely on your end. Don’t wanna deal with that problem? Door’s to your left.


  13. Mike Masnick (profile), 6 Feb 2018 @ 11:22pm

    Re: Re: Re: Re: Cannot even get answer here at Techdirt whether there IS a Moderator!

    Your assumptions are all accurate. And in answer to your second question, we have not changed the thresholds in many, many years (though we do occasionally review to make sure we're still comfortable that the thresholds are at the right levels). We launched with them at one setting, and adjusted them slightly after a few months based on what we were seeing, and have not seen a need to adjust them again as they appear to work pretty well. Trollish/spamish comments get hidden. Good comments get noticed and highlighted -- all (contrary to our trollish friend above) without any interaction by staff/administrators.

    The system appears to work quite well, and unlike most platforms, we're able to have it work without having to delete comments or have people "approving" every comment that goes on the site. We're much more permissive than most sites (which is why I find OOTB's whining so hilarious) and yet we still have pretty good comments -- OOTB, remaining an exception, though his trolling sometimes leads to insightful and interesting response from folks like yourself. All in all, it's... a pretty good system.

    As for "appeals" -- anyone who has voted can remove their vote. So the appeals process is to ask why your comment was voted down and if you can convince people in the comments to change their vote, you win your appeal. It has happened, though rarely.


  14. Thad, 7 Feb 2018 @ 8:49am

    Re: Re: Re: Re: Cannot even get answer here at Techdirt whether there IS a Moderator!

    Oh, and one more thing: If your browser or your connection or some other tool you use makes posting here harder or gets you caught in the spamfilter, the problem is most likely on your end. Don’t wanna deal with that problem? Door’s to your left.

    One of many ironies here is, if he'd asked nicely instead of being a raging asshole, I bet the admins would have helped him. It's not like they're anti-Tor.


  15. Wendy Cockcroft, 8 Feb 2018 @ 7:21am

    Re: Re:

    I'll bite.

    Criticism is when you dissent; harassment and abuse is when you actively chase people from site to site posting rude comments in which you attempt to encourage others to have a negative opinion of your target.

    One of the resident trolls made rude, negative, and downright defamatory statements about me in various comments here on TD. Abuse? Yes. Harassment? No, he kept it here and didn't follow me to other sites.


  16. fairuse (profile), 8 Feb 2018 @ 7:46pm

    Reckon So/

    The first forum I was on would be called "sketchy" now but it demanded members to be civil and it worked. Built a couple of "destinations" on now extinct platforms. At no time was an unknown user posting. You know forums, run like bitchy little girls[1].

    Now all this Social Media Platforms stuff. _tube and _FBk are digital sides of buildings with easy to pick screen doors in front of the good stuff. Comments are the village green (Twitter for sure). Anyone can up/down what I am typing, even flag for human inspection.

    When the platform API allows evil shit into the database, that's when the platform has a duty to kill it. There is one little problem -- Media Companies -- users' content is evil to them. The amount of work keeping them happy means I rarely post on _tube. It gets tiresome because any video that has a few seconds of Fair Use video for reasons gets slapped. Time is all users of these platforms have for themselves; we sold everything else.

    If I'm being an ass on Twitter cussing something, people who I converse with can say time out you are kicking walls down. Note: in Twitter politics or "my turn to call out (xyz person)" is pissing on timeline much of the time. College cafeteria in commuter block is Twitter.

    I don't use Facebook except to check in on nieces and racist friends from youth (I usually say "Been to Saigon lately?" and exit).

    We got what we wished for and it's shiny. Peek under a peeling bit of chrome and the critter there is eating you in real time.

    [1] See "Burn Notice" series on TV


