from the not-a-bandage dept
Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written essays about the questions discussed at the event, which we are publishing here. This one is excerpted from Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, forthcoming from Yale University Press, May 2018.
Content moderation is such a complex and laborious undertaking that, all things considered, it's amazing that it works at all, and as well as it does. Moderation is hard. This should be obvious, but it is easily forgotten. It is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. And we are partly to blame for having put platforms in this untenable situation, by asking way too much of them. We sometimes decry the intrusion of platform moderation, and sometimes decry its absence. Users probably should not expect platforms to be hands-off and expect them to solve problems perfectly and expect them to get with the times and expect them to be impartial and automatic.
Even so, as a society we have once again handed over to private companies the power to set and enforce the boundaries of appropriate public speech for us. That is an enormous cultural power, held by a few deeply invested stakeholders, and it is exercised behind closed doors, making it difficult for anyone else to inspect or challenge those decisions. Platforms frequently, and conspicuously, fail to live up to our expectations—in fact, given the enormity of the undertaking, most platforms' own definition of success includes failing users on a regular basis.
The companies that have profited most from our commitment to platforms have done so by selling back to us the promises of the web and participatory culture. But as those promises have begun to sour, and the reality of their impact on public life has become more obvious and more complicated, these companies are now grappling with how best to be stewards of public culture, a responsibility that was not evident to them at the start.
It is time for the discussion about content moderation to shift, away from a focus on the harms users face and the missteps platforms sometimes make in response, to a more expansive examination of the responsibilities of platforms. For more than a decade, social media platforms have presented themselves as mere conduits, obscuring and disavowing the content moderation they do. Their instinct has been to dodge, dissemble, or deny every time it becomes clear that, in fact, they produce specific kinds of public discourse. The tools matter, and our public culture is in important ways a product of their design and oversight. While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies: weaponized and coordinated harassment; misrepresentation and propaganda buoyed by their algorithmically calculated popularity; polarization as a side effect of personalization; bots speaking as humans, humans speaking as bots; public participation emphatically figured as individual self-promotion; the tactical gaming of platforms in order to simulate genuine cultural participation and value. In all of these ways, and others, platforms invoke and amplify particular forms of discourse, and they moderate away others, all in the name of being impartial conduits of open participation. The controversies around content moderation over the last half decade have helped spur this slow recognition that platforms now constitute powerful infrastructure for knowledge, participation, and public expression.
~ ~ ~
All this means that our thinking about platforms must change. It is not just that all platforms moderate, or that they have to moderate, or that they tend to disavow it while doing so. It is that moderation, far from being occasional or ancillary, is in fact an essential, constant, and definitional part of what platforms do. I mean this literally: moderation is the essence of platforms, it is the commodity they offer.
First, moderation is a surprisingly large part of what they do, in a practical, day-to-day sense, and in terms of the time, resources, and number of employees they devote to it. Thousands of people, from software engineers to corporate lawyers to temporary clickworkers scattered across the globe, all work to remove content, suspend users, craft the rules, and respond to complaints. Social media platforms have built a complex apparatus, with innovative workflows and problematic labor conditions, just to manage this—nearly all of it invisible to users. Moreover, moderation shapes how platforms conceive of their users—and not just the ones who break the rules or seek their help. By shifting some of the labor of moderation back to us, through flagging, platforms deputize users as amateur editors and police. From that moment, platform managers must in part think of, address, and manage users as such. This adds another layer to how users are conceived of, along with seeing them as customers, producers, free labor, and commodities. And it would not be this way if moderation were handled differently.
But in an even more fundamental way, content moderation is precisely what platforms offer. Anyone could make a website on which any user could post anything they pleased, without rules or guidelines. Such a website would, in all likelihood, quickly become a cesspool of hate and porn, and then be abandoned. But it would not be difficult to build, requiring little in the way of skill or financial backing. To produce and sustain an appealing platform requires moderation of some form. Content moderation is an elemental part of what makes social media platforms different, what distinguishes them from the open web. It is hiding inside every promise social media platforms make to their users, from the earliest invitations to "join a thriving community" or "broadcast yourself," to Mark Zuckerberg's promise to make Facebook "the social infrastructure to give people the power to build a global community that works for all of us."
Content moderation is part of how platforms shape user participation into a deliverable experience. Platforms moderate (removal, filtering, suspension), they recommend (news feeds, trending lists, personalized suggestions), and they curate (featured content, front page offerings). Platforms use these three levers together to actively and dynamically tune the participation of users in order to produce the "right" feed for each user, the "right" social exchanges, the "right" kind of community. ("Right" here may mean ethical, legal, and healthy; but it also means whatever will promote engagement, increase ad revenue, and facilitate data collection.)
Too often, social media platforms discuss content moderation as a problem to be solved, and solved privately and reactively. In this "customer service" mindset, platform managers understand their responsibility primarily as protecting users from the offense or harm they are experiencing. But now platforms find they must also answer to users who find themselves implicated in and troubled by a system that facilitates the reprehensible—even if they never see it. Whether I ever saw, clicked on, or "liked" a fake news item posted by Russian operatives, I am still worried that others have; I am troubled by the very fact of it and concerned for the sanctity of the political process as a result. Protecting users is no longer enough: the offense and harm in question are not just to individuals, but to the public itself, and to the institutions on which it depends. This, according to John Dewey, is the very nature of a public: "The public consists of all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for." What makes something of concern to the public is the potential need for its inhibition.
So, despite the safe harbor provided by U.S. law and the indemnity enshrined in their terms of service contracts as private actors, social media platforms now inhabit a new position of responsibility—not only to individual users, but to the public they powerfully affect. When an intermediary grows this large, this entwined with the institutions of public discourse, this crucial, it has an implicit contract with the public that, whether platform management likes it or not, may be quite different from the contract it required users to click through. The primary and secondary effects these platforms have on essential aspects of public life, as they become apparent, now lie at their doorstep.
~ ~ ~
If content moderation is the commodity, if it is the essence of what platforms do, then it makes no sense for us to treat it as a bandage to be applied or a mess to be swept up. Rethinking content moderation might begin with this recognition: that content moderation is part of how platforms tune the public discourse they purport to host. Platforms could be held responsible, at least partially so, for how they tend to that public discourse, and to what ends. The easy version of such an obligation would be to require platforms to moderate more, or more quickly, or more aggressively, or more thoughtfully, or to some accepted minimum standard. But I believe the answer is something more. Their implicit contract with the public requires that platforms share this responsibility with the public—not just the work of moderating, but the judgment as well. Social media platforms must be custodians, not in the sense of quietly sweeping up the mess, but in the sense of being responsible guardians of their own collective and public care.
Tarleton Gillespie is a Principal Researcher at Microsoft Research and an Adjunct Associate Professor in the Department of Communication at Cornell University.
Filed Under: content moderation, filtering, internet, moderation, platforms