from the it's-not-a-grand-plan dept
On February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the questions that will be discussed at this event -- and over the next few weeks we'll be publishing many of those essays, including this one.
The first few years of the 21st century saw the founding of a number of companies whose model of making user-generated content easy to amplify and distribute continues to resonate today. Facebook was founded in 2004, YouTube began in 2005 and Twitter became an overnight sensation in 2006. In their short histories, these companies have been the subject of countless books (and movies and plays) devoted to their rapid rise; their impact on global commerce, politics and culture; and their financial structure and corporate governance. But as Eric Goldman points out in his essay for this conference, surprisingly little has been revealed about how these sites manage and moderate the user-generated content that is the foundation for their success.
Transparency around the mechanics of content moderation is one part of understanding what exactly happens when sites decide to keep up or take down certain types of content under their community standards or terms of service. How does material get flagged? What happens to it once it's reported? How is content reviewed, and who reviews it? What does takedown look like? Who supervises the moderators?
But more important than understanding the intricacies of the system is understanding the history of how it was developed. This gives us not only important context for the mechanics of content moderation, but also a more comprehensive picture of how policy was created in the first place, and thus how best to change it in the future.
At each company, particular leaders were charged with developing the site's content moderation policies. At YouTube (Google), this was Nicole Wong. At Facebook, it was Jud Hoffman and Dave and Charlotte Willner. Though it seems basic now, the development of content moderation policies was not a foregone conclusion. Early on, many new Internet corporations thought of themselves as software companies; they did not think about "the lingering effects of speech as part of what they were doing."
As Jeff Rosen wrote in one of the first accounts of content moderation's history, "the Web might seem like a free-speech panacea: it has given anyone with Internet access the potential to reach a global audience. But though technology enthusiasts often celebrate the raucous explosion of Web speech, there is less focus on how the Internet is actually regulated, and by whom. As more and more speech migrates online, to blogs and social-networking sites and the like, the ultimate power to decide who has an opportunity to be heard, and what we may say, lies increasingly with Internet service providers, search engines and other Internet companies like Google, Yahoo, AOL, Facebook and even eBay."
Wong, Hoffman and the Willners all provide histories of the hard questions about speech that each corporation dealt with. For instance, many problems arose simply because flagged content lacked the context necessary to apply a given rule. This was often the case with online bullying. As Hoffman described, "There is a traditional definition of bullying—a difference in social power between two people, a history of contact—there are elements. But when you get a report of bullying, you just don't know. You have no access to those things. So you have to decide whether you're going to assume the existence of some of those things or assume away the existence of some of those things. Ultimately what we generally decided on was, 'if you tell us that this is about you and you don't like it, and you're a private individual not a public figure, we'll take it down.' Because we can't know whether all these other things happened, and we still have to make those calls. But I'm positive that people were using that function to game the system. . . I just don't know if we made the right call or the wrong call or at what time."
Wong came up against similar problems at Google. In June 2009, a video of a dying Iranian Green Movement protestor, shot in the chest and bleeding from the eyes, was removed from YouTube as overly graphic and then reposted because of its political significance; YouTube's policies and internal guidelines on violence were altered to allow for the exception. Similarly, in 2007, a YouTube video of a man being brutally beaten by four men in a cell was removed for violence, but restored by Wong and her team after journalists contacted Google to explain that the video had been posted by Egyptian human rights activist Wael Abbas to inform the international community of human rights violations by the police in Egypt.
What the stories of Wong and Hoffman reveal is that much of the policy, and the enforcement of that policy, developed in an ad hoc way at each company. Taking down breastfeeding photos was a fine rule, until it wasn't. Removing an historic photo of a young girl running naked in Vietnam following a napalm attack was acceptable for years, until it was a mistake. A rule worked until it didn't.
Much of the frustration directed at Facebook, Twitter, and YouTube rests on a fundamentally flawed premise: that online speech platforms had one seminal moment in their history when they established the set of values that would guide their platforms. Instead, most of these content moderation policies emerged piecemeal, from a series of long, hard deliberations about what rules to put in place. There was no "Constitutional Convention" moment at these companies; decisions were made reactively, in response to signals that reached the companies through media pressure, civil society groups, government, or individual users. Without a signal, these platforms couldn't develop, change or "fix" their policies.
Of course, it's necessary to point out that even when these platforms have been made aware of a problematic content moderation policy, they don't always modify their policies, even when they say they will. That's a huge problem -- especially as these sites become an increasingly essential part of our modern public square. But learning the history of these policies, alongside the systems that enforce them, is a crucial part of advocating effectively for change. At least for now, and for the foreseeable future, online speech is in the hands of private corporations. Understanding how to communicate the right signals amidst the noise will continue to be incredibly useful.
Kate Klonick is a PhD in Law candidate and a Resident Fellow at the Information Society Project at Yale.
Filed Under: content moderation, filtering, history
Companies: facebook, google, twitter, youtube