It's Time to Talk About Internet Companies' Content Moderation Operations
from the transparency dept
As we've discussed previously, on February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the topics on the agenda -- and over the next few weeks we'll be publishing those essays. This first one comes from Professor Eric Goldman, who organized the conference, explaining the rationale behind the event and this series of essays.
Many user-generated content (UGC) services aspire to build scalable businesses where usage and revenues grow without increasing headcount. Even with advances in automated filtering and artificial intelligence, this goal is not realistic. Large UGC databases require substantial human intervention to moderate anti-social and otherwise unwanted content and activities. Despite policymakers' often-misguided assumptions, problematic content usually does not come with flashing neon signs saying "FILTER ME!" Instead, humans must find and remove that content -- especially in borderline cases, where machines can't make sufficiently nuanced judgments.
At the largest UGC services, the number of people working on content moderation is eye-popping. By 2018, YouTube will have 10,000 people on its "trust & safety teams." Facebook's "safety and security team" will grow to 20,000 people in 2018.
Who are these people? What exactly do they do? How are they trained? Who sets the policies about what content the service considers acceptable?
We have surprisingly few answers to these questions. Occasionally, companies have discussed these topics in closed-door events, but very little of this information has been made public.
This silence is unfortunate. A UGC service's decision to publish or remove content can have substantial implications for individuals and the community, yet we lack the information to understand how those decisions are made and by whom. Furthermore, the silence has inhibited the development of industry-wide "best practices." UGC services can learn a lot from each other—if they start sharing information publicly.
On Friday, a conference called "Content Moderation and Removal at Scale" will take place at Santa Clara University. (The conference is sold out, but we will post recordings of the proceedings, and we hope to make a livestream available.) Ten UGC services will present "facts and figures" about their content moderation operations, and five panels will discuss cutting-edge content moderation issues. For some services, this conference will be the first time they've publicly revealed details about their content moderation operations. Ideally, the conference will end the industry's norm of silence.
In anticipation of the conference, we assembled ten essays from conference speakers discussing various aspects of content moderation. These essays provide a sample of the conversation we anticipate at the conference. Expect to hear a lot more about content moderation operational issues in the coming months and years.
Eric Goldman is a Professor of Law, and Co-Director of the High Tech Law Institute, at Santa Clara University School of Law. He has researched and taught Internet Law for over 20 years, and he blogs on the topic at the Technology & Marketing Law Blog.
Filed Under: companies, content moderation, filtering, intermediary liability, internet platforms, moderation