Facebook And Google Finally Take First Steps On Road To Transparency About Content Moderation
from the but-more-work-is-needed dept
As internet platforms are aggressively expanding their “moderation” of problematic content in response to increased pressure from policymakers and the public, how can we best hold them accountable and make sure that these private censorship regimes are fair, proportionate, accurate and unbiased?
As we wrote in our last piece for Techdirt at the beginning of the year, right before the first Content Moderation and Removal at Scale Conference in Santa Clara, there is a dire need for meaningful transparency and accountability around content moderation efforts in order to ensure that the new rulers of our virtual public squares–practically governments in their own right, with billions of citizens–are using their power to moderate speech responsibly. This need has only grown as the pressure on Facebookistan and Googledom to deal with the extremists, white supremacists, and fake news operations on their platforms has also grown, and as questions about whether they are abusing their power by not taking down enough content–or by taking down too much–have proliferated.
This trend was most evident in the recent Congressional hearings prompted by the Cambridge Analytica scandal, where some lawmakers rebuked Facebook CEO Mark Zuckerberg for not doing enough to keep certain content off the platform, while others raised concerns that Facebook had demonstrated political bias against the right when determining what content to take down. Similar concerns were voiced by Republicans at today’s hearing in the House Judiciary Committee focused on examining major internet platforms’ content moderation practices (despite the fact that claims of anti-conservative bias have been thoroughly debunked). Such concerns are not limited to the right wing, though–charges of racially-biased censorship have also been levelled from the left.
In response to these growing pressures–and in no small part thanks to years of consistent demands from free expression advocates–Google and Facebook this week both took major strides towards “doing the right thing” and promoting greater transparency around their content moderation practices, in ways that mirror what we were advocating for in our previous article.
First, on Monday afternoon, Google released the industry’s first detailed transparency report focused on content moderation, giving statistics about YouTube content removals based on violations of the service’s Community Guidelines. Among other things, the report highlights the total number of videos removed in the last quarter of 2017 (a staggering 8,284,039 videos), the percentage of videos flagged by human users versus YouTube’s automated flagging systems (the robots flagged four times as many videos as the humans), and a percentage breakdown of the different reasons human flaggers had flagged content (spam, sexual content, hate speech, terrorist content, and so on). This is the first time any company has published this sort of data at this level of detail–and now that YouTube has taken the first step, it certainly won’t be the last company to do so.
Soon after YouTube’s trailblazing transparency report, on Tuesday morning, Facebook made a major announcement of its own. The company published a much more comprehensive version of its Community Standards, including the detailed internal guidelines the company uses to make moderation decisions, and highlighting the “spirit” of its content policies in order to generate greater understanding about why and how the company removes content. In addition, for the first time, the company is giving users the ability to appeal takedown decisions made on individual posts. Appealed posts will be reviewed by a human moderator on the company’s appeals team within 24 hours. Prior to this announcement, users could appeal the removal of pages and groups, but the introduction of this process for individual posts is a valuable step towards providing users with greater agency over their content and more engagement in the moderation process.
Taken together, these moves have sharply increased both the quantitative transparency (Google’s numbers) and the qualitative transparency (Facebook’s explanations) around content takedowns, while also improving due process around those takedowns (Facebook’s new appeals). These are both critical first steps, but there is definitely more to be done. For example, although YouTube published a significant amount of data about the types of objectionable content removed as a result of human flagging, it does not provide similar data for content flagged by its automated systems, which is especially concerning since those automated systems flagged the vast majority of objectionable content. Meanwhile, although Facebook’s introduction of an appeals process is a valuable step towards providing users with stronger due process, it currently only applies to hate speech, graphic violence, and nudity/sexual activity, which have been the most controversial categories of objectionable content. In order for this process to be truly impactful, it needs to apply to all forms of content that are being taken down–and it needs to give impacted users a way to argue their case for why their content should stay up.
Going forward, Facebook and Google also need to take a page out of each other’s books. Like Google, Facebook needs to start reporting quantitative data on its takedowns and how they have impacted different categories of objectionable content, not only for its core platform but also for its other products like WhatsApp and Instagram. Similarly, Google needs to provide users with greater qualitative insight into the guidelines that drive content takedowns, just as Facebook has. It should also expand its takedown reporting to include other Google products and services such as Google+ and the Google Play store. Doing so could help pressure Apple to similarly report on takedowns in its App Store, thereby further expanding transparency reporting in this space.
And that’s the real value of these new steps, beyond the transparency itself: Google and Facebook’s new efforts will hopefully push the rest of the industry to compete with them on transparency. Google’s first innovations around transparency reporting on government surveillance demands nearly a decade ago helped set the stage for a domino effect of widespread adoption once the Snowden surveillance scandal broke, as detailed in this timeline and case study on the spread of that reporting practice. In this political moment of “techlash” that has now been turbo-charged by the Cambridge Analytica scandal, the adoption of strong content moderation transparency practices may happen even faster–but only if policymakers and advocates keep demanding it. That includes voices that have been pressing on this issue for years such as the ACLU of Northern California, the Electronic Frontier Foundation, our own organization the Open Technology Institute, and the Ranking Digital Rights project (which just yesterday released its third annual ranking of how well tech companies are protecting users’ human rights. Spoiler alert: they’re not doing so great). And since we’re catching this practice at its beginning, perhaps with the right pressure we can not only get all the companies to issue reports but also get them to standardize their reporting formats. Otherwise we may end up with the same crazy quilt of formats that we have in other areas of transparency reporting, which makes it that much harder to meaningfully compare and combine data.
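To make that standardization point concrete, here is a minimal, purely illustrative sketch (in Python) of what a shared, machine-readable takedown-report schema could look like. The field names and figures are invented for this example and do not correspond to any company’s actual reporting format.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical schema -- every field name and number below is invented for
# illustration and does not reflect any company's actual reporting format.
@dataclass
class TakedownRecord:
    platform: str      # e.g. "YouTube" or "Facebook"
    period: str        # reporting quarter, e.g. "2017-Q4"
    category: str      # e.g. "spam", "hate_speech", "terrorist_content"
    flag_source: str   # "human" or "automated"
    removals: int      # number of items removed

def total_removals(reports: List[TakedownRecord], category: str) -> int:
    # With a shared schema, combining data across companies is a one-liner
    # rather than a format-translation project.
    return sum(r.removals for r in reports if r.category == category)

if __name__ == "__main__":
    sample = [
        TakedownRecord("YouTube", "2017-Q4", "spam", "automated", 1_000_000),
        TakedownRecord("Facebook", "2017-Q4", "spam", "human", 250_000),
    ]
    print(total_removals(sample, "spam"))  # prints 1250000
```

The point of the sketch is simply that a common format makes cross-platform comparison trivial; without one, researchers have to hand-translate each company’s idiosyncratic report before any comparison is possible.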
More than pressure, though, we’ll also need continued dialogue with the companies, to better understand how their content moderation and reporting processes do and don’t work, what their biggest challenges are when moderating at scale, and where they think the technology and practice of content moderation and reporting is heading. That’s why our organization, along with many others, is co-hosting the second Content Moderation at Scale Conference in Washington, DC on May 7, where representatives from a wide range of tech companies both big and small will be talking in detail and on the record about their internal content moderation processes (the conference will be livestreamed, and Techdirt’s Mike Masnick will be co-running a session on some of the challenges of content moderation).
We may see even more dominoes fall at that conference, with fresh announcements about increased transparency and due process around content moderation on even more platforms. Let’s hope so, because internet users deserve to know more about exactly when and how their online expression is censored.
Filed Under: appeals process, content moderation, due process, takedowns, transparency