The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Content Moderation Beyond Platforms: A Rubric

from the guiding-questions dept

For decades, EFF and others have been documenting the monumental failures of content moderation at the platform level—inconsistent policies, inconsistently applied, with dangerous consequences for online expression and access to information. Yet despite mounting evidence that those consequences are inevitable, service providers at other levels are increasingly choosing to follow suit.

The full infrastructure of the internet, or the “full stack,” is made up of a range of entities, from consumer-facing platforms like Facebook or Pinterest, to ISPs like Comcast or AT&T. Somewhere in the middle are a wide array of intermediaries, such as upstream hosts like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let’s Encrypt), content delivery networks (CDNs), payment processors, and email services.
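
To make these layers concrete, here is a minimal Python sketch, using only the standard library, of two of the intermediaries a single HTTPS page view silently relies on: the domain name system and a certificate authority. It is a rough illustration, not a map of any real site's infrastructure; example.org is just a placeholder domain.

```python
# A rough sketch of two invisible intermediaries behind one HTTPS request.
# "example.org" is only a placeholder; the layers are real, the site is not special.
import socket
import ssl

host = "example.org"

# 1. Domain name system: a registrar and a resolver map the name to an address.
addr = socket.gethostbyname(host)
print(f"DNS resolved {host} -> {addr}")

# 2. Certificate authority: the TLS handshake succeeds only because a CA
#    (for example, Let's Encrypt) vouched for the site's certificate.
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        issuer = dict(item[0] for item in tls.getpeercert()["issuer"])
        print("Certificate issued by:", issuer.get("organizationName"))

# Below these sit the upstream host (such as AWS), any CDN in front of it,
# and the ISPs carrying the packets. A denial of service at any one of
# these layers takes the site offline for everyone.
```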

For most of us, most of the stack is invisible. We send email, tweet, post, upload photos and read blog posts without thinking about all the services that help get content from the original creator onto the internet and in front of users’ eyeballs all over the world. We may think about our ISP when it gets slow or breaks, but day-to-day, most of us don’t think about intermediaries like AWS at all—until AWS decides to deny service to speech it doesn’t like, as it did with the social media site Parler, and that decision gets press attention.

Invisible or not, these intermediaries are potential speech “chokepoints” and their choices can significantly influence the future of online expression. Simply put, platform-level moderation is broken and infrastructure-level moderation is likely to be worse. That said, the pitfalls and risks for free expression and privacy may play out differently depending on what kind of provider is doing the moderating. To help companies, policymakers and users think through the relative dangers of infrastructure moderation at various levels of the stack, here’s a set of guiding questions.

  1. Is meaningful transparency, notice, and appeal possible? Given the inevitability of mistakes, human rights standards demand that service providers notify users that their speech has been, or will be, taken offline, and offer them an opportunity to seek redress. Unfortunately, many services have no direct relationship with either the speaker or the audience for the expression at issue, making all of these steps challenging. But without them, users will be held not only to their host’s terms and conditions but also to those of every service in the chain from speaker to audience, even though they may not know what those services are or how to contact them. Given the potential consequences of violations, and the difficulty of navigating the appeals processes of previously invisible services (assuming such a process even exists), many users will simply avoid sharing controversial opinions altogether. Relatedly, where a service provider has no relationship with the speaker or audience, takedowns will be much easier and cheaper than a nuanced analysis of a given user’s speech.

  2. Do viable competitive alternatives exist? One of the reasons net neutrality rules for ISPs are necessary is that users have so few options for high-quality internet access. If your ISP shuts down your account based on your expression (or that of someone else using the account), in much of the world, including the U.S., you can’t simply go to another provider. At other layers of the stack, such as the domain name system, there are multiple providers to choose from, so a speaker whose domain name is frozen can take their website elsewhere. But the existence of alternatives alone is not enough; answering this question also requires evaluating the costs of switching and whether it demands technical savvy beyond the skill set of most users.

  3. Is it technologically possible for the service to tailor its moderation practices to target only the specific offensive expression? At the infrastructure level, many services cannot target their response with the precision human rights standards demand. Twitter can block a specific tweet; Amazon Web Services can only deny service to an entire site, so its actions inevitably affect far more than the objectionable speech that motivated them. We can take a lesson here from the copyright context, where domain name registrars and hosting providers have shut down entire sites in response to infringement notices targeting a single document. Some services may be able to communicate directly with customers when they are concerned about a specific piece of content, and request that it be taken down. But if that request is rejected, the service has only the blunt instrument of complete removal at its disposal (the first sketch after this list illustrates the difference in granularity).

  4. Is moderation an effective remedy? The U.S. experience with online sex trafficking teaches that removing distasteful speech may not have the hoped-for impact. In 2017, Tennessee Bureau of Investigation special agent Russ Winkler explained that online platforms were the most important tool in his arsenal for catching sex traffickers. Today, legislation designed to prevent the use of online platforms for sex trafficking (SESTA/FOSTA) has made it harder for law enforcement to find traffickers. Indeed, several law enforcement agencies report that without these platforms, their work finding and arresting traffickers has hit a wall.

  5. Will collateral damage, such as the stifling of lawful expression, disproportionately affect less powerful groups? Moderation choices may reflect and reinforce bias against marginalized communities. Take, for example, Facebook’s decision, in the midst of the #MeToo movement’s rise, that the statement “men are trash” constitutes hateful speech. Or Twitter’s decision to use harassment provisions to shut down the verified account of a prominent Egyptian anti-torture activist. Or the content moderation decisions that have prevented women of color from sharing the harassment they receive with their friends and followers. Or Twitter’s decision to mark tweets containing the word “queer” as offensive, regardless of context. As with the competition inquiry, this analysis should consider whether the affected speakers and audiences will be able to respond and/or find effective alternative venues.

  6. Is there a user- and speech-friendly alternative to central moderation? Could there be? One of the key problems of content moderation at the social media level is that the moderator substitutes its policy preferences for those of its users. When infrastructure providers enter the game, with generally less accountability, users have even less ability to make their own choices about their own internet experience. If there are tools that allow users themselves to express and implement their own preferences, infrastructure providers should return to the business of serving their customers, and policymakers have a weaker argument for imposing new requirements (the second sketch after this list shows what such user-side tools might look like).

  7. Will governments seek to hijack any moderation pathway? We should be wary of moderation practices that give state and state-sponsored actors additional tools for controlling public dialogue. Once processes and tools to take down expression are developed or expanded, companies can expect a flood of demands to apply them to other speech. At the platform level, state and state-sponsored actors have weaponized flagging tools to silence dissent. In the U.S., the First Amendment and the safe harbor of Section 230 largely prevent moderation requirements. But policymakers have started to chip away at Section 230, and we expect to see more efforts along those lines. In other countries, such as Canada, the U.K., Turkey and Germany, policymakers are contemplating or have adopted draconian takedown rules for platforms and would doubtless like to extend them further.
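
To make question 3's granularity problem concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the Site type, the function names, and the domain are illustrative stand-ins, not any real provider's API. The point is structural: a platform can act on a single post, while an upstream host can generally act only on the whole site.

```python
# Hypothetical sketch of moderation granularity at two layers of the stack.
# The Site type, function names, and domain are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class Site:
    domain: str
    posts: dict[str, str] = field(default_factory=dict)  # post_id -> content

def platform_takedown(site: Site, post_id: str) -> None:
    """A platform can target one post and leave the rest untouched."""
    site.posts.pop(post_id, None)

def infrastructure_takedown(site: Site) -> None:
    """An upstream host has only the blunt instrument: the whole site goes."""
    site.posts.clear()

site = Site("example-forum.test",
            {"p1": "objectionable", "p2": "lawful", "p3": "lawful"})
platform_takedown(site, "p1")
print(len(site.posts))  # 2: the lawful speech survives
infrastructure_takedown(site)
print(len(site.posts))  # 0: everything is collateral damage
```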
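
And for question 6, a sketch of what user-side preference tools might look like, again with every name hypothetical. The filtering happens at the edge, under the user's control, so the provider never has to substitute its judgment for the user's.

```python
# Hypothetical sketch: moderation as a user-side preference rather than a
# provider-side decision. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def visible_to_user(post: Post, muted_authors: set[str],
                    muted_words: set[str]) -> bool:
    """Each user decides what to hide; the provider delivers everything."""
    if post.author in muted_authors:
        return False
    return not any(word in post.text.lower() for word in muted_words)

feed = [Post("alice", "a post I want to see"),
        Post("spammer", "BUY NOW!!!")]
mine = [p for p in feed
        if visible_to_user(p, muted_authors={"spammer"}, muted_words=set())]
print([p.author for p in mine])  # ['alice']: my feed, my rules
```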

Companies should ask all of these questions when they are considering whether to moderate content (in general or as a specific instance). And policymakers should ask them before they either demand or prohibit content moderation at the infrastructure level. If more than two decades of social media content moderation has taught us anything, it is that we cannot “tech” our way out of political and social problems. Social media companies have tried and failed to do so; infrastructure companies should refuse to replicate those failures—beginning with thinking through the consequences in advance, deciding whether they can mitigate them and, if not, whether they should simply stay out of it.

Corynne McSherry is the Legal Director at EFF, specializing in copyright, intermediary liability, open access, and free expression issues.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we'll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.


Filed Under: collateral damage, competition, content moderation, free speech, infrastructure, privacy, transparency


Reader Comments



  1. GHB (profile), 23 Sep 2021 @ 5:20pm

    The internet is a mixture of good and bad...

    The internet is a mixture of good and bad; it is stupid to nuke the whole thing just because one part of it is “bad”.

    Also, give praise to Hurricane Electric for standing up against the demands of the record labels. They KNEW it is a f**ked-up idea for a huge service with a huge number of customers to be obligated to terminate an entire group of customers (customers of customers, meaning one account representing hundreds of others).

    ANYTHING can be abused, both products and services. Anyone who believes that intermediaries should also police things that should be the site's duty (like removing infringing content from the page) is an idiot. You might as well ban chair companies because chairs can be used as a weapon to harm others, or sue electric companies for serving electricity to a criminal's house while that person attempts to hack others' PCs, and go to war against


  2. GHB (profile), 23 Sep 2021 @ 5:22pm

    Re: The internet is a mixture of good and bad...

    Silly mistake, I meant movie companies, not record labels. Confused the two because they're draconian maximalist lobbyists.


  3. GHB (profile), 23 Sep 2021 @ 5:23pm

    Re: Re: The internet is a mixture of good and bad...

    ...for copyright. PLZ techdirt, let me edit my posted comments.


