from the complexity-is-messy dept
The signs were clear from the start: at some point, content moderation would inevitably move beyond user-generated platforms and down into the infrastructure, the layer where providers operate the heavy machinery of the Internet and without which user-facing services cannot function. Ever since 2010, when Amazon stopped hosting Wikileaks after US political pressure in an often-forgotten incident, there has been a steady uneasiness about the role infrastructure providers could end up playing in the future of content moderation.
A glimpse of what this would look like came in 2017, when companies like Cloudflare and GoDaddy took direct action against content they considered problematic for their business models, in that case white supremacist websites that had been the subject of massive public opprobrium. Since then, that future has become the present, as the list of infrastructure companies performing content moderation functions keeps growing.
Content moderation has two inherent qualities that provide important context.
First, content moderation is complex in real-world process design and implementation. A host of conflicting rights, diverse procedural norms and competing interests come into play every time content is posted on the Internet; each case is unique and, on some level, should be treated as such.
Second, content moderation is messy because the world is messy: the global nature of the Internet, economies of scale, societal realities and cultural differences create a multi-layered set of considerations that are difficult to reconcile.
The bright spot in all this messiness and complexity is the hope of due process and the rule of law. The theory is that, in healthy and competitive markets, users have choice and therefore it becomes more difficult for any mistakes to scale. So, if a user’s post gets deleted on one platform, the user should have the option of posting it someplace else.
Of course, such markets are difficult to achieve, and the current Internet market is certainly not in this category. But the point is that it is one thing to have one of your posts removed from Facebook and quite another to go completely offline because Cloudflare stops providing you its services. The stakes are completely different.
For a long time, infrastructure providers were smart enough to stay out of the content business. The argument was that the actors responsible for the pipes of the Internet should not concern themselves with the kind of water that runs through them. Their agnosticism was reinforced by the fact that their main focus was providing other services, including security, network reliability and performance.
However, as the Internet evolved, so did the infrastructure providers’ relationship with content.
In the early days of content moderation, what constituted infrastructure was more discernible and structured. People would usually invoke the Open Systems Interconnection (OSI) model as a useful analogy, especially with policy makers trying to identify the roles and responsibilities various companies held in the Internet ecosystem.
The Internet of today, however, is very different. Its layers are no longer cleanly distinguishable and, in many cases, participating actors operate at both the infrastructure and the application layers at once. At the same time, as Internet applications gained in popularity and use, innovation started moving up the stack.
“Infrastructure” is now nested on top of other “infrastructure,” all within just layer 7 of the OSI stack, the application layer. Things are not as clear-cut as they once were.
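To make the layering point concrete, here is a minimal sketch, in Python and assuming the third-party requests library, that inspects the HTTP response headers a single web request picks up on its way to an origin server. Each header is a fingerprint of a distinct layer-7 intermediary, such as a CDN edge, a reverse proxy or a cloud load balancer; the header names below are illustrative examples, since they vary by provider.

```python
# A minimal sketch of how much "infrastructure" can sit inside layer 7 alone.
# It fetches a URL and looks for response headers that common intermediaries
# (CDNs, reverse proxies, cloud load balancers) tend to leave behind.
import requests

# Headers commonly set by layer-7 infrastructure in front of an origin server.
# This list is illustrative, not exhaustive or authoritative.
INTERMEDIARY_HEADERS = [
    "server",        # often names a proxy or platform (e.g. cloudflare, nginx)
    "via",           # standard header recording intermediate proxies
    "cf-ray",        # set by Cloudflare's edge network
    "x-served-by",   # set by some CDNs (e.g. Fastly)
    "x-cache",       # cache status from a CDN or caching proxy
]

def layer7_fingerprints(url: str) -> dict:
    """Return any intermediary-related headers present in the response."""
    response = requests.get(url, timeout=10)
    # requests exposes headers as a case-insensitive mapping.
    return {
        name: response.headers[name]
        for name in INTERMEDIARY_HEADERS
        if name in response.headers
    }

if __name__ == "__main__":
    # Any public website works here; the point is how many distinct
    # layer-7 services may have handled this single request.
    print(layer7_fingerprints("https://www.example.com"))
```

A single request may surface several distinct services this way, which is precisely why "infrastructure" and "application" are no longer clean categories.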
This suggests that, in some ways, we should not be surprised that content moderation conversations are gradually moving downstream. A cloud provider that supports a host of different websites, platforms, news outlets and businesses will inevitably have to deal with questions of content.
A content delivery network (CDN) will unquestionably face, at some point, the moral dilemma of providing its services to businesses that walk a tightrope with harmful or even illegal content. It comes down to a simple equation: if user-generated platforms don’t do their job, infrastructure providers will have to do it for them. And they do, increasingly often.
If this is the reality, the question becomes: what is the best way for infrastructure providers to moderate, given current content moderation practices, the significant chilling effects, and the often-overlooked trade-offs?
If we are to follow the “framework, tools, principles” triad, we should be mindful not to reinvent the existing ecosystem. Content moderation is not new and, over the years, a combination of laws and self-regulatory norms has ensured a relatively consistent, predictable and stable environment, at least most of the time.
Section 230 of the CDA in the US, the eCommerce Directive in Europe, Marco Civil in Brazil and other laws around the world have succeeded in creating a space where users and businesses can manage their affairs effectively, knowing that judicial authorities will treat their cases consistently.
For content moderation at the infrastructure level, a framework based on certainty and consistency is even more of a priority. Legal theory teaches that a lack of consistency can stunt the development of new norms or undermine the way existing ones manifest themselves. In a similar vein, a lack of certainty leaves actors unable to organize their affairs in a way that complies with the law. For infrastructure providers that support the basic, day-to-day functions of the Internet, such a framework is indispensable.
I often say that the Internet is not a monolith. This is not only to demonstrate that the Internet was never meant to do one single thing, but also to underline the importance of designing a legal framework that reflects this. When we talk about predictability and certainty, we must be conscious of putting in place requirements of clarity, stability and intelligibility, so that participating actors can make calculated and informed decisions about the legal consequences of their actions. That is what made Section 230 a success for more than two decades.
Frameworks without appropriate tools to implement and assess them, however, mean little. Tools are important because they can help maximize the benefits of processes, ultimately increasing flexibility, reducing complexity and ensuring clarity. Content moderation has consistently suffered from a lack of tools that can clearly demonstrate its effects. Think, for instance, of all the times content is taken down with no way to tell what the true effect is on free speech and on users.
In this context, we need to think of tools as things that would allow us to better understand the scale and chilling effects that content moderation at the infrastructure level causes. Here is what I wrote about this last year:
“A starting point is to perform a regulatory impact assessment for the Internet. It is a tested method of policy analysis, intended to assist policy makers in the design, implementation and monitoring of improvements to the regulatory system; it provides the methodology for producing high quality regulation, which can, in turn, allow for sustainable development, market growth and constant innovation. A regulatory impact assessment constitutes a tool that ensures regulation is proportional (appropriate to the size of the problem it seeks to address), targeted (focused and without causing any unintended consequences), predictable (it creates legal certainty), accountable (in terms of actions and outcomes) and, transparent (on how decisions are made).”
In this sense, tools and frameworks are co-dependent.
Finally, the legitimacy of any framework and of any tools depends on the existence of principles. In content moderation, it is not the lack of principles that is the problem; on the contrary, it is the abundance of them. Big companies have their own Terms of Service (ToS), states operate within their own legal frameworks, and then there is the Internet, which is designed under its own set of principles. Too many principles inevitably create too many conflicts and, therefore, consensus becomes important.
The Santa Clara Principles on transparency and accountability in content moderation embody that consensus. Negotiated through a collaborative and inclusive process, they offer a roadmap for content moderation and remove certain obstacles along the way. Their strength lies in their simplicity and straightforwardness. In a similar vein, the properties of the Internet constitute a solid guide for understanding the potential unintended consequences of content moderation by infrastructure providers.
The design of the Internet is very specific and adheres to some normative principles that have existed ever since its inception. In fact, the Internet’s blueprint has not changed much since it was sketched a few decades ago. Ensuring that these principles become part of the consideration process is key.
There are still plenty of unknowns in content moderation at the infrastructure layer. But there are also quite a few things we do know. The first is that scale is a significant factor. Moderating content down the stack is not just about speech; in many cases, it is about being present on the Internet at all.
The second thing we know is that the general principle that infrastructure actors should be able to provide their services agnostic of the content they carry should continue to hold as the default. There may be cases where they need to engage, but that should be the exception rather than the rule, and it should abide by the “frameworks, tools and principles” identified above.
Finally, the third thing we know is that content moderation is evolving in ways that could now directly affect the future of the Internet. Ensuring that the roles and responsibilities of infrastructure providers are appropriately scoped will be content moderation’s greatest challenge yet!
Konstantinos Komaitis is a veteran of developing and analysing Internet policy to ensure an open and global Internet. Konstantinos spent almost ten years in active policy development and strategy as a Senior Director at the Internet Society, and is currently a policy fellow at the Brave New Software Foundation. Before that, he spent seven years researching and teaching at the University of Strathclyde in Glasgow, UK.
Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here).
Filed Under: complexity, content moderation, impact assessment, infrastructure