The Inexorable Push For Infrastructure Moderation
from the it's-coming-whether-we-like-it-or-not dept
I’m grateful to Techdirt and the EFF for this series. There are so many legitimately difficult issues around content moderation at the application layer—that is, on (and usually by) platforms like Facebook and Twitter. And they can crowd out the problems around the corner that are at least as difficult: those of code and content moderation at the infrastructural level, such as the wholesale platforms (Amazon Web Services, for example) that host websites; the domain name registries that support the use of domain names; and the app stores from Apple and Google that largely determine what applications users can choose to run.
To be sure, the line between infrastructure and application can be blurry. For example, text messaging via SMS is offered as a bundled service by providers of mobile phone services like AT&T and Verizon. These services are usually thought of as infrastructural—while users of iPhones experience iMessage as an application that supplants SMS for inter-iOS text exchanges, with a fallback to SMS for participants who don’t use iOS.
Perhaps the distinction lies as much in the dominance of a service as it does in its position within a layered stack. Informally surveying students in courses on digital governance, I’ve found increasing appetite for content moderation by Facebook of users’ news feeds and within Facebook groups—say, to remove blatant election disinformation, such as asserting that the polls will be open on the wrong day in order to depress turnout—while the same students largely refuse to countenance moderation by telecommunications companies if the same disinformation were sent by SMS. Facebook Messenger remains a tossup.
However fuzzy the definitions, infrastructural moderation is a natural follow-on to application-level moderation. Historically there hasn’t been much pressure for infrastructural moderation given that many critics and companies traditionally saw “mere” application-layer moderation as undesirable—or, at least, as a purely private matter for whoever runs the application to decide upon within its terms of service for its users.
Part of that long-term reluctance of public authorities to pressure companies like Facebook for greater moderation has been a solicitude for how difficult it is to judge and deal with flows of user-submitted content at scale. When regulators thought they were choosing between a moderation requirement that would cause a company to shut down its services and abstention that would allow various harms to accrue, many opted for the second.
For example, the “notice-and-takedown” provisions of the U.S.’s 1998 Digital Millennium Copyright Act—which have encouraged content aggregators like YouTube to take down allegedly copyright-infringing videos and music after a claim has been lodged—are, for all the instances of wrongly removed content, comparatively light-touch. Major services eventually figured out that they could offer claimants a “monetize” button, so that content could stay up and some ad revenue from it could be directed to presumed copyright holders rather than, say, to whoever uploaded the video.
And, of course, the now widely-debated Section 230 of the Communications Decency Act, of the same vintage as the DMCA, flatly foreclosed many avenues of potential legal liability for platforms for illegal content other than copyrighted material, such as defamatory statements offered up by some users about other users.
As the Internet entered the mainstream, aside from the acceptance of content moderation at scale as difficult, and the corresponding reluctance to impinge upon platforms’ businesses, there was a wide embrace of First Amendment values as articulated in Supreme Court jurisprudence of the 1960s and 70s. Simplifying a little, this view allows that, yes, there could be lots of bad speech, but it’s both difficult and dangerous to entrust government to sift out the bad from the good, and the general solution to bad speech is more speech. So when it came to online speech, a marketplace-of-ideas-grounded argument I call the “rights” framework dominated the landscape.
That framework has greatly eroded in the public consciousness since its use to minimize Internet companies’ liabilities in the late 1990s and early 2000s. It’s been eclipsed by what I call the “public health” framework. I used the label before it became a little too on the nose amidst a global pandemic, but the past eighteen months’ exigencies are a good example of this new framework. Rights to, say, bodily integrity, so hallowed as to allow people to deny the donation of their bodily organs when they die to save others’ lives, yield to a more open balancing test when a “right” to avoid wearing a mask, or to decline a vaccination, has such clear knock-on effects on others’ health.
In the Internet context, there’s been a recognition of the harms that flow from untrammeled speech—and the viral amplification of the same—made possible at scale by modern social media.
It was, in retrospect, easy for the Supreme Court to extol the grim speech-affirming virtue of allowing hateful placards to be displayed on public sidewalks adjacent to a private funeral (as the Westboro Baptist Church has been known to do), or anonymous pamphlets to be distributed on a town common or at a public meeting, despite laws to the contrary.
But the sheer volume and cacophony of speech from unknown sources that bear little risk of enforcement against them even if they should cross a line challenge those easy cases. Whether it’s misinformation, whose volume and scope can be so great that people either are persuaded by junk or, worse, become wrongly skeptical of every single source they encounter, or harassment and abuse that silences the voices of others, it’s difficult to say that the marketplace of ideas is surfacing only the most compelling ideas.
With a public health framework newly ascendant for moderation at the application layer, we see new efforts by platform operators to tighten up their terms of service, if only on paper, choosing to forbid more speech over time. That includes speech that, if the government were to pursue it, would be protected by the First Amendment (a lot of, say, misinformation about COVID and vaccines would fit this category of “lawful but awful”).
Not coincidentally, regulators have a new appetite for regulation, whether because they’re convinced that moderation at scale, with the help of machine learning tools and armies of moderators, is more possible than before, or because there’s a genuine shift in values or their application that militates towards interventionism in the name of public health, literally or metaphorically.
Once the value or necessity of moderation is accepted at the application layer, the inevitable leakiness of it will push the same kinds of decisions onto providers of infrastructure. One form of leakiness is that there will be social media upstarts who try to differentiate their services on the basis of moderating less, such as Parler. That, in turn, forced Apple and Google, operating their respective app stores for iOS and Android, to consider whether to deny Parler access to those stores unless it committed to meeting minimum content moderation standards. The companies indeed removed the Parler app from their stores, while Amazon, which offers wholesale hosting services for many otherwise-unrelated web sites and online services, suspended its hosting of Parler in the wake of the January 6th insurrection at the Capitol.
Another form of leakiness of moderation is within applications themselves, as the line between publicly-available and private content becomes more blurred. Facebook aims to apply its terms of service not only to public posts, but also to those within private groups. To enforce its rules against the latter, Facebook must either peek at what’s going on within them—perhaps only through automated means—or field reports of rule violations from members of the groups themselves.
Beyond private groups are services shaped to appear more as private group messaging than as social networks at all. Whether through Facebook’s own Messenger, with new options for encryption, or through other apps such as Telegram, Facebook’s WhatsApp, or the open-source Signal, there’s the prospect that strangers sharing a cause can meet one another on a social network and then migrate to large private messaging groups whose infrastructure is encrypted.
Indeed, there’s nothing stopping people from choosing to gather and have a conversation within World of Warcraft, merely admiring the view of the game’s countryside as they chat about sports, politics, or alleged terrorist schemes. A Google Doc could serve the same function, if with less of a scenic backdrop. At that point content moderation either must be done through exceptions to any encryption that’s offered—so-called backdoors—or through bot-driven client-side analysis of what people are posting before it moves from, say, their smartphones onto the network.
That’s a rough description of what Apple has been proposing to do in order to monitor users’ private iCloud accounts for illegal images of child sexual abuse, using a combination of privileged access to data from the phone and a database of known abusive images compiled by child protection organizations to ascertain matches. Apple has suspended plans to implement this scanning after an outcry from some members of the technical and civil liberties communities. Some of that pushback has been around implementation details and worries about misidentification of lawful content, and Apple has offered rejoinders to those worries.
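To make the mechanism concrete, here is a minimal sketch of the general client-side matching idea, with a hypothetical blocklist and function names of my own invention. Apple’s actual design relies on a perceptual “NeuralHash,” private set intersection, and a reporting threshold rather than the plain hash lookup shown here; this is only an illustration of checking content on the device before it leaves it.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of fingerprints of known prohibited images.
# (Apple's real system matches perceptual hashes against a vetted database;
# a plain SHA-256 lookup like this would miss any re-encoded copy.)
KNOWN_BAD_FINGERPRINTS = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_path: Path) -> str:
    """Fingerprint the image file's raw bytes (a toy stand-in for a
    perceptual hash that would tolerate resizing or recompression)."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def may_upload(image_path: Path) -> bool:
    """Client-side check run before an image leaves the device: True means
    upload normally, False means the image matched the blocklist and would
    be flagged instead of being uploaded."""
    return fingerprint(image_path) not in KNOWN_BAD_FINGERPRINTS
```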
But more fundamentally, the civil liberties worry is that this form of scanning, once a commonplace for a narrow and compelling purpose, will find new purposes, perhaps against political dissidents, whose speech—and apps—can readily be deemed illegal by a government that does not embrace the rule of law. This happened recently when the Russian government prevailed on both Apple and Google to remove an app by opposition leader Aleksei Navalny’s movement designed to encourage strategic voting against the ruling party.
We’ve seen worries about scope creep around the formation and development of ICANN, a non-profit that manages the allocation of certain Internet-wide identifiers, such as top-level domains like .com and .org. Through its ability to choose who operates domain registries like those, ICANN can require such registries to in turn compel domain name registrants to accept a dispute resolution process if someone else makes a trademark-like claim against a registration (that’s how, early on, the holder of madonna.com was dispossessed of the name after a complaint by Madonna).
The logical concern was that the ability for registries to yank domain names under pressure from regulators would go beyond trademark-like disputes over the names themselves, and into the activities and content of the sites and services those names point to. For the most part that hasn’t happened—at least not through ICANN. Hence the still surprisingly common persistence of domains that serve command-and-control networks for botnets or host copyright-infringing materials.
Nonetheless, if content moderation is important to do, the fact is that it will be difficult to keep it to the application layer alone. And today there is more of a sense that there isn’t such a thing as the neutral provision of services. Before, makers of products ranging from guns to VCRs offered arguments like those of early Internet platforms: to hold them liable for what their customers do would put them out of business. They disclaimed responsibility for the use of their products for physical assault or copyright infringement, respectively, since those uses took place long after the products left the makers’ factories and thus their control, and there weren’t plausible ways to shape the technologies themselves at the factory to carve away future bad uses while preserving the good ones.
As the Internet has allowed products to become services, constantly checking in with and being adapted by their makers, technology vendors don’t say goodbye to their work when it leaves a factory. Instead they are offering it anew every time people use it. For those with a public health perspective, the ability of vendors to monitor and shape their services continuously ought at times to be used for harm reduction in the world, especially when those harms are said to be uniquely made possible by the services themselves.
Consider a 2021 Texas law allowing anyone to sue anyone else for at least $10,000 for “aiding” in the provision of most abortions. An organization called Texas Right to Life created a web site soliciting “whistleblowers” to submit personal information of anyone thought to be a suitable target under the new law—a form of doxxing. The site was originally hosted by GoDaddy, which pulled the plug on the basis that it collected information about people without their consent.
Now consider the loose group of people calling themselves Sedition Hunters, attempting to identify rioters at the Capitol on January 6th. They too have a web site linking out to their work. If they were to solicit tips from the public—which at the moment they don’t do—should their site host treat them similarly?
Those identifying with a rights framework might tend to think that in both instances the sites should stay up. Those worrying about private doxxing of any kind might think they should be taken down. And others might draw distinctions between a site facilitating application of a law that, without a future reversal by the Supreme Court, is clearly unconstitutional, and those uniting to present authorities with possible instances of wrongdoing for further investigation.
As the public health framework continues to gain legitimacy, and the ability of platforms to intervene in content at scale grows, blanket invocations of rights will not alone blunt the case for content moderation. And the novelty of regulating at the infrastructural level will not long hold back the pressures that will follow there, especially as application-layer interventions begin to show their limits. Following in the model of Facebook’s move towards encryption, there could come to be infrastructural services that are offered in a distributed or anonymized fashion to avoid the possibility of recruitment for regulation. But as hard as these problems are, they seem best solved through reflective consensus rather than technical fiat in either direction.
Jonathan Zittrain is George Bemis Professor of Law and Professor of Computer Science at Harvard University, and a co-founder of its Berkman Klein Center for Internet & Society.
Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we'll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.
Filed Under: balancing test, content moderation, governance, infrastructure, public health, rights framework, takedowns
Reader Comments
Argument from inevitability
Seriously, I loathe the "we must all jump off bridges because everybody else is doing it" logic. If people need to argue inevitability to get others /to start/, it is bullshit doomed to failure. We have seen how "inevitable" communism and fascism went - into their graves.
If the Internet is to be moderated by law, will that requirement be extended to real world places like pubs, clubs and cafes? Also, why is the medium under attack, when it is people, and in particular TV personalities and politicians, who are driving the spread of misinformation and divisive information?
Amendment one is amendment one for the same reason it is always the first target in any takeover - to limit your adversaries' communication and assembly capabilities...
When you eliminate the impossible, what remains...
The belief that freedom of speech causes harm is a delusion. Never -- not even once -- is the speech to blame. Never -- not even once -- is the censorship actually "necessary". Nor is it ever anything but a distraction from the deeper problem.
If people are being bullied, such as being driven out of their employment, this does not mean there is too much freedom of speech on the internet. What it means is that there is too little infrastructure for standing up against bullies! There should be support for individuals who are victimized, for companies that stand up to the mob and say they make their own hiring decisions, for sensible people who argue the simple common sense that somebody's political beliefs or revenge-porn-posting boyfriend has nothing to do with whether they do a good job teaching Algebra II.
If the crowds are believing nonsense and acting on it, that is an even stronger indication they do not have too much free speech! To the contrary, it means - it PROVES - that they do not have enough free speech to be reading and hearing and talking to people who have more sensible ideas! Some company is dominating their view of the world, imposing algorithms and upvotes from PR gangs, HIDING the truth from them. We need freedom to find the truth and get it to them.
What do you think?
Would Elon Musk be conniving to put a chip in your head in order that you can turn right at the next intersection? You can bet wherever the proles mingle they will be under attack, just like in any other fascist utopia.