Our Bipolar Free-Speech Disorder And How To Fix It (Part 2)
from the the-free-speech-triangle dept
In Part 1 of this series, I discussed law professor Jack Balkin's model of "free speech as a triangle," in which each vertex of the triangle represents a group of stakeholders. The first vertex is government and intergovernmental actors; the second is internet platform and infrastructure providers; the third is users themselves. This "triangle" model of speech actors is useful because it lets us characterize the relationships among each set of actors, thereby illuminating how the regulation of speech has become more complicated than it used to be.
Take a look again at Balkin's Figure 1.
Visualizing all the players in the free-speech regulation landscape makes clear that a "free-speech triangle" captures more complexity than the usual speakers-against-the-government or speakers-against-the-companies or companies-against-the-government models. Even so, our constitutional law and legal traditions predispose us to think of these questions in binary rather than, uh, "trinary" terms. We've been thinking this way for centuries, and it's a hard habit to shake. But shaking the binary habit is a necessity if we're going to get the free-speech ecosystem right in this century.
To do this we first have to look at how we typically reduce these "trinary" models to the binary models we're more used to dealing with. With three classes of actors, there are three possible "dyads" of relationships: user–platform, government–platform, and user–government.
(a) Dyad 1: User complaints against platforms (censorship and data gathering)
Users' complaints about platforms may ignore or obscure the effects of government demands on platforms and their content-moderation policies.
Typically, public controversies around internet freedom of expression are framed as binary oppositions, both by news coverage and analysis and by stakeholders themselves. If there is a conflict over content between (for example) Facebook and a user, especially if it occurs more than once, that user may conclude that her content was removed for fundamentally political reasons. This perception may be exacerbated if the removal was framed as a violation of the platform's terms of service. A user subject to such censorship may believe that her content is no more objectionable than that of users who weren't censored, or that her content is being censored while content that is just as heated, but representing a different political point of view, isn't. Naturally enough, this outcome seems unfair, and a user may infer that the platform as a whole is politically biased against those who share her political beliefs. It should be noted that complaints about politically motivated censorship apparently come from most and perhaps all sectors.
A second complaint from users may derive from data collection by a platform. Data collection may not directly affect the content of a user's speech, but it may affect the kind of content she encounters, which, when driven by algorithms aimed at increasing her engagement on the platform, may serve not only to urge her into more (or more commercial) transactions, but also to "radicalize" her, anger her, or otherwise disturb her. Even if an individual judges herself more or less immune to algorithmically driven urges to view more and more radical and radicalizing content, she may be disturbed by the radicalizing effects that such content may be having on her culture generally. (See, e.g., Tufekci, Zeynep, "YouTube, the Great Radicalizer.") And she may be troubled by how an apparently more radicalized culture around her interacts with her in turn.
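To make the mechanism concrete, here is a minimal, purely illustrative sketch (in Python) of how a recommender that optimizes only for predicted engagement can quietly favor ever more provocative content. None of the field names, weights, or scores below come from any real platform; they are invented assumptions for the illustration.

```python
# Illustrative sketch only: a toy "engagement-first" recommender, not any
# real platform's algorithm. Fields, weights, and scores are invented.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    intensity: float      # 0.0 = mild, 1.0 = maximally provocative (hypothetical metric)
    base_interest: float  # how relevant the item is to this user's stated interests

def predicted_engagement(item: Item) -> float:
    # Toy assumption: provocative content earns extra clicks and watch time,
    # so an engagement-only objective quietly rewards intensity.
    return 0.4 * item.base_interest + 0.6 * item.intensity

def rank_feed(candidates: list[Item], k: int = 3) -> list[Item]:
    # Rank purely by predicted engagement; nothing in the objective
    # penalizes how extreme the selected content is.
    return sorted(candidates, key=predicted_engagement, reverse=True)[:k]

candidates = [
    Item("Local council meeting recap", intensity=0.1, base_interest=0.9),
    Item("Heated partisan monologue", intensity=0.8, base_interest=0.5),
    Item("Conspiracy-adjacent expose", intensity=0.95, base_interest=0.4),
]

for item in rank_feed(candidates):
    print(f"{item.title}: {predicted_engagement(item):.2f}")
```

The point of the toy example is simply that nothing in an engagement-only objective penalizes intensity, so the most provocative items can float to the top even when a user's stated interests point elsewhere.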
Users may be concerned both about censorship of their own content (censorship that may seem unjustified) and about platforms' use of data, which may seem designed to manipulate them or other people. In response, users (and others) may demand that platforms track bad speakers or retain data about who bad speakers are (e.g., to prevent bad speakers from abandoning "burned" accounts and returning with new accounts to create the same problems), as well as about what speakers say (so as to police bad speech more effectively). But a short-term campaign to pressure platforms to censor more or differently, or to gather less data (about users themselves) or more data (about how users' data are being used), has two likely and predictable outcomes. To the extent the companies respond to such pressure, governments may leverage the platforms' responses, first, to press for more control of user content (not always with the same concerns that individual users have) and, second, to demand user data (because governments like to invoke the "third-party" doctrine to get access to data that users have "voluntarily" left behind on internet companies' and platform providers' services).
(b) Dyad 2: Governments' demands on platforms (content and data)
Government efforts to impose new moderation obligations on platforms, even in response to user complaints, may result in versions of the platforms that users value less, as well as more pressure on government to intervene further.
In the United States, internet platform companies (like many other entities, including ordinary blog-hosting services and arguably bloggers themselves) find that their First Amendment rights are buttressed and extended by Section 230 of the Communications Decency Act, which generally prohibits content-based liability for those who reproduce on the internet content originated by others. Although a full discussion of the breadth of, and the exceptions to, Section 230—which was enacted as part of the omnibus federal Telecommunications Act reform in 1996—is beyond the scope of this particular paper, it is important to underscore that Section 230 extends protection for "intermediaries" more broadly than First Amendment case law alone might have done, judging by the relevant digital-platform cases prior to 1996. Still, the embryonic case law of those early years of the digital revolution seemed to be moving in a direction that would have given platforms at least some First Amendment protections, consistent with the principles that protect traditional bookstores from legal liability for the content of particular books. One of the earliest prominent cases concerning online computer services, Cubby v. CompuServe (1991), drew heavily on a 1959 Supreme Court case, Smith v. California, which established that bookstores and newsstands deserve First Amendment protections based on their importance to the distribution of First Amendment-protected content.
Section 230's broad, bright-line protections (taken together with the copyright-specific protections for internet platforms created by the Digital Millennium Copyright Act in 1998) are widely interpreted by legal analysts and commentators as having created the legal framework that gave rise to internet-company success stories like Google, Facebook, and Twitter. These companies, as well as a raft of smaller, successful enterprises like Wikipedia and Reddit, originated in the United States and were protected in their infancy by Section 230. Even critics of the platforms—and there are many—typically attribute the success of these enterprises to the scope of Section 230. So it's no great surprise to discover that many and perhaps most critics of these companies (who may be government actors or private individuals) have become critics of Section 230 and want to repeal or amend it.
In particular, government entities in the United States, at both the federal and the state level, have sought to impose greater obligations on internet platforms not merely to remove content that is purportedly illegal, but also to prevent that content from being broadcast by a platform in the first place. The notice-and-takedown model of the Digital Millennium Copyright Act of 1998, which lends itself to automated enforcement and remedies to a higher degree than non-copyright-related content complaints, is frequently suggested by government stakeholders as a model for how platforms ought to respond to complaints about other types of purportedly illegal content, including user-generated content. That copyright enforcement, as distinct from enforcement of other communications-related crimes or private causes of action, is comparatively much simpler than most other remedies in communications law is a fact typically passed over by those who are unsympathetic to today's social-media landscape.
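To see why copyright complaints automate so much more readily than other content complaints, consider the following minimal sketch. It assumes a hypothetical fingerprint database and uses an exact hash purely for illustration (real systems rely on perceptual fingerprinting, and all of the names here are invented).

```python
# Illustrative sketch only: why copyright-style matching automates easily.
# The fingerprint set, sample bytes, and function names are hypothetical.

import hashlib

KNOWN_COPYRIGHTED_FINGERPRINTS = {
    # In practice this would be populated from rightsholder submissions.
    hashlib.sha256(b"registered music video bytes").hexdigest(),
}

def copyright_check(uploaded_bytes: bytes) -> bool:
    # A mechanical comparison: does the upload match a known fingerprint?
    return hashlib.sha256(uploaded_bytes).hexdigest() in KNOWN_COPYRIGHTED_FINGERPRINTS

def defamation_check(post_text: str) -> bool:
    # There is no equivalent lookup table for defamation, threats, or fraud:
    # the judgment depends on truth, context, intent, and local law, which is
    # why this function cannot be written as a simple match.
    raise NotImplementedError("Requires contextual, often legal, human judgment")

print(copyright_check(b"registered music video bytes"))  # True: automatable
print(copyright_check(b"original home video bytes"))     # False
```

A copyright match is a mechanical lookup; a defamation or harassment determination is not, which is one reason the DMCA model transfers poorly to other categories of purportedly illegal content.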
Although I'm focusing here primarily on U.S. government entities, this tendency is also evident among the governments of many other countries, including many that rank as "free" or "partly free" in Freedom House's annual world freedom report. It may reasonably be asserted that the governmental impulse to offload the work of screening for illegal (or legal but disturbing) content is international. The European Union, for example, is actively exploring regulatory schemes that implicitly or explicitly impose content-policing norms on platform companies and that impose swift, large penalties if the platforms fail to comply. American platforms, which operate internationally, must abide by these systems at least with regard to content delivered within EU jurisdictions, and (some European regulators have argued) anywhere else in the world as well.
Added to governments' impulse to impose content restrictions and policing obligations on platforms is governments' hunger for the data that platforms collect. Not every aspect of the data that platforms like Google, Facebook, and Twitter collect on users is publicly known, nor have the algorithms (decision-making processes and criteria implemented by computers) that the platforms use to decide what content may need monitoring, or what content users might prefer, been generally published. The reasons some aspects of the platforms' algorithmic decision-making are kept secret may generally be reduced to two primary arguments. First, the platforms' particular choices about algorithmically selecting and serving content, based on user data, may reasonably be classed as trade secrets, so that if they were made utterly public a competitor could free-ride on the platforms' (former) trade secrets to develop competing products. Second, if platform algorithms are made wholly public, it becomes easier for anyone—ranging from commercial interests to mischievous hackers and state actors—to "game" content so that it is served to more users by the platform algorithms.
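The "gaming" concern in that second argument can be illustrated with a deliberately silly sketch: suppose a platform disclosed its scoring formula (the one below is invented for the example), and notice how easily content can be padded to maximize the score without becoming any more valuable to readers.

```python
# Illustrative sketch only: why publishing a ranking formula invites "gaming."
# The scoring function is invented; it stands in for any disclosed algorithm.

def public_score(post: str) -> float:
    # Imagine the platform disclosed that exclamation points and the word
    # "shocking" boost distribution (a deliberately silly stand-in formula).
    return post.count("!") * 0.5 + post.lower().count("shocking") * 2.0

def game_the_algorithm(message: str) -> str:
    # Once the formula is known, anyone can mechanically pad content
    # to maximize its score.
    return ("SHOCKING! " * 5) + message + ("!" * 20)

print(public_score("A sober policy analysis."))                      # 0.0
print(public_score(game_the_algorithm("A sober policy analysis.")))  # much higher
```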
Governments, recognizing that these protections have made it easier for the platforms to survive and thrive, may wish to modify the protections they have granted, or to impose further content-moderation obligations on platforms as a condition of statutory protection. But even AI-assisted moderation measures will necessarily be either post-hoc (which means that lots of objectionable content will be public before the platform curates it) or pre-hoc (which means that platforms will become gatekeepers of public participation, shoehorning users into a traditional publishing model or into an online-forum model as constrained by top editors as the early version of the joint Sears-IBM service Prodigy was).
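A minimal sketch of the structural difference between the two approaches, using an invented classifier and review queue as stand-ins, looks like this:

```python
# Illustrative sketch only: post-hoc versus pre-hoc moderation.
# The classifier and review queue are hypothetical stand-ins.

from collections import deque

review_queue: deque[str] = deque()

def looks_objectionable(post: str) -> bool:
    # Hypothetical automated classifier; real ones are imperfect either way.
    return "objectionable" in post.lower()

def publish_post_hoc(post: str) -> None:
    # Post-hoc: publish immediately, review later.
    # Objectionable content is visible until a moderator gets to it.
    print(f"PUBLIC: {post}")
    if looks_objectionable(post):
        review_queue.append(post)

def publish_pre_hoc(post: str) -> None:
    # Pre-hoc: nothing appears until it clears review.
    # The platform becomes a gatekeeper, much like a traditional publisher.
    if looks_objectionable(post):
        review_queue.append(post)  # held back pending human review
    else:
        print(f"PUBLIC: {post}")

publish_post_hoc("An objectionable rant")  # appears, then queued for review
publish_pre_hoc("An objectionable rant")   # held; never shown unless approved
```

Either way the trade-off is structural: post-hoc review tolerates a window of public harm, while pre-hoc review turns the platform into an editor standing between users and publication.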
(c) Dyad 3: People (and traditional press) versus government
New, frequently market-dominant internet platforms for speakers create new government temptations and capabilities to (i) surveil online speech, (ii) leverage platforms to suppress dissident or unpopular speech or deplatform speakers, and/or (iii) employ or compel platforms to manipulate public opinion (or to regulate or suppress manipulation).
It's trivially demonstrable that a great percentage of complaints about censorship in open societies is grounded in individual speakers' or traditional publishers' complaints that government is acting to suppress certain kinds of speech. Frequently the speech in question is political speech, but sometimes it is speech of other kinds (e.g., allegedly defamatory, threatening, fraudulent, or obscene). This dyad is, for the most part, the primary subject matter of traditional First Amendment law. It is also a primary focus of international free-expression law, where freedom of expression is understood to be guaranteed by national or international human-rights instruments (notably Article 19 of the International Covenant on Civil and Political Rights).
But this dyad has been distorted in the twenty-first century, in which troubling political speech, and other kinds of troubling public speech, are more often than not mediated by internet platforms. It is easier on some platforms, though by no means all, for speakers to be anonymous or pseudonymous. Anonymous or pseudonymous speech is not universally regarded by governments as a boon to public discourse, and frequently governments will want to track or even prosecute certain kinds of speakers. Tracking such speakers was difficult (although not necessarily impossible) in the pre-internet era of unsigned postcards and ubiquitous public telephones. But internet platforms have created new opportunities to discover, track, and suppress speech as a result of the platforms' collection of user data for their own purposes.
Every successful internet platform that allows users to express themselves has been a target of government demands for disclosure of information about users. In addition, internet platforms are increasingly the target of government efforts to mandate assistance (including the building of more surveillance-supportive technologies) in criminal-law or national-security investigations. In most ways this is analogous to the United States' 1994 passage of CALEA, the Communications Assistance for Law Enforcement Act, which obligated telephone companies (that is, providers of voice telephony) to build technologies that facilitate wiretapping. But a major difference is that internet platforms more often than not capture far more information about users than telephone companies traditionally did. (This generalization to some extent oversimplifies the difference, given the frequent convergence between the suites of services that internet platforms and telephone companies—or cable companies—now offer their users.)
Governmental monitoring may suppress dissenting (or otherwise troubling) speech, but governments (and other political actors, such as political parties) may also use internet platforms to create or potentiate certain kinds of political speech in opposition to the interests of users. Siva Vaidhyanathan documents particular uses of Facebook advertising in the 2016 election that were aimed at achieving political results, including not just turning out votes for an approved candidate but also dissuading some voters from voting at all.
As Vaidhyanathan writes: "Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue." Plus this: "Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, 'they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,' said Professor David Carroll of the Parsons School of Design."
There are legitimate differences of opinion regarding the proper regime for regulating political advertising, as well as regarding the extent to which regulation of political advertising can be implemented consistent with existing First Amendment precedent. It should be noted, however, that advertising of the sort Vaidhyanathan discusses raises issues not only of campaign spending (although in 2016, at least, spending on targeted Facebook political advertising of the "Custom Audiences" variety seems to have been comparatively small) but also of transparency and accountability. Advertising that is micro-targeted and ephemeral is arguably not accountable to the degree that an open society should require. There will be temptations for government actors to use mechanisms like "Custom Audiences" to suppress opponents' speech—and there will also be temptations for government to limit or even abolish such micro-targeted instances of political speech.
What is most relevant here is that government may act on these temptations, whether by employing features like "Custom Audiences" or by suppressing other political actors' use of them, in non-transparent or less formal ways (e.g., through the "jawboning" that Jack Balkin describes in his "New School Speech Regulation" paper). Platforms—especially market-dominant platforms that, as a function of their success and dominance, may be particular targets on speech issues—may feel pressured to remove dissident speech in response to government "jawboning" or other threats of regulation. And, given the limitations of both automated and human-based filtering, a platform that feels compelled to respond to such governmental pressure is almost certain to generate results that are inconsistent and that give rise to further dissatisfaction, complaints, and suspicions on the part of users—not just the users subject to censorship or deplatforming, but also users who witness such actions and disapprove of them.
Considered separately or together, the traditional "dyadic" models of how to regulate free speech each tend to focus on two vertices of the free-speech triangle while overlooking the third, whose stakeholders may intervene in, distort, exploit, or be exploited by the outcomes of conflicts between the other two stakeholder groups. What this suggests is that no "dyadic" conception of the free-speech ecosystem is sufficiently complex and stable to protect freedom of expression or, for that matter, citizens' autonomy interests in privacy and self-determination. This leaves us with the question of whether we can steer our law and policy in a direction that takes into account today's "triangular" free-speech ecosystem in ways that provide stable, durable, expansive protections for freedom of speech and the other valid interests of all three stakeholder groups. That question is the subject of Part 3 of this series.
Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.
Filed Under: 1st amendment, free speech, jack balkin, social media