Mike Godwin’s Techdirt Profile

Posted on Techdirt - 29 July 2020 @ 1:49pm

Former Rep. Chris Cox Used His Testimony At Tuesday's Senate Hearing On The Internet's Foundational Law To Do Some Myth-Busting

from the present-at-the-creation dept

Whenever internet-law experts see a new Congressional hearing scheduled whose purpose is to explore whether Section 230—a federal statute that’s widely regarded as a foundational law of the internet—needs to be amended or repealed, we shudder. That’s because we know from experience that even some of the most thoughtful and conscientious lawmakers have internalized some broken notions about Section 230 and have the idea that this statute is responsible for everything that bothers us about today’s internet.

That’s why Tuesday’s Senate hearing about Section 230 was, in its own way, much more calming than earlier hearings on the law have been. Each of the four witnesses had substantive knowledge to share, and even if some witnesses were wrong (at least in my view) on this or that fine point, none of them was grandstanding or (as has often been the case in the past) unwittingly or intentionally deceptive about what might be wrong with 230. Each more or less acknowledged that the law really does contain what cyberlaw professor Jeff Kosseff, in his book of the same name, has aptly characterized as “The Twenty-Six Words That Created the Internet.” Even Professor Olivier Sylvain of Fordham Law School, who believes Section 230’s protections to be “ripe for narrowing,” focused on the courts’ role in interpreting the statute rather than Congress’s role in possibly amending it. Unlike at some earlier hearings, none of the witnesses called for repeal.

Kosseff, a faculty member at the U.S. Naval Academy in Annapolis, was himself one of the witnesses on Tuesday’s panel, which was convened by the Senate Commerce Committee’s Subcommittee on Communications, Technology, Innovation, and the Internet. But even though the hearing’s title was inspired by Kosseff’s book, it was former Representative Chris Cox, now a partner at the Morgan, Lewis & Bockius law firm and a board member at the tech lobbying group NetChoice, who was the star. In the 1990s, Representative Cox was an author and co-sponsor (with then-Representative, now Senator, Ron Wyden) of the bill that became Section 230. Having him as a witness on Tuesday’s panel was a bit like having James Madison show up to testify about what he was thinking when he wrote the Bill of Rights.

Cox’s testimony spotlighted the ways in which the legal immunities built into Section 230 in 1995—immunities that generally shield internet companies from liability for content created by users and subscribers—had given rise to the transformational effect those companies have had in the world of 2020. Just as important, Cox pointed out in his written testimony that the law does not shield service providers who created illegal or tortious content—“in whole or in part”—from legal liability:

Section 230 was written, therefore, with a clear fact-based test:

  • Did the person create the content? If so, that person is liable for any illegality.
  • Did someone else create the content? Then that someone else is liable.
  • Did the person do anything to develop the content created by another, even if only in part? If so, the person is liable along with the content creator.

Cox explained that this approach was designed to accommodate the realities of being an online service provider while not allowing service providers that are clearly responsible for a crime or civil wrong to be immunized by the statute:

“Rep. Wyden and I knew that, in light of the volume of content that even in 1995 was crossing most internet platforms, it would be unreasonable for the law to presume that the platform will screen all material. We also well understood the corollary of this principle: if in a specific case a platform actually did review material and edit it, then there would be no basis for assuming otherwise. As a result, the plain language of Section 230 deprives such a platform of immunity.”

Cox used this portion of his written testimony to debunk what he called certain “myths” about Section 230—of which the first and most obvious myth is that Section 230 immunizes “websites that knowingly engage in, solicit, or support illegal activity.” Wrote Cox: “It bears repeating that Section 230 provides no protection for any website, user, or other person or business involved even in part in the creation or development of content that is tortious or criminal.”

Another of these myths had to do with the idea that 230’s purpose was to set up separate legal rules for internet services that don’t apply in the outside world. Cox insists, however, that Section 230 simply extended to the online world the protections brick-and-mortar enterprises already had, in terms of not being liable for content they didn’t fully or partially create. (For example, if I slander someone in a restaurant, the restaurant’s proprietor shouldn’t be held liable for my using his premises to defame someone. I look forward to testing this principle when we’re all going out to restaurants again.)

Other creation myths included the idea that Section 230 was designed just to protect “an infant industry” (and so is no longer necessary now that the industry is old enough to vote), or the idea that it was a favor to the tech industry (Cox says the tech companies in the 1990s mostly didn’t know enough to lobby for the provision—or else didn’t even exist then), or the idea that it was part of a “grand bargain” to help then-Senator James Exon pass his anti-porn legislation, then mostly known as the Communications Decency Act. With regard to that last theory, Cox explains that his and Wyden’s draft was “deliberately crafted as a rebuke” to Senator Exon’s approach to online porn. If service providers were going to make the world’s information available to users, Cox and Wyden reasoned, there was no way that any of the services could effectively be responsible for the “indecent” content in libraries and elsewhere that might show up on users’ screens.

The real reason Section 230 was included with Senator Exon’s Communications Decency Act language had to do with the politics of the conference committee that had to work out differences between the House and Senate versions of the Telecommunications Act of 1996. The Cox-Wyden provision was in the House version, but an overwhelming majority of senators had voted for the CDA in the Senate version. Harmonizing the two opposing provisions had some interesting consequences, as Cox’s testimony points out:

When the House and Senate met in conference on the Telecommunications Act, the House conferees sought to include Cox-Wyden and strike Exon. But political realities as well as policy details had to be dealt with. There was the sticky problem of 84 senators having already voted in favor of the Exon amendment. Once on record with a vote one way—particularly a highly visible vote on the politically charged issue of pornography—it would be very difficult for a politician to explain walking it back. The Senate negotiators, anxious to protect their colleagues from being accused of taking both sides of the question, stood firm. They were willing to accept Cox-Wyden, but Exon would have to be included, too. The House negotiators, all politicians themselves, understood. This was a Senate-only issue, which could be easily resolved by including both amendments in the final product. It was logrolling at its best.

“Perhaps part of the enduring confusion about the relationship of Section 230 to Senator Exon’s legislation has arisen from the fact that when legislative staff prepared the House-Senate conference report on the final Telecommunications Act, they grouped both Exon’s Communications Decency Act and the Internet Freedom and Family Empowerment Act into the same legislative title. So the Cox-Wyden amendment became Section 230 of the Communications Decency Act—the very piece of legislation it was designed to counter. Ironically, now that the original CDA has been invalidated, it is Ron’s and my legislative handiwork that forever bears Senator Exon’s label.”

Cox’s explanation should put to rest forever the myth that the Supreme Court’s decision in Reno v. ACLU (1997), when it struck down all other provisions of the Communications Decency Act as unconstitutional, left Section 230 behind as an incomplete fragment, rendered meaningless or dysfunctional on its own. As Cox’s written testimony makes clear, Section 230 was originally crafted as a standalone statute whose purpose was to negate the effect of Stratton Oakmont v. Prodigy (1995)—a case whose judge drastically misread both prior caselaw and the facts of the case he decided—and to restore something like the state of online-services law as it was understood after a federal court’s influential 1991 decision in Cubby v. CompuServe.

One of the unfortunate aspects of Tuesday’s hearing is that Cox’s lengthy first-person account and massive debunking of common myths about Section 230 weren’t heard by most of the Senators or by the viewers who only watched the hearing online. In “person” (Cox, like the other witnesses, was beamed in via a teleconferencing system that I presume was Zoom), the former congressman departed from his written remarks to remind his audience that, among other things, Section 230 gave us Wikipedia, a free resource hosted by the Wikimedia Foundation that serves most of us in the Western developed countries every day. This is something I wish more legislators would remember—that Wikipedia depends on Section 230 to exist in its current form and usefulness. Full disclosure: I spent a few years as general counsel and later outside counsel doing work for the Wikimedia Foundation. And, just like any other lawyer who has worked to protect a highly valued online service, I can testify that we depended on Section 230 a lot.

Still another unfortunate aspect is that Kosseff’s and Sylvain’s contributions, as well as those of the Internet Association’s deputy general counsel, Elizabeth Banker, were somewhat eclipsed both by Cox’s written testimony and by his live testimony as one of the two fathers of “the twenty-six words that created the internet.” But these tradeoffs were a small price to pay in order to spend so much of Tuesday morning getting myths busted and truths told. Even as someone who’s been dealing with Section 230 for almost as long as Cox has, I can say truthfully that I learned a lot.


Posted on Techdirt Greenhouse - 27 May 2020 @ 1:00pm

In Search Of A Grand Unified Theory Of Free Expression And Privacy

from the time-for-a-gut-check dept

Every time I ask anyone associated with Facebook’s new Oversight Board whether the nominally independent, separately endowed tribunal is going to address misuse of private information, I get the same answer—that’s not the Board’s job. This means that the Oversight Board, in addition to having such an on-the-nose proper name, falls short in a more important way—its architects imagined that content issues can be tackled substantively without addressing privacy issues. Yet surely the scandals that have plagued Facebook and some other tech companies in recent years have shown us that private-information issues and harmful-content problems have become intimately connected.

We can’t turn a blind eye to this connection anymore. We need the companies, and the governments of the world, and the communities of users, and the technologists, and the advocates, to unite behind a framework that emphasizes the deeper-than-ever connection between privacy problems and free-speech problems.

What we need most now, as we grapple more fiercely with the public-policy questions arising from digital tools and internet platforms, is a unified field theory—or, more properly, a “Grand Unified Theory” (a.k.a. “GUT”)—of free expression and privacy.

But the road to that theory is going to be hard. From the beginning three decades ago, when digital civil liberties emerged as a distinct set of issues that needed public-policy attention, the relationship between freedom of expression and personal privacy in the digital world has been a bit strained. Even the name of the first big conference to bring together all the policy people, technologists, government officials, hackers, and computer cops reflected the tension. The first Computers, Freedom and Privacy conference, held in Burlingame, California, in 1991, made sure that attendees knew that “Privacy” was not just a kind of “Freedom” but its own thing that deserved its own special attention.

The tensions emerged early on. It seemed self-evident to most of us back then that freedom of expression (along with freedom of assembly and freedom of inquiry) had to have some limits—including limits on what any of us could do with private information about other people. But while it’s conceptually easy to define in fairly clear terms what counts as “freedom of expression,” the consensus about what counts as a privacy interest is murkier. Because I started out as a free-speech guy, I liked the law-school-endorsed framework of “privacy torts,” which carved out some fairly narrow privacy exceptions to the broad guarantees of expressive freedom. That “privacy torts” setup meant that, at least when we talked about “invasion of privacy,” I could say what counted as such an invasion and what didn’t. Privacy in the American system was narrow and easy to grasp.

But this wasn’t the universal view in the 1990s, and it’s certainly not the universal view in 2020. In the developed world, including the developed democracies of the European Union, the balance between privacy and free expression has been struck in a different way. The presumptions in the EU favor greater protection of personal information (and related interests like reputation) and somewhat less protection of freedom of expression. Sure, the international human-rights source texts like the Universal Declaration of Human Rights (in Article 19) may protect “freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media regardless of frontiers.” But ranked above those informational rights (in both the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights) is the protection of private information, correspondence, “honor,” and reputation. This different balance is reflected in European rules like the General Data Protection Regulation.

The emerging international balance, driven by the GDPR, has created new tensions between freedom of expression and what we loosely call “privacy.” (I use quotation marks because the GDPR regulates not just the use of private information but also the use of “personal” information that may not be private—like old newspaper reports of government actions to recover social-security debts. This was the issue in the leading “right to be forgotten” case prior to the GDPR.) Standing by itself, this emerging international consensus doesn’t provide clear rules for resolving those tensions.

Don’t get me wrong: I think the idea of using international human-rights instruments as guidance for content approaches on social-media platforms has its virtues. The advantage is that it gives the companies as strong a defense as one might wish, in international forums and tribunals, for allowing some (presumptively protected) speech to stay up in the face of criticism and for removing some (arguably illegal) speech. The disadvantages are harder to grapple with. Countries will differ on what kind of speech is protected, but the internet does not quite honor borders the way some governments would like. (Thailand's lèse-majesté laws are a good example.) In addition, some social-media platforms may want to create environments that are more civil, or child-friendly, or whatever, which will entail more content-moderation choices and policies than human-rights frameworks would normally allow. Do we want to say that Facebook or Google *can't* do this? That Twitter should simply be forbidden to tag a presidential tweet as “unsubstantiated”? Some governments and other stakeholders would disapprove.

If a human-rights framework doesn’t resolve the free-speech/privacy tensions, what could? Ultimately, I believe that the best remedial frameworks will involve multistakeholderism, but I think they also need to begin with a shared (consensus) ethical framework. I present the argument in condensed form here: "It’s Time to Reframe Our Relationship With Facebook.” (I also published a book last year that presents this argument in greater depth.)

Can a code of ethics be a GUT of free speech and privacy? I don’t think it can, but I do think it can be the seed of one. But it has to be bigger than a single company’s initiative—which more or less is the best we can reasonably hope Facebook’s Oversight Board (assuming it sets out ethical principles as a product of its work on content cases) will ever be. I try not to be cynical about Facebook, which has plenty of people working on these issues who genuinely mean well, and who are willing to forgo short-term profits to put better rules in place. While it's true at some sufficiently high level that the companies privilege profits over the public interest, the fact is that once a company is market-dominant (as Facebook is), it may well trade off short-term profits as part of a grand bargain with governments and regulators. Facebook is rich enough to absorb the costs of compliance with whatever regimes the democratic governments come up with. (A more cynical read of Zuckerberg's public writings in the aftermath of the company’s various scandals is that he wants the governments to get the rules in place, and then FB will comply, as it can afford to do better than most other companies, and then FB's compliance will be a defense against subsequent criticism.)

But the main reason I think reform has to come in part at the industry level rather than at the company level is that company-level reforms, even if well-intended, tend to instantiate a public-policy version of Wittgenstein's "private language" problem. Put simply, if the ethical rules are internal to a company, the company can always change them. If they're external to a company, then there's a shared ethical framework we can use to criticize a company that transgresses the standards.

But we can’t stop at the industry level either—we need governments and users and other stakeholders to be able to step in and say to the tech industries that, hey, your industry-wide standards are still insufficient. You know that industry standards are more likely to be adequate and comprehensive when they’re buttressed both by public approval and by law. That’s what happened with medical ethics and legal ethics—the frameworks were crafted by the professions but then recognized as codes that deserve to be integrated into our legal system. There’s an international consensus that doctors have duties to patients (“First, do no harm”) and that lawyers and other professionals have “fiduciary duties” to their clients. I outline how fiduciary approaches might address Big Tech’s consumer-trust problems in a series of Techdirt articles that begins here.

The “fiduciary” code-of-ethics approach to free-speech and privacy problems for Big Tech is the only way I see of harmonizing digital privacy and free-speech interests in a way that will leave most stakeholders satisfied (as most stakeholders are now satisfied with medical-ethics frameworks and with lawyers’ obligations to protect and serve their clients). Because lawyers and doctors are generally obligated to tell their clients the truth (or, if for some reason they can’t, to end the relationship and refer the clients to other practitioners), and because they’re also obligated to “do no harm” (not, for example, to use personal information in a manipulative way or to violate clients’ privacy or autonomy), these professions already have a Grand Unified Theory that protects both speech and privacy in the context of clients’ relationships with practitioners.

Big Tech has a better shot at resolving the contradictory demands on its speech and privacy practices if it aspires to do the same, and if it embraces an industry-wide code of ethics that is acceptable to users (who deserve client protections even if they’re not paying for the services in question). Ultimately, if the ethics code is backed by legislators and written into the law, you have something much closer to a Grand Unified Theory that harmonizes privacy, autonomy, and freedom of expression.

I’m a big booster of this GUT, and I’ve been making versions of this argument for some time now. (Please don’t call it “Godwin-Unified Theory”—having one “law” named after me is enough.) But here in 2020 we need to do more than argue about this approach—we need to convene and begin to hammer out a consensus about a systematic, harmonized approach that protects human needs for freedom of expression, for privacy, and for autonomy, and that’s reasonably free of psychological-warfare tactics of informational manipulation. The issue is not just false content, and it’s not just personal information—open societies have to incorporate a fairly high degree of tolerance for unintentionally false expression and for non-malicious or non-manipulative disclosure or use of personal information. But an open society also needs to promote an ecosystem—a public sphere of discourse—in which neither the manipulative crafting of deceptive and destructive content nor the manipulative targeting of it based on our personal data is the norm. That’s an ecosystem that will require commitment from all stakeholders to build—a GUT based not on gut instincts but on critical rationalism, colloquy, and consensus.


Posted on Techdirt - 7 March 2019 @ 1:37pm

A Book Review Of Code And Other Laws Of Cyberspace

from the more-timely-than-you-might-think dept

Twenty years ago, Larry Lessig published the original version of his book Code and Other Laws of Cyberspace. A few years later, he put out a substantially updated version called Code 2.0. Both versions are classics and important pieces of the history of the internet -- and are especially interesting to look at now that issues of how much "code" is substituting for "law" have become central to so many debates. When the original book was published, in 1999, Mike Godwin wrote a review for a long-defunct journal called E-Commerce Law Weekly. Given the importance of these issues today, we're republishing a moderately updated version of Godwin's original 1999 review. It's interesting to view this review through the lens of the past 20 years of history that we have now lived through.

Imagine that you could somehow assemble the pioneers of the Internet and the first political theorists of cyberspace in a room and poll them as to what beliefs they have in common. Although there would be lots of heated discussion and no unanimity on any single belief, you might find a majority could get behind something like the following four premises:

  1. The Internet does not lend itself to regulation by governments.
  2. The proper way to guarantee liberty is to limit the role of government and to prevent government from acting foolishly with regard to the Internet.
  3. The structure of the Internet—the "architecture" of cyberspace, if you will—is politically neutral and cannot easily be manipulated by government or special interests.
  4. The expansion of e-commerce and the movement of much of our public discourse to the online world will increase our freedom both as citizens and as consumers.

But what if each of these premises is at best incomplete and at worst false or misleading? (Leave aside the likelihood that they're not entirely consistent with one another.) What if the architecture of the Net can be changed by government and by the dynamism of e-commerce? What if the very developments that enhance electronic commerce also undermine political freedom and privacy? The result might be that the engineers and activists who are concerned about preserving democratic values in cyberspace are focusing their efforts in the wrong direction. By viewing governmental power as the primary threat to liberty, autonomy, and dignity, they'd blind themselves to the real threats—threats that it may require government to block or remedy.

It is precisely this situation in which Harvard law professor Lawrence Lessig believes we find ourselves. In his new book Code and Other Laws of Cyberspace (Basic Books, 1999), Lessig explores at length his thesis that the existing accounts of the political and legal framework of cyberspace are incomplete and that their very incompleteness may prevent us from preserving the aspects of the Internet we value most. Code is a direct assault on the libertarian perspective that informs much Internet policy debate these days. What's more, Lessig knows that he's swimming against the tide here, but he nevertheless takes on in Code a project that, although focused on cyberspace, amounts to nothing less than the relegitimization of the liberal (in the American sense) philosophy of government.

It is a measure of Lessig's thoroughness and commitment to this project that he mostly succeeds in raising new questions about the proper role of government with regard to the Net in an era in which, with the exception of a few carveouts like Internet gambling and cybersquatting, Congress and the White House have largely thrown up their hands when it comes to Internet policy. While this do-nothingism is arguably an improvement over the kind of panicky, ill-informed interventionism of 1996's Communications Decency Act (which Lessig terms "[a] law of extraordinary stupidity" that "practically impaled itself on the First Amendment"), it also falls far short, he says, of preserving fundamental civil values in a landscape reshaped by technological change.

Architecture Is Not Static

To follow Lessig's reasoning in Code, you need to follow his terminology. This is not always easy to do, since the language by which he describes the Internet as it is today and as it might someday become is deeply metaphorical. Perhaps the least problematic of his terms is "architecture," which Lessig borrows from Mitchell Kapor's Internet aphorism that "architecture is politics." Although his use of the term is a little slippery, Lessig mostly means for us to understand the term "architecture" to refer to both (a) the underlying software and protocols on which the Internet is based and (b) the kinds of applications that may run "on top of that Internet software infrastructure." And while the first kind of architecture is not by itself easily regulable, Lessig says, the second kind might make it so—for example, by incorporating the various monitoring and identification functions that already exist on proprietary systems and corporate intranets.

More difficult to get a handle on is his use of the word "code," which seems to expand and contract from chapter to chapter. At some bedrock level, Lessig means "code" to signify the software and hardware that make up the Internet environment—akin to the sense of "code" that programmers use. But he is also fond of metaphoric uses of "code" that muddy the waters. "Code is law," Lessig writes at several points, by which we may take him to mean that the Internet's software constrains and shapes our behavior with as much force as law does. And of course the book's title equates code and law.

Elsewhere, however, he writes that code is something qualitatively different from law in that it does not derive from legislative or juridical action or community norms, yet may affect us more than laws or norms do, while providing us less opportunity for amendment or democratic feedback. It does not help matters when he refers to things like bicycle locks as "real-world code." But if you can suspend your lexical disbelief for a while, the thrust of Lessig's argument survives any superficial confusions wrought by his terminology.

That argument depends heavily on the first point Lessig makes about Internet architecture, which is simply that it's malleable—shapeable by human beings who may wish to implement an agenda. The initial architecture of the Internet, he says correctly, emphasized openness and flexibility but provided little support for identifying or authenticating actual individuals or monitoring them or gathering data about them. "On the Internet it is both easy to hide that you are a dog and hard to prove that you are not," Lessig writes. But this is a version of the Internet, he says, that is already being reshaped by e-commerce, which has reasons for wanting to identify buyers, share financial data about them, and authenticate the participants in transactions. At the center of e-commerce-wrought changes is the technology of encryption, which, while it can render communications and transactions unreadable to eavesdroppers in transit, also enables an architecture of identification (through, e.g., encryption-based certification of identity and digital signatures).

The key to the creation of such an architecture, Lessig writes, is not that a government will require people to hold and use certified IDs. Instead, he writes, "The key is incentives: systems that build the incentives for individuals voluntarily to hold IDs." Lessig adds, "When architectures accommodate users who come with an ID installed and make life difficult for users who refuse to bear an ID, certification will spread quickly."

But even if you don't believe that e-commerce alone will establish an architecture of identification, he writes, there are reasons to believe that government will want to help such an architecture along. After all, a technology that enables e-commerce merchants to identify you and authorize your transactions may also have an important secondary usefulness to a government that wants to know where you've been and what you've been up to on the Internet.

And if the government wants to change the technological architecture of the Internet, there is no reason to believe it would not succeed, at least to some extent. After all, Lessig says, the government is already involved in mandating changes in existing architectures in order to effectuate policy. Among the examples of this kind of architectural intervention, he says, are (a) the Communications Assistance for Law Enforcement Act of 1994, in which Congress compelled telephone companies to make their infrastructure more conducive to successful wiretaps, (b) Congress's requiring the manufacturers of digital recording devices to incorporate technologies that limit the extent to which perfect copies can be made, and (c) the requirement in the Telecommunications Act of 1996 that the television industry design and manufacture a V-chip to facilitate individuals' ability to automatically block certain kinds of televised content.

With an identification architecture in place, Lessig argues, what previously might seem to be an intractable Internet-regulation problem, like the prohibition of Internet gambling, might become quite manageable.

The Government and Code

An account of social activity on the Internet that deals solely with the legal framework is inadequate, Lessig argues. In Lessig's view, the actual "regulators" of social behavior come from four sources, each of which has its own dynamic. Those sources of social constraints are the market, the law, social norms, and architecture (here "architecture" means "the constructed environment in which human beings conduct their activities"). "But these separate constraints obviously do not simply exist as givens in a social life," Lessig writes. "They are neither found in nature nor fixed by God," he writes, adding that each constraint "can be changed, although the mechanism of changing each is complex." The legal system, he says, "can have a significant role in this mechanics."

So can the open-source movement, which Lessig refers to as "open code." The problem with "architectural" constraints, and the thing that distinguishes them from any other kind, is that they do not depend on human awareness or judgment to function. You may choose whether or not to obey a law or a social norm, for example, and you may choose whether or not to buy or sell something in the market, but (to use the metaphor) you cannot enter a building through a door if there is no door there, and you cannot open a window if there is no window. Open code—software that is part of a code "commons," that is not owned by any individual or business, and that can be inspected and modified—can provide "a check on state power," Lessig writes, insofar as it makes any government-mandated component of the architecture of the Net both visible to, and (potentially) alterable by, citizens. Open code, which still makes up a large part of the Internet infrastructure, is thus a way of making architecture accountable and subject to democratic feedback, he argues. "I certainly believe that government must be constrained, and I endorse the constraints that open code imposes, but it is not my objective to disable government generally," Lessig writes. But, he adds, "some values can be achieved only if government intervenes."

A Jurisprudence of Cyberspace?

One way that government intervenes, of course, is through the court system. And as Lessig notes, it may be the courts that are first called upon to interpret and preserve our social values when technology shifts the effective balance of rights for individuals. A court faced with such a shift often must engage in "translation" of longstanding individual rights into a new context, he says.

Take wiretapping, for example. Once upon a time, it was not so easy for law-enforcement agents to get access to private conversations. But once telephones had become commonplace and, as Lessig puts it, "life had just begun to move onto the wires," the government began to tap phones in order to gather evidence in criminal investigations. Does wiretapping raise Fourth Amendment concerns? The Supreme Court first answered this question in Olmstead v. United States (1928)—the answer for the majority was that wiretapping, at least when the tap was placed somewhere other than on a tappee's property, did not raise Fourth Amendment issues, since the precise language of the Fourth Amendment does not address the non-trespassory overhearing of conversations. That is one mode of translation, Lessig writes—the court preserved the precise language of the Fourth Amendment in a way that contracted the zone of privacy the Amendment protected.

Another, and arguably preferable, mode of translation, Lessig says, would be to follow Justice Louis Brandeis's approach in his Olmstead dissent—one that preserves the scope of the privacy zone while departing from strict adherence to the literal language of the Amendment. Brandeis's dissent, arguing that the capture of private conversations does implicate the Fourth Amendment, was adopted by the Supreme Court forty years after Olmstead.

But what if technology raises a question for a court for which it is not clear which interpretative choice comes closer to preserving or "translating" the values inherent in the Bill of Rights? Borrowing from contract law, Lessig calls such a circumstance a "latent ambiguity." He further suggests—this is perhaps the most unfashionable of his arguments—that, instead of simply refusing to act and referring the policy question to the legislature, a court might simply attempt to make the best choice at preserving constitutional values, in the hope that its choice will at minimum "spur a conversation about these fundamental values...to focus a debate that may ultimately be resolved elsewhere."

Internet Alters Copyright and Privacy

All this begins to seem far afield from the law of cyberspace, but Lessig's larger point is that the changes wrought by the Internet and related technologies are likely to raise significant "latent ambiguity" problems. He focuses on three areas in which technologies raise important questions about values but for which a passive or overliteral "translation" approach would not be sufficient. Those areas are intellectual property, privacy, and freedom of speech. In each case, the problem Lessig sees is one that is based on "private substitutes for public law"—private, non-governmental decision making that undercuts the values the Constitution and Bill of Rights were meant to preserve.

With intellectual property, and with copyright in particular, technological changes raise new problems that the nuanced legal balances already built into the law do not address. Lessig challenges the long-standing assertion, in Internet circles at least, that the very edifice of copyright law is likely to crumble in the era of the Internet, which enables millions of perfect copies of a creative work to be duplicated and disseminated for free, regardless of whether the copyright holder has granted anyone a license. In response to that perceived threat, Lessig observes, the copyright holders have moved to force changes in technology and changes in the law.

As a result, technologically implemented copyright-protection and copyright-management schemes are coming online, and the government has already taken steps to prohibit the circumvention of such schemes. This has created a landscape in which the traditional exercise of one's rights to "fair use" of another's work under the Copyright Act may become meaningless. The fact that one technically has a right to engage in fair use is of no help when one cannot engage in any unauthorized copying. Complicating this development, Lessig believes, is the oncoming implementation of an ID infrastructure on the Internet, which may make it impossible for individuals to engage in anonymous reading.

This bears some explaining. Consider that if you buy a book in a bookstore with cash, or if you read it in the library, nobody knows what you're buying and reading. By contrast, a code-based licensing scheme in which you identify yourself online in order to obtain or view a copy of a copyrighted work may undercut your anonymity, especially if there's an Internet I.D. infrastructure already in place. The technology changes are "private" ones—they do not involve anything we'd call "state action" and thus do not raise what we normally would call a constitutional problem—but they affect public values just as deeply as traditional constitutional problems do.

A similar argument can be made about how the Internet alters our privacy rights and expectations. Because the Internet both makes our backgrounds more "searchable" and our current behavior more monitorable, Lessig reasons, the privacy protections in our Bill of Rights may become meaningless. Once again, when the searching and monitoring is done by someone other than the government, it means that the "state action" trigger for invoking the Bill of Rights is wholly absent.

What's more, such searching and monitoring, whether done by the government or otherwise, may be invisible to the person being investigated. You will have lost your right to any meaningful privacy and you will not even know it is gone until it is too late. Lessig's analysis of the problem here is convincing, even though his proposed solution, a "property regime" for personal data that would replace today's "liability regime," is deeply problematic. This is partly because it would transmute invasions of privacy into property crimes—aren't the jails full enough without adding gossips to the inmates—and partly because the distinction he draws between property regimes and liability regimes as to which benefits the individual more is (in my view) illusory in practical terms.

Perhaps Lessig's most controversial position with regard to the threat of private action to public values is the one he has explored previously in a number of articles for law reviews and popular publications—the argument that some version of the Communications Decency Act—perhaps one that required minors to identify themselves as such so as to be blocked from certain kinds of content—is less dangerous to freedom of speech than is the private use of technologies that filter content. It is important to understand that Lessig is not actually calling for a new CDA here, although that nuance might escape some legislators.

Lessig interprets such a version of the CDA, and the architecture that might be created by it, as a kind of "zoning," which he sees as preferable to private, non-legislated filtering because, he says, zoning "builds into itself a system for its limitation. A site cannot block someone from the site without that individual knowing it." By contrast, he says, a filtering regime such as the (now widely regarded as moribund) Platform for Internet Content Selection enables all sorts of censorship schemes, not just nominally child-protecting ones. PICS, because it can scale to function at the server or even network level, can be used by a government to block, say, troubling political content. And because PICS support can be integrated into the architecture of the Internet, it could be used to create compelling private incentives for people to label their Internet content. Worse, he says, such blocking would be invisible to individuals.

Lessig's Arguments Hard to Harmonize

There are many problems with Lessig's analysis here, and while it would take more space than I have to discuss them in depth, I can at least indicate what some of the problems are. First of all, it's not at all clear that one could not create a "zoning" solution that kept the zoning-excluded users from knowing—directly at least—that they have been excluded. Second, if a zoning scheme works to exclude users identified as kids, is there any reason to think it would not work equally well in excluding users identified as Iranians or Japanese or Americans? Don't forget that incipient I.D. architecture, after all.

Third, a PICS-like scheme, implemented at the server level or higher, is actually less threatening to freedom of speech than key-word or other content filtering at the server level or higher. PICS, in order to function, requires that some high percentage of the content producers in the world buy into the self-labeling scheme before a repressive government could use it to block its citizens from disapproved content. Brute-force key-word filtering, by contrast, does not require anyone else's cooperation—a repressive government could choose its own PICS-independent criteria and implement them at the server level or elsewhere.

Fourth, there's nothing inherent in the architecture of a PICS-style scheme—in the unlikely event that such a scheme were implemented—or any other server-level filtering scheme that requires that users not be notified that blocking took place. In short, you could design that architecture so that its operation is visible.

Lessig is right to oppose the implementation of anything that might be called an architecture of filtering. But one wonders why he is so intent on saying that zoning is better than filtering when both models can operate as tools of repression. Lessig answers that question by letting us know what his real worry is, which is that individuals with filtering tools will block out those who need to be heard. Says Lessig: "[F]rom the standpoint of society, it would be terrible if citizens could simply tune out problems that were not theirs.... We must confront the problems of others and think about problems that affect our society. This exposure makes us better citizens." His concern is that we will use filtering tools to cut ourselves off from that salutary exposure.

Leaving aside the question of whether his value here is one we should embrace—it is hard to harmonize it with what Brandeis in his Olmstead dissent termed "the right to be let alone"—it seems worth noting that the Internet does not really stand as evidence for Lessig's assumption that people will use their new tools to avoid confrontation with those holding different opinions. Indeed, much of the evidence seems to point the other way, as anyone who has ever viewed a long-running Internet flame war or inspected dueling Web sites can attest. Nothing forces combatants on the Internet to stay engaged, but they do anyway. The fact is, we like to argue with each other—as Deborah Tannen has pointed out, we have embraced an "argument culture." Whether that culture is healthy is another question, of course.

But even if one disagrees with Lessig's analysis of certain particular issues, this does not detract from his main argument, which is that private decision making, enhanced by new technologies and implemented as part of the "architecture" of the Internet, may undercut the democratic values—freedom of speech, privacy, autonomy, access to information—at the core of our society. Implicit in his argument is that the traditional focus of civil libertarians, which is to challenge government interventions in speech and privacy arenas, may be counterproductive in this new context. If I read him right, Lessig is calling for a new constitutional philosophy, one rooted perhaps in Mill's essay On Liberty, in which government can function as a positive public tool to preserve the liberty values we articulated in the Constitution from private encroachments. Such a philosophy would require, however, a very imaginative "translation" of constitutional values indeed to get past the objection that the Bill of Rights is only about limiting "state action."

What Code is really about is (the author's perception of) the need for political liberals to put a positive face on the role of government without embracing statism or seeming to. Although this is clearly Lessig's project, he's pessimistic about its success—in the public debate about Internet policy, he complains, the libertarians have essentially won the field. What he would like to see, perhaps, is a constitutional structure in which something like the Bill of Rights could be invoked against challenges to personal liberty or autonomy, regardless of whether the challenges come from public or private sources. The ideology of libertarianism, he believes, will interpret the changes wrought by e-commerce and other private action as a given, like the weather. "We will watch as important aspects of privacy and free speech are erased by the emerging architecture of the panopticon, and we will speak, like modern Jeffersons, about nature making it so—forgetting that here, we are nature," he writes in a somewhat forlorn final chapter.

Lessig may be right in his gloomy predictions, but let us suppose that his worst fears are not realized and a new debate does begin about the proper role of government in cyberspace and about appropriate limitations on private crafting of the online architecture. If that happens, it may be that at least some of the thanks for that development will have to go to Lessig's Code.

In 1999, Mike Godwin (@sfmnemonic) was senior legal editor of E-Commerce Law Weekly and had just recently published Cyber Rights: Defending Free Speech in the Digital Age. Currently he is a senior fellow at R Street Institute.


Posted on Techdirt - 18 January 2019 @ 10:43am

The Splinters Of Our Discontent: A Review Of Network Propaganda

from the epistemic-closure dept

Years before most of us thought Donald Trump would have a shot at the presidency, the Cato Institute's Julian Sanchez put a name on a problem he saw in American conservative intellectual culture. Sanchez called it "epistemic closure," and he framed the problem this way:

"One of the more striking features of the contemporary conservative movement is the extent to which it has been moving toward epistemic closure. Reality is defined by a multimedia array of interconnected and cross promoting conservative blogs, radio programs, magazines, and of course, Fox News. Whatever conflicts with that reality can be dismissed out of hand because it comes from the liberal media, and is therefore ipso facto not to be trusted. (How do you know they're liberal? Well, they disagree with the conservative media!)  This epistemic closure can be a source of solidarity and energy, but it also renders the conservative media ecosystem fragile."

Sanchez's comments didn't trigger any kind of real schism in conservative or libertarian circles. Sure, there was some heated debate among conservatives, and a few conservative commentators, like David Frum, Bruce Bartlett, and the National Review's Jim Manzi, acknowledged that there might be some merit to Sanchez's critique. But for most people, this argument among conservatives about epistemic closure hardly counted as serious news.

But the publication last fall of Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics by Yochai Benkler, Robert Faris, and Hal Roberts—more than eight years after the original "epistemic closure" debate erupted—ought to make the issue hot again. This long, complex, yet readable study of the American media ecosystem in the run-up to the 2016 election (as well as the year afterwards) demonstrates that the epistemic-closure problem has generated what the authors call an "epistemic crisis" for Americans in general. The book also shows that our efforts to understand current political division and disruptions simplistically—either in terms of negligent and arrogant platforms like Facebook, or in terms of Bond-villain malefactors like Cambridge Analytica or Russia's Internet Research Agency—are missing the forest for the trees. It's not that the social media platforms are wholly innocent, and it's not that the would-be warpers of voter behavior did nothing wrong (or had no effect). But the seeds of the unexpected outcomes in the 2016 U.S. elections, Network Propaganda argues, were planted decades earlier, with the rise of a right-wing media ecosystem that valued loyalty and confirmation of conservative (or "conservative") values and narratives over truth.

Now, if you're a conservative, you may be reading this broad characterization of Network Propaganda as an attack on conservatism itself. Here are four reasons you shouldn't fall into that trap! First, nothing in this book challenges what might be called core conservative values (at least as they have been understood for most of the last 100 years or so). Those values typically have included favoring limited government over expansive government, preferring economic growth and rights to property over promoting equity and equality for their own sake, supporting business flexibility over labor and governmental demands, committing to certain approaches to tax policy, and so forth. Nothing in Network Propaganda is a criticism of substantive conservative values like these, or even of what may increasingly be taken as "conservative" stances in the Trump era (nationalism or protectionism or opposition to immigration, say). The book doesn't take a position on traditional liberal or progressive political stances either.

Second, nothing in the book discounts the indisputable fact that individuals and media entities on the left, and even in the center, have their own sins and excesses to account for. In fact, the more damning media criticisms in the book are aimed squarely at the more traditional journalistic institutions that made themselves more vulnerable to disinformation and distorted narratives in the name of "objectivity." Where right-wing media set out to reinforce conservative identity and narratives—doing, in fact, what they more or less always promised they were going to do—the institutional press of the left and the center frequently let their superficial commitment to objectivity result in the amplification of disinformation and distortions.

Third, there are philosophical currents on the left as well as the right that call the whole notion of objective facts and truth into question—that consider all questions of fact to represent political judgments rather than anything that might be called "factual" or "truthful." As the authors put it, reform of our media ecosystems "will have to overcome not only right-wing propaganda, but also decades of left-wing criticism of objectivity and truth-seeking institutions." Dedication to truth-seeking is, or ought to be, a transpartisan value.

Which leads us to the fourth reason conservatives should pay attention to Network Propaganda, which is the biggest one. The progress of knowledge, and of problem-solving in the real world, requires us, regardless of political preferences and philosophical approaches, to come together in recognizing the value of facts. Consider: if progressives had cocooned themselves in a media ecosystem that had cut itself off from the facts—that valued tribal loyalty and shared identity over mere factual accuracy—conservatives and centrists would be justified in pointing out not merely that the left's media were unmoored but also that its insistence on doctrinal purity in the face of factual disproof was positively destructive.

But the massive dataset and analyses offered by Benkler, Faris, and Roberts in Network Propaganda demonstrate persuasively that the converse distortion has happened. Specifically, the authors took about four million online stories regarding the 2016 election or national politics generally and analyzed them through Media Cloud, a joint technological project developed by Harvard's Berkman Klein Center and MIT's Center for Civic Media over the course of the last decade. Media Cloud enabled the authors to study not only where the stories originated but also how they were linked and propagated, and how the various entities in our larger media ecosystem link to one another. The Media Cloud analytical system made it possible to study news sites, including the website versions of newspapers like the New York Times and the Wall Street Journal, along with the more politically focused websites on the left and right, like Daily Kos and Breitbart. The system also enabled the authors to study how the stories were retweeted and shared on Facebook, Twitter, and other social media, as well as how, in particular instances, television coverage supplemented or amplified online stories.

You might expect that any study of such a large dataset would show symmetrical patterns of polarization during the pre-election to post-election period the authors studied (basically, 2015 through 2017). It was, after all, an election period, which is typically a time of increased partisanship. You might also expect, given the increasing presence of social-media platforms like Facebook, Twitter, and Instagram in American public life, that the new platforms themselves, just by their very existence and popularity, shaped public opinion in new ways. And you might expect, given the now-indisputable fact that Russian "active measures" were trying to influence the American electorate in certain ways, to see clear proof that the Russians either succeeded or failed in their disinformation/propaganda efforts.

Yet Network Propaganda, instantly a necessary text for those of us who study media ecologies, shows that the data, as captured in the authors’ Media Cloud analyses (frequently represented visually in colorful graphs as well as verbally in tables and in the text of the book itself), point to different conclusions altogether. As Benkler characterizes the team’s findings in the Boston Review:

"The data was not what we expected. There were periods during the research when we were just working on identifying—as opposed to assessing—the impact of Russians, and during those times, I thought it might really have been the Russians. But as we analyzed these millions of stories, looking both at producers and consumers, a pattern repeated again and again that had more to do with the traditional media than the Internet."

That traditional media institutions are seriously culpable for the spread of disinformation is counterintuitive. The authors begin Network Propaganda by observing what most of us also observed—the rise of what briefly was called "fake news" before that term was transmuted by President Trump into shorthand for his critics. But Benkler et al. also note that in the latter half of the 20th century, mainstream journalistic institutions, informed by a wave of professionalization that dates back approximately to the founding of the Columbia University journalism school, historically had been able to overcome most of the fact-free calumnies and conspiracy theories through their commitment to objectivity and fact-checking. Yet mainstream journalism failed the culture in 2016, and it's important for the journals and the journalists to come to terms with why. But doing so means investigating how stories from the fringes interacted with the mainstream.

The fringe stories had weird staying power; in the period centering on the 2016 election, a lot of the stories that were just plain crazy—from the absurd narrative that was "Pizzagate" to claims that Jeb Bush had "close Nazi ties" (Alex Jones played a role in both of these narratives)—persistently resurfaced in the way citizens talked about the election. To the Network Propaganda authors, it became clear that in recent years something new has emerged—namely, a variety of disinformation that seems, weedlike, to survive the most assiduous fact-checkers and persist in resurfacing in the public mind.

How did this emergence happen, and should we blame the internet? Certainly this phenomenon didn't manifest in any way predicted by either the more optimistic pundits at the internet's beginnings or the backlash pessimists who followed. The optimists had believed that increased democratic access to mass media might give rise to a wave of citizen journalists who supplemented and ultimately complemented institutional journalism, leading both to more accuracy in reporting and more citizen engagement. The pessimists predicted "information cocoons" (Cass Sunstein's term) and "filter bubbles" (Eli Pariser's term), punctuated to some extent by quarrelsomeness, because online media can disinhibit bad behavior.

Yes, to some extent, the optimists and the pessimists both found confirmation of their predictions, but what they didn't expect, and what few if any seem to have predicted, was the marked asymmetry in how the predictions played out in the 2015-2017 period with regard to the 2016 election processes and their outcome. As the authors put it, "[t]he consistent pattern that emerges from our data is that, both during the highly divisive election campaign and even more so during the first year of the Trump presidency, there is no left-right division, but rather a division between the right and the rest of the media ecosystem. The right wing of the media ecosystem behaves precisely as the echo-chamber models predict—exhibiting high insularity, susceptibility to information cascades, rumor and conspiracy theory, and drift toward more extreme versions of itself. The rest of the media ecosystem, however, operates as an interconnected network anchored by organizations, both for profit and nonprofit, that adhere to professional journalistic norms."

As a result, this period saw the appearance of disinformation narratives that targeted Trump and his primary opponents as well as Hillary Clinton, but the narratives that got more play, not just in right-wing outlets but ultimately in the traditional journalistic outlets as well, were the ones that centered on Clinton. This happened even when there were fewer available facts supporting the anti-Clinton narratives and (occasionally) more facts supporting the anti-Trump narratives. The explanation for the anti-Clinton narratives' longevity in the news cycle, the data show, is the right-wing media ecology's focus on its two focal media nodes, Fox News and Breitbart. At times during this period, Breitbart took the lead from Fox News as an influencer; Fox eventually responded by repositioning itself, after Trump's nomination, as a solid Trump booster.

In contrast, left-wing media had no single outlet that defined orthodoxy for progressives. Instead, left-of-center outlets worked within the larger sphere of traditional media, and, because they were competing for the rest of the audience that had not committed itself to the Fox/Breitbart ecosystem, were constrained to adhere, mostly, to facts that were confirmable by traditional media institutions associated with the center-left (the New York Times and the Washington Post, say) as well as with the center-right (e.g., the Wall Street Journal). Basically, even if you were an agenda-driven left-oriented publication or online outlet, your dependence on reaching the mainstream for your audience meant that you couldn't get away with just making stuff up, or with laundering far-left conspiracy theories from more marginal sources.

Network Propaganda's data regarding the right-wing media ecosystem—that it's insular, prefers confirmation of identity and loyalty over self-correction, demonizes perceived opponents, and resists disconfirmation of its favored narratives—map well onto political-communication scholars Kathleen Hall Jamieson and Joseph Cappella's 2008 book, Echo Chamber: Rush Limbaugh and the Conservative Media Establishment. In that book, Jamieson and Cappella outlined how, as they put it, "these conservative media create a self-protective enclave hospitable to conservative beliefs." As a consequence, they write:

"[t]his safe haven reinforces conservative values and dispositions, holds Republican candidates and leaders accountable to conservative ideals, tightens their audience's ties to the Republican Party, and distances listeners, readers, and viewers from 'liberals," in general, and Democrats, in particular. It also enwraps them in a world in which facts supportive of Democratic claims are contested and those consistent with conservative ones championed."

The data analyzed by Benkler et al. in Network Propaganda support Jamieson and Cappella's conclusions from more than a decade ago. Moreover, Benkler et al. argue that the key factors in the promotion of disinformation were not "clickbait fabricators" (who craft eye-grabbing headlines to generate revenue), or Russian "active measures," or the corrosive effects of the (relatively) new social-media platforms Facebook and Twitter. The authors are aware that in making this argument they're swimming against the tide:

"Fake news entrepreneurs, Russians, the Facebook algorithm, and online echo chambers provide normatively unproblematic, nonpartisan explanations to the current epistemic crisis. For all of these actors, the strong emphasis on technology suggests a novel challenge that our normal systems do not know how to handle but that can be addressed in a nonpartisan manner. Moreover, focusing on 'fake news' from foreign sources and on Russian efforts to intervene places the blame onto foreigners with no legitimate stake in our democracy. Both liberal political theory and professional journalism consistently seek neutral justifications for democratic institutions, so visibly nonpartisan explanations such as these have enormous attraction."

Nevertheless, Network Propaganda argues, the nonpartisan explanations are inconsistent with what the data show, which the authors characterize as "a radicalization of roughly a third of the American media system." (It isn't "polarization," since the data don't show any symmetry between left and right "poles.") The authors argue that "[n]o fact emerges more clearly from our analysis of how four million political stories were linked, tweeted, and shared over a three-year period than that there is no symmetry in the architecture and dynamics of communications within the right-wing media ecosystem and outside of it." In addition, they write, "we have observed repeated public humiliation and vicious disinformation campaigns mounted by the leading sites in this sphere against individuals who were the core pillars of Republican identity a mere decade earlier." Those campaigns against Republican stalwarts came from the radicalized right-wing media sources, not from the left.

The authors acknowledge that they "do not expect our findings to persuade anyone who is already committed to the right-wing media ecosystem. [The data] could be interpreted differently. They could be viewed as a media system overwhelmed by liberal bias and opposed only by a tightly-clustered set of right-wing sites courageously telling the truth in the teeth of what Sean Hannity calls the 'corrupt, lying media,' rather than our interpretation of a radicalized right set apart from a media system anchored in century-old norms of professional journalism." But that interpretation of the data flies in the face of Network Propaganda's extensive demonstration that the traditional mainstream media—in what the authors call "the performance of objectivity"—actually amplified right-wing narratives rather than successfully challenging the false or distorted ones. (The authors explore this paradox in Chapter 6.)

Democrats and progressives won't have any trouble accepting the idea that radicalized right-wing media are the primary cause of what the authors call today's "epistemic crisis." But Benkler and his co-authors want Republicans to recognize what they lost in 2016:

"The critical thing to understand as you read this book is that the epochal change reflected by the 2016 election and the first year of the Trump presidency was not that Republicans beat Democrats [but instead] that in 2016 the party of Ronald Reagan and the two presidents Bush was defeated by the party of Donald Trump, Breitbart, and billionaire Robert Mercer. As our data show, in 2017 Fox News joined the victors in launching sustained attacks on core pillars of the Party of Reagan—free trade and a relatively open immigration policy, and, most directly, the national security establishment and law enforcement when these threatened President Trump himself."

It's possible that many or even most Republicans don't yet want to hear this message—the recent shuttering of The Weekly Standard underscores one of the consequences of radicalization of right-wing media, which is that center-right outlets, more integrated with the mainstream media in terms of journalistic professionalism and factuality, have lost influence in the right-wing media sphere. (It remains to be seen whether The Bulwark helps fill the gap.)

But the larger message from Network Propaganda's analyses is that we're fooling ourselves if we blame our current culture's vulnerability to disinformation on the internet in general or on social media (or search engines, or smartphones) … or even on Russian propaganda campaigns. Blaming the Russians is trendy these days, and even Kathleen Hall Jamieson, whose 2008 book on right-wing media, Echo Chamber, informs the authors' work in Network Propaganda, has adopted the thesis that the Russians probably made the difference for Trump in 2016. Her recent book Cyberwar—published a month after Network Propaganda—spells out a theory of Russian influence in the 2016 election that also, predictably, raises concerns about social media, as well as focusing on the role of the WikiLeaks releases of hacked DNC emails and how the mainstream media responded to those releases.

Popular accounts of Jamieson's book have interpreted Cyberwar as proof that the Russians are the central culprits in any American 2016 electoral dysfunction, even though Jamieson carefully qualifies her reasoning and conclusions in all the ways you would want a responsible social scientist to do. (She doesn't claim to have proved her thesis conclusively.) Taken together with the trend of seeing social media as inherently socially corrosive, the Russians-did-it narrative suggests that if Twitter and Facebook (and Facebook-integrated platforms like Instagram and WhatsApp) clean up their acts and find ways to purge their products of foreign actors as well as homegrown misleading advertising and "fake news," the political divisiveness we've seen in recent years will subside. But Network Propaganda provides strong reason to believe that reforming or regulating or censoring the internet companies won't solve the problems they're being blamed for. True, the book expressly endorses public-policy responses to the disinformation campaigns of malicious foreign actors as well as reforms of how the platforms handle political advertising. But, the authors insist, the problem isn't primarily the Russians, or technology—it's in our political and media cultures.

Possibly Jamieson is right to think that the Russians' "active measures," amplifying pre-existing political divisions through social media, were the final straw that changed the outcome of the 2016 election. Even so, at its best Jamieson's book is a snapshot of how vulnerable our political culture was in 2016. And her theory of Russian influence requires some suspension of disbelief, notably her suggestion that then-FBI-director James Comey's interventions—departures from DOJ/FBI norms—were somehow caused by the fact of the Russian campaign. Even if you accept her account, it's an account of our vulnerability that doesn't explain where the vulnerability came from.

In contrast, Network Propaganda has a fully developed theory of where that vulnerability came from, and traces it—in ways aligned with Jamieson's previous scholarship—to sources that predate the modern internet and social media. In addition, in what may be a surprise given the book's focus on what might mistakenly be taken as a problem unique to American political culture, Network Propaganda expressly places the American problems in the context of a larger worldwide tendency to blame internet platforms in particular for social ills:

"For those not focused purely on the American public sphere, our study suggests that we should focus on the structural, not the novel; on the long-term dynamic between institutions, culture, and technology, not only the disruptive technological moment; and on the interaction between the different media and technologies that make up a society's media ecosystem, not on a single medium, like the internet, much less a single platform like Facebook or Twitter. The stark differences we observe between the insular right-wing media ecosystem and the majority of the American media environment, and the ways in which open web publications, social media, television, and radio all interacted to produce these differences, suggest that the narrower focus will lead to systematically erroneous predictions and diagnoses. It is critical not to confound what is easy to measure (Twitter) with what is significantly effective in shaping beliefs and politically actionable knowledge in society.... Different countries, with different histories, institutional structures, and cultural practices of collective sense-making need not fear the internet's effects. There is no echo chamber or filter-bubble effect that will inexorably take a society with a well-functioning public sphere and turn it into a shambles simply because the internet comes to town."

Benkler, Faris, and Roberts expressly acknowledge, however, that it's appropriate for governments and companies to consider how they regulate political advertising and targeted messaging going forward—even if this online content can't be shown to have played a significant corrosive role in past elections, there's no guarantee that refined versions won't be more effective in the future. But even more important, they insist, is the need to address larger institutional issues affecting our public sphere. The book's Chapter 13 addresses a full range of possible reforms. These include "reconstructing center-right media" (to address what the authors think Julian Sanchez correctly characterized as an "epistemic closure" problem) as well as insisting that professional journalists recognize that they're operating in a propaganda-rich media culture, which ethically requires them to do something more than "performance of objectivity."

The recommendations also include promoting what they call a "public health approach to the media ecosystem," which essentially means obligating the tech companies and platforms to disclose "under appropriate legal constraints [such as protecting individual privacy]" the kind of data we need to assess media patterns, dysfunctions, and outcomes. They write, correctly, that we "can no more trust Facebook to be the sole source of information about the effects of its platform on our media ecosystem than we could trust a pharmaceutical company to be the sole source of research on the outcome of its drugs, or an oil company to be the sole source of measurements of particles emissions or CO2 in the atmosphere."

The fact is that the problems in our political and media culture can't be delegated to Facebook or Twitter to solve on their own. Any comprehensive, holistic solutions to our epistemic crises require not only transparency and accountability but also fully engaged democracy with full access to the data. Yes, that means you and me. It's time for our epistemic opening.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at R Street Institute.


Posted on Free Speech - 30 November 2018 @ 12:03pm

Our Bipolar Free-Speech Disorder And How To Fix It (Part 3)

from the social-media-and-free-speech-needs-a-new-framework dept

Part 1 and Part 2 of this series have emphasized that treating today's free-speech ecosystem in "dyadic" ways—that is, treating each issue as fundamentally a tension between two parties or two sets of stakeholders—doesn't lead to stable or predictable outcomes that adequately protect free speech and related interests.

As policymakers consider laws that affect platforms or other online content, it is critical that they consider Balkin's framework and the implications of this "new-school speech regulation" that the framework identifies. Failure to apply it could lead—indeed, has led in the recent past—to laws or regulations that indirectly undermine basic free expression interests.

A critical perspective on how to think about free speech in the twenty-first century requires that we recognize the extent to which free speech is facilitated by the internet and its infrastructure. We also must recognize that free speech is in some new ways made vulnerable by the internet and its infrastructure. In particular, free speech is enhanced by the lowered barriers to entry for speakers that the internet creates. At the same time, free speech is made vulnerable insofar as the internet and the infrastructure it provides for freedom of speech are subject to legal and regulatory action that may not be transparent to users. For example, a government may seek to block a dissident website's domain name, or may seek to block dissident speakers' use of certain payment systems.

There are of course non-governmental forces that may undermine or inhibit free speech—for example, the lowered barriers to entry make it easier for harassers or stalkers to discourage individuals from participation. This is in some sense an old problem in free-speech doctrine; the so-called "heckler's veto" is a subset of it. The problem of harassment may give rise to users' complaints directly to the platform provider, or to demands that government regulate the platforms (and other speakers) more.

Balkin explores the ways in which government can exercise both hard and soft power to censor or regulate speech at the infrastructure level. This can include direct changes in the law aimed at compelling internet platforms to censor or otherwise limit speech. It can include pressure that doesn't rise to the level of law or regulation, as when a lawmaker warns a platform that it must figure out how to regulate certain kinds of troubling expression because "[i]f you don't control your platform, we're going to have to do something about it." And it can include changes in law or regulation aimed at increasing incentives for platforms to self-police with a heavier hand. Balkin characterizes the ways in which government can regulate the speech of citizens and press indirectly, through pressure on or regulation of platforms and other intermediaries like payment systems, as "New School Speech Regulation."

The important thing to remember is that government itself, although often asked to arbitrate issues that arise between internet platforms and users, is not always a disinterested party. For example, a government may have its own reasons for incentivizing platforms to collect more data (and to disclose the data they have collected), such as through National Security Letters. Because the government may regulate speech indirectly and non-transparently, it cannot position itself on all issues as a neutral referee of competing interests between platforms and users. In a strong sense, the government may have interests of its own that are in opposition to user interests, platform interests, or both.

Toward a new framework

It is important to recognize that entities at each corner of Balkin's "triangular" model may each have valid interests. For example, governmental entities may have valid interests in capturing data about users, or in suppressing or censoring certain (narrow) classes of speech, although only within a larger human-rights context in which speech is presumptively protected. End-users and traditional media companies share a presumptive right to free speech, but also other rights consistent with Article 19 of the Universal Declaration of Human Rights:

"Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."

The companies, including but not limited to the internet infrastructure companies in the top right corner of the triangle, may not have the same kind of legal status that end users or traditional media have. By the same token, they may not have the same kind of presumptively necessary role in democratic governance as governments have. But we may pragmatically recognize that they have a presumptive right to exist, pursue profit, and innovate, on the theory that their doing so ultimately redounds to the benefit of end users and even traditional media, largely by expanding the scope of voice and access.

Properly, we should recognize all these players in the "triangular" paradigm as "stakeholders." With the exception of the manifestly illegal or malicious entities in the paradigm (e.g., "hackers" and "trolls"), entities at all three corners each have their respective interests that may be in some tension with actors at other corners of the triangle. Further, the bilateral processes between any two sets of entities may obscure or ignore the involvement of the third set in shaping goals and outcomes.

What this strongly suggests is the need for all (lawful, non-malicious) entities to work non-antagonistically towards shared goals in a way that heightens transparency and that improves holistic understanding of the complexity of internet free speech as an ecosystem.

Balkin suggests that his free-speech-triangle model is a model that highlights three problems: (1) "new school" speech regulation that uses the companies as indirect controllers and even censors of content, (2) "private governance" by companies that lacks transparency and accountability, and (3) the incentivized collection of big data that makes surveillance and manipulation of end users (and implicitly the traditional media) easier. He offers three suggested reforms: (a) "structural" regulation that promotes competition and prevents discrimination among "payment systems and basic internet services," (b) guarantees of "curatorial due process," and (c) recognition of "a new class of information fiduciaries."

Of the reforms, the first may be taken as a straightforward call for "network neutrality" regulation, a particular type of regulation of internet services that Balkin has expressly and publicly favored (e.g., his co-authored brief in the net neutrality litigation). But it actually articulates a broader pro-competition principle that has implications for our current internet free-speech ecosystem.

Specifically, the imposition of content-moderation obligations by law and regulation actually inhibits competition and discriminates in favor of incumbent platform companies. Which is to say, because content moderation requires a high degree both of capital investment (developing software and hardware infrastructure to respond to and anticipate problems) and of human intervention (because AI filters make stupid decisions, including false positives, that have free-speech impacts), highly capitalized internet incumbent "success stories" are ready to be responsive to law and regulation in ways that startups and market entrants generally are not. The second and third suggestions—that the platforms provide guarantees of "due process" in their systems of private governance, and that the companies that collect and hold Big Data meet fiduciary obligations—need less explanation. But I would add to the "information fiduciary" proposal that we would properly want such a fiduciary to be able to invoke some kind of privilege against routine disclosure of user information, just as traditional fiduciaries like doctors and lawyers are able to do.

Balkin's "triangle" paradigm, which gives us three sets of discrete stakeholders, three problems relating to the stakeholders' relationships with one another, and three reforms is a good first step to framing internet free-speech issues non-dyadically. But while the taxonomy is useful it shouldn't be limiting or necessarily reducible to three. There are arguably some additional reforms that ought to be considered, at a "meta" level (or, if you will, above and outside the corners of the free-speech triangle). With this in mind let us add the following "meta" recommendations to Balkin's three specific programmatic ones.

Multistakeholderism. The multipolar model that Balkin suggests, or any non-dyadic model, has institutionalized precursors in the world of internet law and policy: the multistakeholder institutions. Those precursors, ranging from hands-on regulators and norm-setters like ICANN to broader and more inclusive policy discussion forums like the Internet Governance Forum, are by no means perfect and so must be subjected to ongoing critical review and refinement. But they're better at providing a comprehensive, holistic perspective than lawmaking and court cases are. Governments should be able to participate, but should be recognized as stakeholders and not just referees.

Commitment to democratic values, including free speech, on the internet. Everyone agrees that some kinds of expression on the internet are disturbing and disruptive—yet, naturally enough, not everybody agrees about what should be banned or controlled. We need to work actively to uncouple the commitment to free speech on the internet—which we should embrace as a function of both the First Amendment and international human-rights instruments—from debates about particular free-speech problems. The road to doing this lies in bipartisan (or multipartisan, or transpartisan) commitment to free-speech values. The road away from that commitment lies in the presumption that "free speech" is a value that is more "right" than "left" (or vice versa). To save free speech for any of us, we must commit in the establishment of our internet policies to what Holmes called "freedom for the thought that we hate."

Commitment to "open society" models of internet norms and internet governance institutions. Recognition, following Karl Popper's The Open Society and Its Enemies (Chapter 7) that our framework for internet law and regulation can't be "who has the right to govern" because all stakeholders have some claims of right regarding this. And it can't be "who is the best to govern" because that model leads to disputed notions of who's best. Instead, as Popper frames it,

"For even those who share this assumption of Plato's admit that political rulers are not always sufficiently 'good' or 'wise' (we need not worry about the precise meaning of these terms), and that it is not at all easy to get a government on whose goodness and wisdom one can implicitly rely. If that is granted, then we must ask whether political thought should not face from the beginning the possibility of bad government; whether we should not prepare for the worst leaders, and hope for the best. But this leads to a new approach to the problem of politics, for it forces us to replace the question: Who should rule? by the new question: How can we so organize political institutions that bad or incompetent rulers can be prevented from doing too much damage?"

Popper's focus on institutions that prevent "too much damage" when "the worst leaders" are in charge is the right one. Protecting freedom of speech in today's internet ecosystem requires protecting against the excesses or imbalances that necessarily result from merely dyadic conceptions of where the problems are or where the responsibilities for correcting them lie. If, for example, government or the public want more content moderation by platforms, there need to be institutions that facilitate education and improved awareness about the tradeoffs. If, as a technical and human matter, it's difficult (maybe impossible) to come up with a solution that (a) scales and (b) doesn't lead to a parade of objectionable instances of censorship/non-censorship/inequity/bias, then we need to create institutions in which that insight is fully shared among stakeholders. Facebook has promised more than once to throw money at AI-based solutions, or partial solutions, to content problems, but the company is in the unhappy position of having a full wallet with nothing that's worth buying, at least for that purpose. (See "Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy?") The alternative will be increasing insistence that platforms engage in "private governance" that's both inconsistent and less accountable. In the absence of an "ecosystem" perspective, different stakeholders will insist on short-term solutions that ignore the potential for "vicious cycle" effects.

Older models for mass-media free-speech regulation were built around entities like newspapers and publishers, with high degrees of editorial control, and common carriers like the telephone and telegraph companies, which mostly did not make content-filtering determinations. There is likely no version of these older models that would work for Twitter or Facebook (or similar platforms) while maintaining the great increase in freedom of expression that those platforms have enabled. Dyadic conceptions of responsibility may lead to "vicious cycles," as when Facebook is pressured to censor some content in response to demands for content moderation, and the company's response creates further unhappiness with the platform (because the human beings who are the ultimate arbiters of individual content-moderation decisions are fallible, inconsistent, etc.). At that point, the criticism of the platform may frame itself as a demand for less "censorship" or for more "moderation" or for the end of all unfair censorship/moderation. There may also be the inference that platforms have deliberately been socially irresponsible. Although that inference may be correct in some specific cases, the general truth is that the platforms have more typically been wrestling with a range of different, competing responsibilities.

It is safe to assume that today's mass-media platforms, including but not limited to social media, as well as tomorrow's platforms, will generate new models aimed at ensuring that freedom of speech is protected. But the only way to increase the chances that the new models will be the best possible models is to create a framework of shared free-speech and open-society values, and to ensure that each set of stakeholders has its seat at the table when the model-building starts.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.


Posted on Free Speech - 29 November 2018 @ 11:58am

Our Bipolar Free-Speech Disorder And How To Fix It (Part 2)

from the the-free-speech-triangle dept

In Part 1 of this series, I gave attention to law professor Jack Balkin's model of "free speech as a triangle," where each vertex of the triangle represents a group of stakeholders. The first vertex is government and intergovernmental actors. The second is internet platform and infrastructure providers, and the third is users themselves. This "triangle" model of speech actors is useful because it enables us to characterize the relationships among each set of actors, thereby illuminating how the nature of regulation of speech has changed and become more complicated than it used to be.

Take a look again at Balkin's Figure 1.

Although it's clearer when we visualize all the players in the free-speech regulation landscape that a "free-speech triangle" at least captures more complexity than the usual speakers-against-the-government or speakers-against-the-companies or companies-against-the-government models, the fact is that our constitutional law and legal traditions predispose us to think of these questions in binary rather than, uh, "trinary" terms. We've been thinking this way for centuries, and it's a hard habit to shake. But shaking the binary habit is a necessity if we're going to get the free-speech ecosystem right in this century.

To do this we first have to look at how we typically reduce these "trinary" models to the binary models we're more used to dealing with. With three classes of actors, there are three possible "dyads" of relationships: user–platform, government–platform, and user–government.

(a) Dyad 1: User complaints against platforms (censorship and data gathering)

Users' complaints about platforms may ignore or obscure the effects of government demands on platforms and their content-moderation policies.

Typically, public controversies around internet freedom of expression are framed, by news coverage and analysis as well as by stakeholders themselves, as binary oppositions. If there is a conflict over content between (for example) Facebook and a user, especially if it occurs more than once, that user may conclude that her content was removed for fundamentally political reasons. This perception may be exacerbated if the censorship was framed as a response to a violation of the platform's terms of service. A user subject to such censorship may believe that her content is no more objectionable than that of users who weren't censored, or that her content is being censored while content that is just as heated, but representing a different political point of view, isn't. Naturally enough, this outcome seems unfair, and a user may infer that the platform as a whole is politically biased against those who share her political beliefs. It should be noted that complaints about politically motivated censorship apparently come from most, and perhaps all, political sectors.

A second complaint from users may derive from data collection by a platform. This may not directly affect the content of a user's speech, but it may affect the kind of content she encounters, which, when driven by algorithms aimed at increasing her engagement on the platform, may serve not only to urge her participation in more and more commercial transactions, but also to "radicalize" her, anger her, or otherwise disturb her. Even if an individual may judge herself more or less immune from algorithmically driven urges to view more and more radical and radicalizing content, she may be disturbed by the radicalizing effects that such content may be having on her culture generally. (See, e.g., Tufekci, Zeynep, "YouTube, the Great Radicalizer.") And she may be disturbed at how an apparently more radicalized culture around her interacts with her in more disturbing ways.

Users may be concerned both about censorship of their own content (censorship that may seem unjustified) and about platforms' use of data, which may seem designed to manipulate them or to manipulate other people. In response, users (and others) may demand that platforms track bad speakers or retain data about who bad speakers are (e.g., to prevent bad speakers from abandoning "burned" user accounts and returning with new accounts to create the same problems) as well as about what speakers say (so as to police bad speech more). But a short-term pursuit of pressuring platforms to censor more or differently, or to gather less data (about users themselves) or more data (about how users' data are being used), has an obvious, predictable outcome: to the extent the companies respond to these pressures, governments may leverage platforms' responses to user complaints in ways that make it easier for government to pressure platforms for more user content control (not always with the same concerns that individual users have) or to provide user data (because governments like to exercise the "third-party" doctrine to get access to data that users have "voluntarily" left behind on internet companies' and platform providers' services).

(b) Dyad 2: Governments' demands on platforms (content and data)

Government efforts to impose new moderation obligations on platforms, even in response to user complaints, may result in versions of the platforms that users value less, as well as more pressure on government to intervene further.

In the United States, internet platform companies (like many other entities, including ordinary blog-hosting services and arguably bloggers themselves) will find that their First Amendment rights are buttressed and extended by Section 230 of the Communications Decency Act, which generally prohibits content-based liability for those who reproduce on the internet content that is originated by others. Although a full discussion of the breadth of, and the exceptions to, Section 230—which was enacted as part of the omnibus federal Telecommunications Act reform in 1996—is beyond the scope of this particular paper, it is important to underscore that Section 230 extends the scope of protection for "intermediaries" more broadly than First Amendment case law alone, if we are to judge by relevant digital-platform cases prior to 1996, might have done. But the embryonic case law in those early years of the digital revolution seemed to be moving in a direction that would have resulted in at least some First Amendment protections for platforms, consistent with principles that protect traditional bookstores from legal liability for the content of particular books. One of the earliest prominent cases concerning online computer services, Cubby v. CompuServe (1991), drew heavily on a 1959 Supreme Court case, Smith v. California, which established that bookstores and newsstands were properly understood to deserve First Amendment protections based on their importance to the distribution of First Amendment-protected content.

Section 230's broad, bright-line protections (taken together with the copyright-specific protections for internet platforms created by the Digital Millennium Copyright Act in 1998) are widely interpreted by legal analysts and commentators as having created the legal framework that gave rise to internet-company success stories like Google, Facebook, and Twitter. These companies, as well as a raft of smaller, successful enterprises like Wikipedia and Reddit, originated in the United States and were protected in their infancy by Section 230. Even critics of the platforms—and there are many—typically attribute the success of these enterprises to the scope of Section 230. So it's no great surprise to discover that many and perhaps most critics of these companies (who may be government actors or private individuals) have become critics of Section 230 and want to repeal or amend it.

In particular, government entities in the United States, both at the federal level and at the state level, have sought to impose greater obligations on internet platforms not merely to remove content that is purportedly illegal, but also to prevent that content from being broadcast by a platform in the first place. The notice-and-takedown model of the Digital Millennium Copyright Act of 1998, which lends itself to automated enforcement and remedies to a higher degree than non-copyright-related content complaints, is frequently suggested by government stakeholders as a model for how platforms ought to respond to complaints about other types of purportedly illegal content, including user-generated content. That copyright enforcement, as distinct from enforcement of other communications-related crimes or private causes of action, is comparatively much simpler than most other remedies in communications law is a fact typically passed over by those who are unsympathetic to today's social-media landscape.

Although I'm focusing here primarily on U.S. government entities, this tendency is also evident among the governments of many other countries, including many countries that rank as "free" or "partly free" in Freedom House's annual world freedom report. It may be reasonably asserted that the impulse of governments to offload the work of screening for illegal (or legal but disturbing) content is international. The European Union, for example, is actively exploring regulatory schemes that implicitly or explicitly impose content-policing norms on platform companies and that impose quick and large penalties if the platforms fail to comply. American platforms, which operate internationally, must abide by these systems at least with regard to their content delivery within EU jurisdictions as well as (some European regulators have argued) anywhere else in the world.

Added to governments' impulse to impose content restrictions and policing obligations on platforms is governments' hunger for the data that platforms collect. Not every aspect of the data that platforms like Google, Facebook, and Twitter collect on users is publicly known, nor have the algorithms (decision-making processes and criteria implemented by computers) that the platforms use to decide what content may need monitoring, or what content users might prefer, been generally published. The reasons some aspects of the platforms' algorithmic decision-making are kept secret may generally be reduced to two primary arguments. First, the platforms' particular choices about algorithmically selecting and serving content, based on user data, may reasonably be classed as trade secrets, so that if they were made utterly public a competitor could free-ride on the platforms' (former) trade secrets to develop competing products. Second, if platform algorithms are made wholly public, it becomes easier for anyone—ranging from commercial interests to mischievous hackers and state actors—to "game" content so that it is served to more users by the platform algorithms.

Governments, recognizing that protections for platforms have made it easier for the platforms to survive and thrive, may wish to modify the protections they have granted, or to impose further content-moderation obligations on platforms as a condition of statutory protections. But even AI-assisted moderation measures will necessarily be either post-hoc (which means that lots of objectionable content will be public before the platform curates it) or pre-hoc (which means that platforms will become gatekeepers of public participation, shoehorning users into a traditional publishing model or an online-forum model as constrained by top editors as the early version of the joint Sears-IBM service Prodigy was).

(c) Dyad 3: People (and traditional press) versus government.

New, frequently market-dominant internet platforms for speakers create new government temptations and capabilities to (i) surveil online speech, (ii) leverage platforms to suppress dissident or unpopular speech or deplatform speakers, and/or (iii) employ or compel platforms to manipulate public opinion (or to regulate or suppress manipulation).

It's trivially demonstrable that some great percentage of complaints about censorship in open societies is grounded in individual speakers' or traditional publishers' complaints that government is acting to suppress certain kinds of speech. Frequently the speech in question is political speech, but sometimes it is speech of other kinds (e.g., allegedly defamatory, threatening, fraudulent, or obscene speech). This dyad is, for the most part, the primary subject matter of traditional First Amendment law. It is also a primary focus of international free-expression law, where freedom of expression is understood to be guaranteed by national or international human-rights instruments (notably Article 19 of the International Covenant on Civil and Political Rights).

But this dyad has been distorted in the twenty-first century, in which, more often than not, troubling political speech or other kinds of troubling public speech are normally mediated by internet platforms. It is easier on some platforms, but by no means all platforms, for speakers to be anonymous or pseudonymous. Anonymous or pseudonymous speech is not universally regarded by governments as a boon to public discourse, and frequently governments will want to track or even prosecute certain kinds of speakers. Tracking such speakers was difficult (although not necessarily impossible) in the pre-internet era of unsigned postcards and ubiquitous public telephones. But internet platforms have created new opportunities to discover, track, and suppress speech as a result of the platforms' collection of user data for their own purposes.

Every successful internet platform that allows users to express themselves has been a target of government demands for disclosure of information about users. In addition, internet platforms are increasingly the target of government efforts to mandate assistance (including the building of more surveillance-supportive technologies) in criminal-law or national-security investigations. In most ways this is analogous to the 1994 passage of CALEA in the United States, which obligated telephone companies (that is, providers of voice telephony) to build technologies that facilitated wiretapping. But a major difference is that the internet platforms more often than not capture far more information about users than telephone companies traditionally had done. (This generalization to some extent oversimplifies the difference, given that there is frequently convergence between the suites of services that internet platforms and telephone companies—or cable companies—now offer their users.)

Governmental monitoring may suppress dissenting (or otherwise troubling) speech, but governments (and other political actors, such as political parties) may also use internet platforms to create or potentiate certain kinds of political speech in opposition to the interests of users. Siva Vaidhyanathan documents how Facebook advertising was used in the 2016 election in ways aimed at achieving political results, including not just voting for an approved candidate but also dissuading some voters from voting at all.

As Vaidhyanathan writes: "Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue." Plus this: "Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, 'they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,' said Professor David Carroll of the Parsons School of Design."

There are legitimate differences of opinion regarding the proper regime for regulation of political advertising, as well as regarding the extent to which regulation of political advertising can be implemented consistent with existing First Amendment precedent. It should be noted, however, that advertising of the sort that Vaidhyanathan discusses raises issues not only of campaign spending (although in 2016, at least, the spending on targeted Facebook political advertising of the "Custom Audiences" variety seems to have been comparatively small) but also of transparency and accountability. Advertising that's micro-targeted and ephemeral is arguably not accountable to the degree that an open society should require. There will be temptations for government actors to use mechanisms like "Custom Audiences" to suppress opponents' speech—and there also will be temptations for government to limit or even abolish such micro-targeted instances of political speech.

What is most relevant here is that the government may address temptations either to employ features like "Custom Audiences" or to suppress the use of those features by other political actors in non-transparent or less formal ways (e.g., through the "jawboning" that Jack Balkin describes in his "New School Speech Regulation" paper). Platforms—especially market-dominant platforms that, as a function of their success and dominance, may be particularly targeted on speech issues—may feel pressured to remove dissident speech in response to government "jawboning" or other threats of regulation. And, given the limitations of both automated and human-based filtering, a platform that feels compelled to respond to such governmental pressure is almost certain to generate results that are inconsistent and that give rise to further dissatisfaction, complaints, and suspicions on the part of users—not just the users subject to censorship or deplatforming, but also users who witness such actions and disapprove of them.

Considered both separately and together, the traditional "dyadic" models of how to regulate free speech each tend to focus on two vertices of the free-speech triangle while overlooking the third, whose stakeholders may intervene in, distort, exploit, or be exploited by the outcomes of conflicts between the other two stakeholder groups. What this suggests is that no "dyadic" conception of the free-speech ecosystem is sufficiently complex and stable to protect freedom of expression or, for that matter, citizens' autonomy interests in privacy and self-determination. This leaves us with the question of whether it is possible to direct our law and policy in a direction that takes into account today's "triangular" free-speech ecosystem in ways that provide stable, durable, expansive protections of freedom of speech and other valid interests of all three stakeholder groups. That question is the subject of Part 3 of this series.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.


Posted on Free Speech - 28 November 2018 @ 11:56am

Our Bipolar Free-Speech Disorder And How To Fix It (Part 1)

from the free-speech-and-social-media dept

When we argue about how to respond to complaints about social media and internet companies, the resulting debate seems to break down into two sides. On one side, typically, are those who argue that it ought to be straightforward for companies to monitor (or censor) more problematic content. On the other are people who insist that the internet and its forums and platforms—including the large dominant ones like Facebook and Twitter—have become central channels for exercising freedom of expression in the 21st century, and that we don't want to risk that freedom by forcing the companies to be monitors or censors, not least because they're guaranteed to make as many lousy decisions as good ones.

By reflex and inclination, I usually have fallen into the latter group. But after a couple of years of watching various slow-motion train wrecks centering on social media, I think it's time to break out of the bipolar disorder that afflicts our free-speech talk. Thanks primarily to a series of law-review articles by Yale law professor Jack Balkin, I now believe free-speech debates no longer can be simplified in terms of government-versus-people, companies versus people, or government versus companies. No "bipolar" view of free speech on the internet is going to give us the complete answers, and it's more likely than not to give us wrong answers, because today speech on the internet isn't really bipolar at all—it's an "ecosystem."

Sometimes this is hard for civil libertarians, particularly Americans, to grasp. The First Amendment (like analogous free-speech guarantees in other democracies) tends to reduce every free-speech or free-press issue to people-versus-government: the people spoke, and the government sought to regulate that speech. By its terms, the First Amendment is directed solely at averting government impulses to censor, whether aimed at (a) publishers' right to publish controversial content or (b) individual speakers' right to speak controversial content. This is why First Amendment cases most commonly are named either with the government as a listed party (e.g., Chaplinsky v. New Hampshire) or with a government official, acting in his or her official role, as a named party (e.g., Attorney General Janet Reno in Reno v. ACLU).

But in some sense we've always known that this model is oversimplified. Even cases in which the complainant was nominally a private party still involved government action in the form of enactment of speech-restrictive laws that gave rise to the complaint. In New York Times Co. v. Sullivan, the plaintiff, Sullivan, was a public official, but his defamation case against the New York Times was grounded in his reputational interest as an ordinary citizen. In Miami Herald Publishing Co. v. Tornillo, plaintiff Tornillo was a citizen running for a state-government office who invoked a state-mandated "right of reply" because he wanted to compel the Herald to print his responses to editorials that were critical of his candidacy. In each of these cases, the plaintiff's demand did not itself represent a direct exercise of government power; the private plaintiffs' complaints were personal to them. Nevertheless, in each case the role of government (in protecting reputation as a valid legal interest, and in providing a political candidate a right of reply) was deemed by the Supreme Court to represent an exercise of governmental power. For this reason, the Court concluded that these cases, despite their superficial focus on a private plaintiff's cause of action, nonetheless fell within the scope of the First Amendment. Both newspaper defendants won their Supreme Court appeals.

By contrast, private speech-related disputes between private entities, such as companies or individuals, normally are not judged as directly raising First Amendment issues. In the internet era, if a platform like Facebook or Twitter chooses to censor content or deny service to a subscriber because of (an asserted) violation of its Terms of Service, or if a platform like Google chooses to delist a website that offers pharmaceutical drugs in violation of U.S. law or the law of other nations, any subsequent dispute is typically understood, at least initially, as a disagreement that does not raise First Amendment questions.

But the intersection between governmental action and private platforms and publishers has become both broader and blurrier in the course of the last decade. Partly this is because some platforms have become primary channels of communication for many individuals and businesses, and some of these platforms have become dominant in their markets. It is also due in part to concern about various ways the platforms have been employed with the goal of abusing individuals or groups, perpetrating fraud or other crimes, generating political unrest, or causing or increasing the probability of other socially harmful phenomena (including disinformation such as "fake news.")

To some extent, the increasing role of internet platforms, including but not limited to social media such as Facebook and Twitter in Western developed countries, as one of the primary media for free expression was predictable. (For example, in Cyber Rights: Defending Free Speech in the Digital Age (Times Books, 1998), I wrote this: "Increasingly, citizens of the world will be getting their news from computer-based communications—electronic bulletin boards, conferencing services, and networks—which differ institutionally from traditional print media and broadcast journalism." See also "Net Backlash = Fear of Freedom," Wired, August 1995: "For many journalists, 'freedom of the press' is a privilege that can't be entrusted to just anybody. And yet the Net does just that. At least potentially, pretty much anybody can say anything online - and it is almost impossible to shut them up.")

What was perhaps less predictable, prior to the rise of market-dominant social-media platforms, is that government demands regarding content may result in "private governance" (where market-dominant companies become the agents of government demands but implement those demands less transparently than enacted legislation or recorded court cases do). What this has meant is that individual citizens concerned about exercising their freedom of expression in the internet era may find that exercising their option to "exit" (in the Albert O. Hirschman sense) imposes great costs.

At the same time, lack of transparency about platform policy (and private governance) may make it difficult for individual speakers to determine what laws or policies led to the censorship of their content (or the exclusion of themselves or others) in ways that enable them to give effective "voice" to their complaints. For example, they may infer that their censorship or "deplatforming" represents a political preference that has the effect of "silencing" their dissident views, which in a traditional public forum might be clearly understood as protected by First Amendment-grounded free-speech principles.

These perplexities, and the current public debates about freedom of speech on the internet, create the need to reconsider internet free speech not as a simplistic dyad, or as a set of simplistic, self-contained dyads, but instead as an ecosystem in which decisions in one part may well lead to unexpected, undesired effects in other parts. A better approach would be to treat internet freedom of expression "ecologically"--to consider expression on the internet an "ecosystem," and to think about various legal, regulatory, policy, and economic choices as "free-speech environmentalists" would, with the underlying goal of preserving the internet free-speech ecosystem in ways that protect individuals' fundamental rights.

Of course, individuals have fundamental rights beyond freedom of expression. Notably, there is an international consensus that individuals deserve, inter alia, some kind of right to privacy, although, as with expression, there is some disagreement about what the scope of privacy rights should be. But changing the consensus paradigm of freedom of expression so that it is understood as an ecosystem not only will improve law, regulation, and policy regarding free speech, but also will provide a model that may prove fruitful in other areas, like privacy.

In short, we need a theory of free speech that takes into account complexity. We need to build consensus around that theory so that stakeholders with a wide range of political beliefs nevertheless share a commitment to the complexity-accommodating paradigm. In order to do this, we need to begin with a taxonomy of stakeholders. Once we have the taxonomy, we need to identify how the players interact with one another. And ultimately we need some initiatives that suggest how we may address free-speech issues in ways that are not shortsighted, reactive, and reductive, but forward-looking, prospective, and inclusive.

The internet ecosystem: a taxonomy.

Fortunately, Jack Balkin's recent series of law-review articles has given us a head start on building that theory, outlining the complex relationships that now exist among citizens, government actors, and companies that function as intermediaries. These paradigm-challenging articles culminate in a synthesis reflected in his 2018 law-review article "Free Speech is a Triangle."

Balkin rejects simple dyadic models of free speech. Because an infographic is sometimes worth a thousand words, his own diagram of what he refers to as a "pluralistic" (rather than "dyadic") model of free speech is worth consulting: it arranges the relevant players at the three corners of a triangle--nation-states and other governmental actors at one corner, internet platforms and other intermediaries at another, and speakers (from end-users to legacy media) at the third.

Balkin recognizes that the triangle may be taken as oversimplifying the character of particular entities within any set of parties at a "corner." For example, social-media platforms are not the same things as payment systems, which aren't the same things as search engines or standard-setting organizations. Nevertheless, entities in any given corner may have roughly the same interests and play roughly the same roles. End-users are not the same things as "Legacy Media" (e.g., the Wall Street Journal or the Guardian), yet both may be subject to "private governance" from internet platforms or subject to "old-school speech regulation" (laws and regulation) imposed by nation-states or treaties. ("New-school speech regulation" may arise when governments compel or pressure companies to exercise speech-suppressing "private governance.")

Certainly some entities within this triangularized model may be "flattened" in the diagram in ways that don't reveal the depth of their relationships to other parties. For example, a social-media company like Facebook may collect vastly more data (and use it in far less regulated ways) than a payment system (and certainly far more than a standard-setting organization). Balkin addresses the problem of Big Data collection by social-media companies and others--including the issue of how Big Data may be used in ways that inhibit or distort free speech--by suggesting that such data-collecting companies be considered "information fiduciaries" with obligations that may parallel or be similar to those of more traditional fiduciaries such as doctors and lawyers. (He has developed this idea further in separate articles, both sole-authored and co-authored with Jonathan Zittrain.)

Properly speaking, the information-fiduciary paradigm maps more clearly to privacy interests than to free-expression interests, but the collection, maintenance, and use of large amounts of user data can also figure in free-speech contexts. The information-fiduciary concept may not seem to be directly relevant to content issues. But it's indirectly relevant if the information fiduciary (possibly but not always at the behest of government) uses user data to try to manipulate users through content, or to disclose users' content choices to government (for example).

In addition, information fiduciaries functioning as social-media platforms have a different relationship with their users--the people who create the content that makes these platforms attractive--than traditional publishers had with their contributors. In the traditional world of newspapers and radio, publishers had a close, voluntary relationship with the speakers and writers who created their content, which meant that traditional-media entities had strong incentives to protect their creators generally. To a large degree, publisher and creator interests were aligned, although there were predictable frictions, as when a newspaper's or broadcaster's advertisers threatened to remove financial support for controversial speakers and writers.

With online platforms, that alignment is much weaker, if it exists at all: Platforms lack incentives to fight for their users' content, and indeed may have incentives to censor it themselves for private profit (e.g., advertising dollars). In the same way that the traditional legal or financial or medical fiduciary relationship is necessary to correct possible misalignment of incentives, the "information fiduciary" relationship ought to be imposed on platforms to correct their misaligned incentives toward private censorship. In a strong sense, this concept of information fiduciary is a key to understanding how a new speech framework is arguably necessary, and how it might work.

I've written elsewhere about how Balkin's concept of social-media companies (and others) as information fiduciaries might actually position the companies to be stronger and better advocates of free expression and privacy than they are now. But that's only one piece of the puzzle when it comes to thinking ecologically about today's internet free-speech issues. The other pieces require us to think about the other ways in which "bipolar thinking" about internet free speech not only causes us to misunderstand our problems but also tricks us into coming up with bad solutions. And that's the subject I'll take up in Part 2.

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.


Posted on Techdirt - 25 October 2018 @ 1:26pm

Last Chance To Opt Out Of #MyHealthRecord, Australians!

from the deadline-is-november-15 dept

Australia's controversial and clumsy rollout of its "My Health Record" program this summer didn't cause the "spill" -- what Australians call an abrupt turnover of party leadership in Parliament -- that gave the country a new Prime Minister in August. But it didn't improve public trust in the government either. The program — which aims to create a massive nationally administered database of more or less every Australian's health care records — will pose massive privacy and security risks for the citizens it covers, with less-than-obvious benefits for patients, the medical establishment, and the government.

Citizen participation in the new program isn't quite mandatory, but it's nearly so, thanks to the government's recent shift of the program from purely voluntary to "opt-out." Months before the planned rollout, which began June 16, at least one poll suggested that a sizable minority of Australians don't want the government to keep their health information in a centralized health-records database.

In response to ongoing concern about the privacy impact of the program (check out #MyHealthRecord on Facebook and Twitter), the new government is pushing for legislative changes aimed at addressing the growing public criticism of the program. But many privacy advocates and health-policy experts say the proposed fixes, while representing some improvements on particular privacy issues, don't address the fundamental problem. Specifically, the My Health Record program, which originally was designed as a voluntary program, is becoming an all-but-mandatory health-record database for Australian citizens, held (and potentially exploited) by the government.

Australia's shifting of its electronic-health-records program to "opt-out" — which means citizens are automatically included in the program unless they take advantage of a short-term "window" to halt automatic creation of their government-held health records — is a textbook example of how to further undermine trust in a government that already has trust issues when it comes to privacy. Every government that imposes record-keeping requirements that impact citizen privacy should view Australia's abrupt shift to "opt-out" health-care records as an example of What Not To Do.

And yet: supporters of My Health Record have persisted in their commitment to "opt out" during the shift from Malcolm Turnbull's administration to that of his successor, Scott Morrison. This means that if an Australian doesn't invest time and energy into invoking her right not to be included in the database — within the less-than-one-month window that citizens currently have to make this choice — she will be included by default.

In other words, any citizen's health-care records in the program will be held by the government throughout that citizen's life and will persist for 30 years after that citizen's death. Even if an Australian chose later to opt out of the program, the record might still (theoretically) be accessible to health-care providers and government officials. Health Minister Greg Hunt introduced legislation last summer that would address some of these complaints about the program, but it's unclear whether the Australian Parliament, which has weathered several leadership shifts over the past decade, has the focus or will to implement the changes.

The fact is, the automatic creation of your My Health Record could still result in a permanent health-care record that's outside of any individual Australian's control because the government can always repeal any law or regulation requiring deletion or limiting access. In effect, "My Health Record" is a misnomer: a more accurate name for the program would be "The Government's Health Records About You."

A great deal of Australian media coverage of the rollout has been critical of the Turnbull government's -- and later the Morrison government's -- "full steam ahead" approach. The pushback against My Health Record has been immense. Worse, citizens who have rushed to opt out of the program have found the system less than easy to navigate, whether on the Web or through a government call center. The flood of Australians who attempted to opt out of the program on the first day they were allowed to do so found that they were unwitting beta testers, stress-testing the opt-out system. Since those first-day opt-out numbers, the government has either declined or been unable to disclose how many Australians are opting out. But a Sydney Morning Herald report in July said the number of opt-outs might "run into the millions."

In kind of a weird mirror-universe adventure, Australia has managed to reproduce the same kind of public concern that sank a similar health-care effort in the United Kingdom just a few years ago. Phil Booth of the UK's Medconfidential privacy-advocacy group told the Guardian that "[t]he parallels are incredible" and that "this system seems to be the 2018 replica of the 2014 care.data." After a government-appointed commission underscored privacy and security concerns, the UK's "care.data" program was abandoned in 2016. Unfortunately for Australians, in the Australian version of the UK's "care.data" scheme, Spock has a beard.

The UK's experience suggests that the policy problem signaled by the opposition to the My Health Record initiative is bigger than Australia. That shouldn't be a surprise. After all, a developed country may provide a "universal health care" program like the United Kingdom's National Health Service, or a more "mixed" system (a public health care program supplemented by private insurers like that of Australia) or even an insurance-centric public-health program like Obamacare. But whatever the system, the appeal of "big data" approaches to create efficiencies in health care is broad, in the abstract.

But despite the theoretical appeal of #MyHealthRecord, there's a paucity of actual economic research showing that centralized health-care databases will provide benefits that recoup the costs of investment. (Australia's program has been estimated to cost more than $2 billion AUD so far, and it's not yet fully implemented.) No one, in or out of government, has made a business case for My Health Record that uses actual numbers. Instead, the chief argument in favor of MHR is that it will enable health-care providers to share patient data more easily — which supposedly will save money — but health-care workers, much as they hate the paperwork associated with it, mostly know that there's no substitute for taking a fresh patient history at the point of intake.

The push for a national database of personal health information has been a fairly recent development, even though the country's current health-care system has been in place in more or less its current form since 1984. The Australian Department of Health announced in 2010 that the government would be spending nearly half a billion Australian dollars to build a system of what then were called Personally Controlled Electronic Health Records. The primary idea was to make it more efficient to share critical patient information among health-care providers treating the same person.

Another purported benefit would be standardization. Like the United States (where proposals for a national health-records system have sometimes been promoted), Australia is a federal system of states and territories, each of which has its own government. The concern was that a failure to set national standards for digital health records would lead to the states and territories developing their own, possibly mutually incompatible systems. Australia's population (now 25 million) is concentrated mostly on the coasts surrounding the country's dry, unpopulated Outback, and the distances separating those pockets of population make integration harder.

The 2010 announcement of the Personally Controlled Electronic Health Records program stated expressly "[a] personally controlled electronic health record will not be mandatory to receive health care." The basic model was opt-in — starting in 2012, Australians had to actively choose to create their shared digital health records. If you didn't register for the program, however, you didn't create a PCEHR. If you did register, you had the assurance that, under the government-promulgated Australian Privacy Principles, your personal health information would be strongly protected.

In practice, the PCEHR program, eventually rebranded as My Health Record, has never had much appeal to most citizens. The government burned somewhere near or past $2 billion AUD and yet, years into the program, the total number of citizens who had volunteered to "opt in" to have their health records shared and available in the program was only about 6 million. According to a March report in Australia's medical-news journal, the Medical Republic, Australia's physicians also seem less than sold on the value of the program.

Prior to the latest push for a shift to "opt-out," only a few citizens saw much benefit (much less any fun or personal return) in investing the time it takes to produce a complete and useful health record, and even those who did only rarely ended up using its key features. (Some health-fashion-forward citizens who do want to share their health-care records easily have opted to invest in more private solutions rather than rely on a centralized database that may be less controllable and less complete.)

By 2014 it was clear that the Australian government (control of which had shifted to the more conservative of the two major parties) wanted to move in a closer-to-mandatory direction. It did so by announcing a wholesale conversion of the My Health Record database from opt-in to opt-out. This meant that, if you were an Australian citizen, a health record would be created automatically for you—unless you explicitly said you didn't want one. But the possibility of opting out hasn't quelled these ongoing complaints from the general public:

  1. The still-too-short, too-limited opt-out window. Australians were originally given a three-month window, starting July 16, to opt out of My Health Record. (It was later extended to November 15. Of course, critics regard the one-month extension as something less than stellar.) If you don't opt out in the approved window, an electronic health record will be created for you. By default, the program provides that the government will keep the record for 30 years after your death. And the government will have the right to access the record—whether you've died or not—"for maintenance, audit and other purposes required or authorised[sic] by law."
  2. This goes on your permanent record. The law already authorizes a lot of government access (for law-enforcement agencies, court proceedings, and other non-health-related purposes). And of course the laws can be amended to authorize even more access. Were you ever treated for alcohol poisoning? Did you ever have an abortion? You may be able to limit access somewhat by tweaking the privacy controls of "My Health Record," but (unless you take strong, affirmative steps otherwise) it's never erased. And it may be demanded by a range of government authorities for all sorts of reasons under current or future laws or regulations.
  3. The disputed warrant requirement. The Australian Digital Health Agency, the relatively new government agency in charge of the program, said a warrant would be required—but that claim was contradicted by Australia's Parliamentary Library, whose analysis found that non-health government agencies could gain access with few if any procedural or privacy safeguards. Disturbingly, the Parliamentary Library's report was abruptly removed and revised after pushback from the Turnbull government. (The removed report has been reproduced here.) A subsequent Senate inquiry—with a report issued October 12—shows growing consensus behind adding a warrant requirement before law enforcement gets access to health records, but the Australian Labor Party and the Australian Greens have dissented on the question of whether a warrant requirement fixes the problems: Per the Greens, the warrant requirement is "an improvement on the status quo, but it is an insufficient and disappointing one."
  4. And none of these criticisms even touches on the fact that a centralized health-care record database will give 900,000 health-care workers (not just doctors) comparatively unrestricted, untracked access to patient health records. By comparison, the average Australian under the pre-My Health Record system likely had to worry only about dozens of people having access to her health records — not hundreds of thousands.

Then-Prime Minister Malcolm Turnbull was dismissive of privacy concerns early on, arguing that "there have been no privacy complaints or breaches with My Health Record in six years and there are over 6 million people with My Health Records." But many prominent health-care and privacy experts argue that the government's new promises to patch the system are inadequate. For example, requiring government agencies to get a warrant does nothing to protect patients from unauthorized access to their records by health-care workers with access to the My Health Record system. And the Labor members have argued that the new system needs a statutory provision that prevents health-care insurers from accessing My Health Record's data.

Typical of the external critics is former Australian Medical Association President Kerryn Phelps, who views the promises as "minor concessions" that are "woefully inadequate." Phelps, who cites a survey showing that 75 percent of doctors are themselves planning to opt out, called for "full parliamentary review" of the My Health Record program. Other critics have argued the government has painted itself into a corner due to the "sunk costs" of $2 billion AUD. Bernard Robertson-Dunn of the Australian Privacy Foundation argues that, despite the billions the government has already spent, Australia needs to reboot its digital-health initiative entirely.

But many of the critics of My Health Record in Parliament seem to be maneuvering to lessen the privacy harms likely to ensue from the shift to near-mandatory participation in My Health Record. In this, they may be driven by the fear that writing off the Australian health-care-records program may look too much like the abject failure that was the UK's "care.data" program. But Robertson-Dunn views the unwillingness of some members of Parliament to cut their losses as short-sighted, given the likely long-term harms the system poses to citizens' health privacy. Better to scrap My Health Record and write off the costs so far, he argues. Once that's done, he says, Australia can "[s]tart with a problem patients and doctors have and go from there."

Mike Godwin (mnemonic@gmail.com) is a distinguished senior fellow at R Street Institute.


Posted on Techdirt - 16 July 2018 @ 10:40am

Everything That's Wrong With Social Media Companies and Big Tech Platforms, Part 3

from the the-list-keeps-growing dept

I've written two installments in this series (part 1 is here and part 2 is here). And while I could probably turn itemizing complaints about social-media companies into a perpetual gig somewhere — because there's always going to be new material — I think it's best to list just a few more for now. After that, we ought to step back and weigh what reforms or other social responses we really need. The first six classes of complaints are detailed in Parts 1 and 2, so we begin here in Part 3 with Complaint Number 7.

(7) Social media are bad for us because they're so addictive to us that they add up to a kind of deliberate mind control.

As a source of that generalization we can do no better than to begin with Tristan Harris's July 28, 2017 TED talk, titled "How a handful of tech companies control billions of minds every day."

Harris, a former Google employee, left Google in 2015 to start a nonprofit organization called Time Well Spent. That effort has now been renamed the Center for Humane Technology ( http://www.timewellspent.io now resolves to https://humanetech.com). Harris says his new effort — which also has the support of former Mozilla interface designer Aza Raskin and early Facebook funder Roger McNamee — represents a social movement aimed at making us more aware of the ways in which technology, including social media and other internet offerings, as well as our personal devices, are continually designed and redesigned to make them more addictive.

Yes, there's that notion of addictiveness again — we looked in Part 2 at claims that smartphones are addictive and talked about how to address that problem. But regarding the "mind control" variation of this criticism, it's worth examining Harris's specific claims and arguments to see how they compare to other complaints about social media and big tech generally. In his 2017 TED talk, Harris begins with the observation that social-media notifications on your smart devices may lead you to have thoughts you otherwise wouldn't think:

"If you see a notification it schedules you to have thoughts that maybe you didn't intend to have. If you swipe over that notification, it schedules you into spending a little bit of time getting sucked into something that maybe you didn't intend to get sucked into."

But, as I've suggested earlier in this series, this feature of continually tweaking content to attract your attention isn't unique to internet content or to our digital devices. This is something every communications company has always done — it's why ratings services for traditional broadcast radio and TV exist. Market research, together with attempts to deploy that research and to persuade or manipulate audiences, has been at the heart of the advertising industry for far longer than the internet has existed, as Vance Packard's 1957 book THE HIDDEN PERSUADERS suggested decades ago.

One major theme of Packard's THE HIDDEN PERSUADERS is that advertisers came to rely less on consumer surveys (derisively labeled "nose-counting") and more on "motivational research" — often abbreviated by 1950s practitioners as "MR" — to look past what consumers say they want. Instead, the goal was to observe how consumers actually behave, and then gear advertising content to shape or leverage consumers' unconscious desires. Packard's narratives in THE HIDDEN PERSUADERS are driven by revelations of the disturbing and even scandalous agendas of MR entrepreneurs and the advertising companies that hire them. Even so, Packard is careful in his book, in its penultimate chapter, to address what he calls "the question of validity" — that is, the question of whether the "hidden persuaders'" strategies and tactics for manipulating consumers and voters are actually scientifically grounded. Quite properly, Packard acknowledges that the claims of the MR companies may have been oversold, or may have been adopted by companies that simply lack any other strategy for figuring out how to reach and engage consumers.

In spite of Packard's scrupulous efforts to make sure that no claims of advertising's superpowers to sway our thinking are accepted uncritically, our culture nevertheless has accepted, at least provisionally, the idea that advertising (and its political cousin, propaganda) affects human beings at pre-rational levels. It is this acceptance of the idea that content somehow takes us over that Tristan Harris invokes consistently in his writings and presentations about how social media, the Facebook newsfeed, and internet advertising work on us.

Harris prefers to describe how these online phenomena affect us in deterministic ways:

"Now, if this is making you feel a little bit of outrage, notice that that thought just comes over you. Outrage is a really good way also of getting your attention. Because we don't choose outrage — it happens to us."

"The race for attention [is] the race to the bottom of the brainstem."

Nothing Harris says about the Facebook newsfeed would have seemed foreign to a Madison Avenue advertising executive in, say, 1957. (Vance Packard includes commercial advertising as well as political advertising as centerpieces of what he calls "the large-scale efforts being made, often with impressive success, to channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences.") Harris describes Facebook and other social media in ways that reflect time-honored criticisms of advertising generally, and mass media generally.

But remember that what Harris says about internet advertising or Facebook notifications or the Facebook news feed is true of all communications. It is the very nature of communications among human beings that they give us thoughts we would not otherwise have. It is the very nature of hearing things or reading things or watching things that we can't unhear them, or unread them, or unwatch them. This is not something uniquely terrible about internet services. Instead it is something inherent in language and art and all communications. (You can find a good working definition of "communications" in Article 19 of the United Nations' Universal Declaration of Human Rights, which states that individuals have the right "to seek, receive and impart information.") That some people study and attempt to perfect the effectiveness of internet offerings — advertising or Facebook content or anything else — is not proof that they're up to no good. (They arguably are exercising their human rights!) Similarly, the fact that writers and editors, including me, try to study how words can be more effective when it comes to sticking in your brain is not an assault on your agency.

It should give us pause that so many complaints about Facebook, about social media generally, about internet information services, and about digital devices actively (if maybe also unconsciously) echo complaints that have been made about every new mass medium (or mass-media product). What's lacking in modern efforts to criticize social media in particular — and especially when it comes to big questions like whether social media are damaging to democracy — is any sign of most critics looking at their own hypotheses skeptically, seeking falsification (which philosopher Karl Popper rightly notes is a better test of the robustness of a theory) rather than verification.

As for all the addictive harms that are caused by combining Facebook and Twitter and Instagram and other internet services with smartphones, isn't it worth asking critics whether they've considered turning notifications off for the social-media apps?

(8) Social media are bad for us because they get their money from advertising, and advertising — especially effective advertising — is inherently bad for us.

Harris's co-conspirator Roger McNamee, whose authority to make pronouncements on what Facebook and other services are doing wrong derives primarily from his having gotten richer from them, is blunter in his assessment of Facebook as a public-health menace:

"Relative to FB, the combination of an advertising model with 2.1 billion personalized Truman Shows on the ubiquitous smartphone is wildly more engaging than any previous platform ... and the ads have unprecedented effectiveness."

There's a lot to make fun of here--the presumption that 2.1 billion Facebook users are just creating "personalized Truman Shows," for example. Only someone who fancies himself part of an elite that's immune to what Harris calls "persuasion" would presume to draw that conclusion about the hoi polloi. But let me focus instead on the second part--the bit about the ads with "unprecedented effectiveness." Here the idea is, obviously, that advertising may be better for us when it's less effective.

Let's allow for a moment that maybe that claim is true! Even if that's so, advertising has played a central role in Western commerce for at least a couple of centuries, and in world commerce for at least a century, and the idea that we need to make advertising less effective is, I think fairly clearly, a criticism of capitalism generally. Now, capitalism may very well deserve that sort of criticism, but it seems like an odd critique coming from someone who's already profited immensely from that capitalism.

And it also seems odd that it's focused particularly on social media when, as we have the helpful example of THE HIDDEN PERSUADERS to remind us, we've been theoretically aware of the manipulations of advertising for all of this century and at least half of the previous one. If you're going to go after commercialism and capitalism and advertising, you need to go big--you can't just say that advertising suddenly became a threat to us because it's more clearly targeted to us based on our actual interests. (Arguably that's a feature rather than a bug.)

In responding to these criticisms, McNamee says "I have no interest in telling people how to live or what products to use." (I think the meat of his and Harris's criticisms suggests otherwise.) He explains his concerns this way:

"My focus is on two things: protecting the innocent (e.g., children) from technology that harms their emotion development and protecting democracy from interference. I do not believe that tech companies should have the right to undermine public health and democracy in the pursuit of profits."

As is so often the case with entrepreneurial moral panics, the issue ultimately devolves to "protecting the innocent" — some of whom surely are children but some other proportion of whom constitute the rest of us. In an earlier part of his exploration of these issues on the venerable online conferencing system The WELL, McNamee makes clear, in fact, that he really is talking about the rest of us (adults as well as children):

"Facebook has 2.1 billion Truman Shows ... each person lives in a bubble tuned to their emotions ... and FB pushes emotional buttons as needed. Once it identifies an issue that provokes your emotions, it works to get you into groups of like-minded people. Such filter bubbles intensify pre-existing beliefs, making them more rigid and extreme. In many cases, FB helps people get to a state where they are resistant to ideas that conflict with the pre-existing ones, even if the new ideas are demonstrably true."

These generalizations wouldn't need much editing to fit 20th-century criticisms of TV or advertising or comic books or 19th-century criticisms of dime novels or 17th-century criticisms of the theater. What's left unanswered is the question of why this new mass medium is going to doom us when none of the other ones managed to do it.

(9) Social media need to be reformed so they aren't trying to make us do anything or get anything out of us.

It's possible we ultimately may reach some consensus on how social media and big internet platforms generally need to be reformed. But it's important to look closely at each reform proposal to make sure we understand what we're asking for and also that we're clear on what the reforms might take away from us. Once Harris's TED talk gets past the let-me-scare-you-about-Facebook phase, it gets better — Harris has a program for reform in mind. Specifically, he calls for "three radical changes to our society," which I will paraphrase and summarize here.

First, Harris says, "we need to acknowledge that we are persuadable." Here, unfortunately, he elides the distinction between being persuaded (which involves evaluation and crediting of arguments or points of view) and being influenced or manipulated (which may happen at an unconscious level). (In fairness, Vance Packard's THE HIDDEN PERSUADERS is guilty of the same elision.) But this first proposition isn't radical at all — even if we're sticks-in-the-mud, we normally believe we are persuadable. It may be harder to believe that we are unconsciously swayed by how social media interact with us, but I don't think it's exactly a radical leap. We can take it as a given, I think, that internet advertising and Facebook's and Google's algorithms try to influence us in various ways, and that they sometimes succeed. The next question then becomes whether this influence is necessarily pernicious, but Harris passes quickly over this question, assuming the answer is yes.

Second, Harris argues, we need new models and systems that guarantee accountability and transparency regarding the ways in which our internet services and digital devices try to influence us. Here there's very little to argue with. Transparency about user-experience design that makes us more self-aware is all to the good. So that doesn't seem like a particularly radical goal either.

It's in Harris's third proposal — "We need a design renaissance" — that you actually do find something radical. As Harris explains it, we need to redesign our interactions with services and devices so that we're never persuaded to do something that we may not initially want to do. He states, baldly, that "the only form of ethical persuasion that exists is when the goals of the persuader are aligned with the goals of the persuadee." This is a fascinating proposition that, so far as I know, is not particularly well-grounded in fact or in the history of rhetoric or in the history of ethics. It seems clear that sometimes it's necessary to persuade people of ideas that they may be predisposed not to believe, and that, in fact, they may be more comfortable not believing.

Given that fact, it seems odd, if we are worried about whether Facebook's algorithms lead to "filter bubbles," to call for (or design) a system built around the idea of never persuading anyone whose goals aren't already aligned with one's own. Arguably, such a social-media platform might be more prone to filter bubbles rather than less so. One doesn't get the sense, reviewing Harris's presentations or other public writings and statements from his allies like Roger McNamee, either that they've compared current internet communications with previous revolutions driven by new mass-communications platforms, or that they've analyzed their theories in light of the centuries of philosophical inquiry regarding human autonomy, agency, and ethics.

Moving past Harris's TED talk, we next must consider McNamee's recent suggestion that Facebook move from an advertising-supported model to a for-pay model. In a February 21 Washington Post op-ed, McNamee wrote the following:

"The indictments brought by special counsel Robert S. Mueller III against 13 individuals and three organizations accused of interfering with the U.S. election offer perhaps the most powerful evidence yet that Facebook and its Instagram subsidiary are harming public health and democracy. The best option for the company — and for democracy — is for Facebook to change its business model from one based on advertising to a subscription service."

In a nutshell, the idea here is that the incentives of advertisers, who want to compete for your attention, will necessarily skew how even the most well-meaning version of advertising-supported Facebook interacts with you, and not for the better. So the fix, he argues, is for Facebook to get rid of advertising altogether. "Facebook's advertising business model is hugely profitable," he writes, "but the incentives are perverse."

It's hard to escape the conclusion that McNamee believes either (a) advertising is inherently bad, or (b) advertising made more effective by automated internet platforms is particularly bad. Or both. And maybe advertising is, in fact, bad for us. (That's certainly a theme of Vance Packard's THE HIDDEN PERSUADERS, as well as of more recent work such as Tim Wu's 2016 book THE ATTENTION MERCHANTS.) But it's also hard to escape the conclusion that McNamee, troubled by Brexit and by President Trump's election, wants to kick the economic legs out from under Facebook's (and, incidentally, Google's and Bing's and Yahoo's) economic success. Algorithm-driven serving of ads is bad for you! It creates perverse incentives! And so on.

It's true, of course, that some advertising algorithms have created perverse incentives (so that Candidate Trump's provocative ads were seen as more "engaging" and therefore were sold cheaper — or, alternatively, more expensively — than Candidate Clinton's). I think the criticism of that particular algorithmic approach to pricing advertising is valid. But there are other ways to design algorithmic ad service, and it seems to me that the companies that have been subject to the criticisms are being responsive to them, even in the absence of regulation. This, I think, is the proper way to interpret Mark Zuckerberg's newfound reflection (and maybe contrition) over Facebook's previous approach to its users' experience, and his resolve — honoring without mentioning Tristan Harris's longstanding critique — that "[o]ne of our big focus areas for 2018 is making sure the time we all spend on Facebook is time well spent."

Some Alternative Suggestions for Reform and/or Investigation

It's not too difficult, upon reflection, to wonder whether the problem of "information cocoons" or "filter bubbles" is really as terrible as some critics have maintained. If hyper-addictive filter bubbles have historically unprecedented power to overcome our free will, surely they should have this effect even on the most assertive, independently thinking, strong-minded individuals — like Tristan Harris or Roger McNamee. Even six-sigma-degree individualists might not escape! But the evidence that this is, in fact, the case is less than overwhelming. What seems more likely (especially in the United States and in the EU) is that people who are dismayed by the outcome of the Brexit referendum or the U.S. election are trying to find a Grand Unifying Theory to explain why things didn't work out the way they'd expected. And social media are new, and they seem to have been used by mischievous actors who want to skew political processes, so it follows that the problem is rooted in technology generally or in social media or in smartphones in particular.

But nothing I write here should be taken as arguing that social media definitely aren't causing or magnifying harms. I can't claim to know for certain. And it may well be the case, in fact, that some large subset of human beings create "filter bubbles" for themselves regardless of what media technologies they're using. That's not a good thing, and it's certainly worth figuring out how to fix that problem if it's happening, but treating that problem as a phenomenon specific to social media perhaps focuses on a symptom of the human condition rather than on a disease grounded in technology.

In this context, then, the question is, what's the fix? There are some good suggestions for short-term fixes, such as the platforms' adopting transparency measures regarding political ads. That's an idea worth exploring. Earlier in this series I've written about other ideas as well (e.g., using grayscale on our iPhones).

There are, of course, more general reforms that aren't specific to any particular platform. To start with, we certainly need to address more fundamental problems — meta-platform problems, if you will — of democratic politics, such as teaching critical thinking. We actually do know how to teach critical thinking — thanks to the ancient Greeks we've got a few thousand years of work done already on that project — but we've lacked the social will to teach it universally. It seems to me that this is the only way by which a cranky individualist minority that's not easily manipulated by social media, or by traditional media, can become the majority. Approaching all media (including radio, TV, newspapers, and other traditional media — not just internet media, or social media) with appropriate skepticism has to be part of any reform policy that will lead to lasting results.

It's easy, however, to believe that education — even the rigorous kind of education that includes both traditional critical-thinking skills and awareness of the techniques that may be used in swaying our opinions — will not be enough. One may reasonably believe that education can never be enough, or that, even when education is sufficient to change behavior (consider the education campaigns that reduced smoking or led to increased use of seatbelts), education all by itself simply takes too long. So, in addition to education reforms, there probably are more specific reforms — or at least a consensus as to best practices — that Facebook, other platforms, advertisers, government, and citizens ought to consider. (It seems likely that, to the extent private companies don't strongly embrace public-spirited best-practices reforms, governments will be willing to impose such reforms in the absence of self-policing.)

One of the major issues that deserve more study is the control and aggregation of user information by social-media platforms and search services. It's indisputable that online platforms have potentiated a major advance in market research — it's trivially easy nowadays for the platforms to aggregate data as to which ads are effective (e.g., by inspiring users to click through to the advertisers' websites). Surely we should be able to opt out, right?

But there's an unsettled public-policy question about what opting out of Facebook means or could mean. In his testimony earlier this year at Senate and House hearings on Facebook, Mark Zuckerberg consistently stressed that individual users do have a high degree of control over the data (pictures, words, videos, and so on) that they've contributed to Facebook, and that users can choose to remove the data they've contributed. Recent updates in Facebook's privacy policy seem to underscore users' rights in this regard.

It seems clear that Facebook is committing itself at least to what I call Level 1 Privacy: you can erase your contributions from Facebook altogether and "disappear," at least when it comes to information you have personally contributed to the platform. But does it also mean that other people who've shared my stuff can no longer share it (in effect, allowing me to punch holes in other people's sharing of my stuff when I depart)?

If Level 1 Privacy relates to the information (text, pictures, video, etc.) that I've posted, that's not the end of the inquiry. There's also what I have called Level 2 Privacy, centering on what Facebook knows about me, or can infer from my having been on the service, even after I've gone. Facebook has had a proprietary interest in drawing inferences from how we interact with its service and in using those inferences to inform what content (including but not limited to ads) Facebook serves to us. That's Facebook's data, not mine, because Facebook generated it, not me. If I leave Facebook, surely Facebook retains some data about me based on my interactions on the platform. (We also know, in the aftermath of Zuckerberg's testimony before Congress, that Facebook manages to collect data about people who themselves are not users of the service.)

And then there's Level 3 Privacy, which is the question of what Facebook can and should do with this inferential data that it has generated. Should Facebook share it with third parties? What about sharing it with governments? If I depart and leave a resulting hole in Facebook content, are there still ways to connect the dots so that not just Facebook itself, but also third-party actors, including governments, can draw reliable inferences about the now-absent me? In the United States, there arguably may be Fourth Amendment issues involved, as I've pointed out in a different context elsewhere. We may reasonably conclude that there should be limits on how such data can be used and on what inferences can be drawn. This is a public-policy discussion that needs to happen sooner rather than later.

Apart from privacy and personal-data concerns, we ought to consider what we really think about targeted advertising. If the criticism of targeted advertising, "motivational research," and the like historically has been that the ads are pushing us, then the criticism of internet advertising seems to be that internet-based ads are pulling us or even seducing us, based on what can be inferred about our inclinations and preferences. Here I think the immediate task has to be to assess whether the claims made by marketers and advertisers regarding the manipulative effects ads have on us are scientifically rigorous and testable. If the claims stand up to testing, then we have some hard public-policy questions we need to ask about whether and how advertising should be regulated. But if they aren't — if, in fact, our individual intuitions are correct that we retain freedom and autonomy even in the face of internet advertising and all the data that can be gathered about us — then we need to assert that freedom and autonomy and acknowledge that, just maybe, there's nothing categorically oppressive about being invited to engage in commercial transactions or urged to vote for a particular candidate.

Both the privacy questions and the advertising questions are big, complex questions that don't easily devolve to traditional privacy talk. If in fact we need to tackle these questions pro-actively, I think we must begin by defining what the problems are in ways that all of us (or at least most of us) agree on. Singling out Facebook is the kind of single-root-cause theory of what's wrong with our culture today that may appeal to us as human beings — we all like straightforward storylines — but that doesn't mean it's correct. Other internet services harvest our data too. And non-internet companies have done so (albeit in more primitive ways) for generations. It is difficult to say they never should do so, and it's difficult to frame the contours of what best practices should be.

But if we're going to grapple with the question of regulating social-media platforms and other internet services, thinking seriously about what best practices should be, generally speaking, is the task that lies before us now. Offloading the public-policy questions to the platforms themselves — by calling on Facebook or Twitter or Google to censor antisocial content, for example — is the wrong approach, because it dodges the big questions that we need to answer. Plus, it would likely entrench today's well-moneyed internet incumbents.

Nobody elected Mark Zuckerberg or Jack Dorsey (or Tim Cook or Sundar Pichai) to do that for us. The theory of democracy is that we decide the public-policy questions ourselves, or we elect policymakers to do that for us. But that means we each have to do the heavy lifting of figuring out what kinds of reforms we think we want, and what kind of commitments we're willing to make to get the policies right.

Mike Godwin (mnemonic@gmail.com) is a Distinguished Senior Fellow at R Street Institute.


Posted on Techdirt - 5 June 2018 @ 12:07pm

Has Facebook Merely Been Exploited By Our Enemies? Or Is Facebook Itself The Real Enemy?

from the everything's-turning-up-facebook dept

Imagine that you're a new-media entrepreneur in Europe a few centuries back, and you come up with the idea of using moveable type in your printing press to make it easier and cheaper to produce more copies of books. If there are any would-be media critics in Europe taking note of your technological innovation, some will be optimists. The optimists will predict that cheap books will hasten the spread of knowledge and maybe even fuel a Renaissance of intellectual inquiry. They'll predict the rise of newspapers, perhaps, and anticipate increased solidarity of the citizenry thanks to shared information and shared culture.

Others will be pessimists—they'll foresee that the cheap spread of printed information will undermine institutions, will lead to doubts about the expertise of secular and religious leaders (who are, after all, better educated and better trained to handle the information that's now finding its way into ordinary people's hands). The pessimists will guess, quite reasonably, that cheap printing will lead to more publication of false information, heretical theories, and disruptive doctrines, which in turn may lead, ultimately, to destructive revolutions and religious schisms. The gloomiest pessimists will see, in cheap printing and later in the cheapness of paper itself—making it possible for all sorts of "fake news" to be spread--the sources of centuries of strife and division. And because the pain of the bad outcomes of cheap books is sharper and more attention-grabbing than contemplation of the long-term benefits of having most of the population know how to read, the gloomiest pessimists will seem to many to possess the more clear-eyed vision of the present and of the future. (Spoiler alert: both the optimists and the pessimists were right.)

Fast-forward to the 21st century, and this is just where we're finding ourselves when we look at public discussion and public policy centering on the internet, digital technologies, and social media. Two recent books written in the aftermath of recent revelations about mischievous and malicious exploitation of social-media platforms—especially Facebook and Twitter—exemplify this zeitgeist in different ways. And although both of these books are filled with valuable information and insights, they also yield (in different ways) to the temptation to see social media as the source of more harm than good. Which leaves me wanting very much both to praise what's great in these two books (which I read back-to-back) and to criticize them where I think they've gone too far over to the Dark Side.

The first book is Clint Watts's MESSING WITH THE ENEMY: SURVIVING IN A SOCIAL MEDIA WORLD OF HACKERS, TERRORISTS, RUSSIANS, AND FAKE NEWS. Watts is a West Point graduate and former FBI agent who's an expert on today's information warfare, including efforts by state actors (notably Russia) and non-state actors (notably Al Qaeda and ISIS) to exploit social media both to confound enemies and to recruit and inspire allies. I first heard of the book when I attended a conference at Stanford this spring where Watts—who has testified several times on these issues—was a presenter. His presentation was an eye-opener, erasing whatever lingering doubt I might have had about the scope and organization of those who want to use today's social media for malicious or destructive ends.

In MESSING WITH THE ENEMY Watts relates in a bracing yet matter-of-fact tone not only his substantive knowledge as a researcher and expert in social-media information warfare but also his first-person experiences in engaging with foreign terrorists active on social-media platforms and in being harassed by terrorists (mostly virtually) for challenging them in public exchanges. "The internet brought people together," Watts writes, "but today social media is tearing everyone apart." He notes the irony of social media's receiving premature and overgenerous credit for democratic movements against various dictatorships but later being exploited as platforms for anti-democratic and terrorist initiatives:

"Not long after many across the world applauded Facebook for toppling dictators during the Arab Spring revolutions of 2010 and 2011, it proved to be a propaganda platform and operational communications network for the largest terrorist mobilization in world history, bringing tens of thousands of foreign fighters under the Islamic State's banner in Syria and Iraq."

And it wasn't just non-state terrorists who learned quickly how to leverage social-media platforms; an increasingly activist and ambitious Russia, under the direction of Russian President Vladimir Putin, did so as well. Watts argues persuasively that Russia not only assisted and sponsored relatively inexpensive disinformation and propaganda campaigns using the social-media platforms to encourage divisiveness and lack of faith in government institutions (most successfully with the Brexit vote and the 2016 American elections) but also actively supported the hacking of the Democratic National Committee computer network, which led to email dumps (using Wikileaks as a cutout). The security breaches, together with "computational propaganda"—social-media "bots" that mimicked real users in spreading disinformation and dissension—played an important role in the U.S. election, Watts writes, helping "the race remain close at times when Trump might have fallen completely out of the running." Even so, Watts doesn't believe Russian propaganda efforts alone would have tilted the outcome of the election—what they did instead was hobble support for Clinton so much that, when FBI Director James Comey announced, one week before the election, that the Clinton email-server investigation had reopened, the Clinton campaign couldn't recover. "Without the Comey letter," he writes, "I believe Clinton would have won the election." Later in the book he connects the dots more explicitly: "Without the Russian influence effort, I believe Trump would not have been within striking distance of Clinton on Election Day. Russian influence, the Clinton email investigation, and luck brought Trump a victory—all of these forces combined."

Where Watts's book focuses on bad actors who exploit the openness of social-media platforms for various malicious ends, Siva Vaidhyanathan's ANTISOCIAL MEDIA: HOW FACEBOOK DISCONNECTS US AND UNDERMINES DEMOCRACY argues that the platforms—and especially the Facebook platform—are inherently corrosive to democracy. (Full disclosure: I went to school with Vaidhyanathan, worked on our student newspaper with him, and consider him a friend.) Acknowledging his intellectual debt to his mentor, the late social critic Neil Postman, Vaidhyanathan blames the negative impacts of various exploitations of Facebook and other platforms on the platforms themselves. Postman was a committed technopessimist, and Vaidhyanathan takes time in ANTISOCIAL MEDIA to chart how Postman's general skepticism about new information technologies ultimately led Vaidhyanathan, his younger colleague, to temper his own originally optimistic view of the internet and digital technologies generally. If you read Vaidhyanathan's work over time, you find in his writing a progressively darker view of the internet and its ongoing evolution, taking a significantly more pessimistic turn around the time of his 2011 book, THE GOOGLIZATION OF EVERYTHING (AND WHY WE SHOULD WORRY). In that earlier book, Vaidhyanathan took pains to be as fair-minded as he could in raising questions about Google and whether it can or should be trusted to play such an outsized role in our culture as the mediator of so much of our informational resources. He was skeptical (not unreasonably) about whether Google's confidence in both its own good intentions and its own expertise is sufficient reason to trust the company—not least because a powerful company can stay around as a gatekeeper for the internet long past the time its well-intentioned founders depart or retire.

With ANTISOCIAL MEDIA, Vaidhyanathan cuts Mark Zuckerberg (and his COO, Sheryl Sandberg) rather less of a break. Facebook's leadership, as I read Vaidhyanathan's take, is both more arrogant than Google's and more heedless of the consequences of its commitment to connect everyone in the world through the platform. Synthesizing a full range of recent critiques of Facebook's design as a platform, he relentlessly characterizes Facebook as driving us toward shallow, reflexive reactions to one another rather than promoting reflective discourse that might improve or promote our shared values. Facebook, in his view, distracts us instead of inspiring us to think. It's addictive for us in something like the same way gambling or potato chips can be addictive for us. Facebook privileges the visual (photographs, images, GIFs, and the like), he insists, over the verbal and discursive.

And of course even the verbal content is either filter-bubbly—as when we convene in private Facebook groups to share, say, our unhappiness about current politics—or divisive (so that we share and intensify our outrage about other people's bad behavior, maybe including screenshots of something awful someone has said elsewhere on Facebook or on Twitter). Vaidhyanathan suggests that at one point our political discourse as ordinary citizens was more rational and reflective, but is now more emotion- and rage-driven and divisive. Me, I think the emotionalism and rage were always there.

Even when Vaidhyanathan allows that there may be something positive about one's interactions on Facebook, he can't quite keep himself from being reductive and dismissive about it:

"Nor is Facebook bad for everyone all the time. In fact, it's benefited millions individually. Facebook has also allowed people to find support and community despite being shunned by friends and family or being geographically isolated. Facebook is still our chief source of cute baby and puppy photos. Babies and puppies are among the things that make life worth living. We could all use more images of cuteness and sweetness to get us through our days. On Facebook babies and puppies run in the same column as serious personal appeals for financial help with medical care, advertisements for and against political candidates, bogus claims against science, and appeals to racism and violence."

In other words, Facebook may occasionally make us feel good for the right reasons (babies and puppies) but that's about the best most people can hope for from the platform. Vaidhyanathan has a particular antipathy towards Candy Crush, which you can connect to your Facebook account—a video game that certainly seems vacuous, but also seems innocuous to me. (I've never played it myself.)

Given his antipathy towards Facebook, you might think that Vaidhyanathan's book is just another reworking of the moral-panic tomes that we've seen a lot of in the last year or two, which decry the internet and social media in much the same way previous generations of would-be social critics complained about television, or the movies, or rock music, or comic books. (Hi, Jonathan Taplin! Hi, Franklin Foer!) But that reading would be a mistake, primarily because Vaidhyanathan digs deep into choices—some technical and some policy-driven—that Facebook has made that facilitated bad actors' using the platform maliciously and destructively. Plus, Vaidhyanathan, to his credit, gives attention to how oppressive governments have learned to use the platform to stifle dissent and mute political opposition. (Watts notes this as well.) I was particularly pleased to see his calling out how Facebook is used in India, in the Philippines, and in Cambodia—all countries where I've been privileged to work directly with pro-democracy NGOs.

What I find particularly valuable is Vaidhyanathan's exploration of Facebook's advertising policies and their effect on political ads—I learned plenty from ANTISOCIAL MEDIA about the company's "Custom Audiences from Customer Lists," including this disturbing bit:

"Facebook's Custom Audiences from Customer Lists also gives campaigns an additional power. By entering email addresses of those unlikely to support a candidate or those likely to support an opponent, a campaign can narrowly target groups as small as twenty people and dissuade them from voting at all. 'We have three major voter suppression operations under way,' a campaign official told Bloomberg News just weeks before the election. The campaign was working to convince white leftists and liberals who had supported socialist Bernie Sanders in his primary bid against Clinton, young women, and African American voters not to go to the polls on election day. The campaign carefully targeted messages on Facebook to each of these groups. Clinton's former support for international trade agreements would raise doubts among leftists. Her husband's documented affairs with other women might soften support for Clinton among young women...."

What one saw in Facebook's deployment of the Custom Audiences feature was something fundamentally new and disturbing:

"Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue. Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, 'they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,' said Professor David Carroll of the Parsons School of Design. Such ads are created on a massive scale, targeted at groups as small as twenty, and disappear, so they are never examined or debated."

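A brief technical aside may be useful here. List-based targeting of this sort generally works by matching an advertiser's uploaded contact list against the platform's own user records, typically after the addresses have been normalized and hashed. The Python sketch below is a minimal, hypothetical illustration of that matching step; it is not Facebook's code or API, and the function names and sample addresses are invented for demonstration.

import hashlib

def normalize_and_hash(email):
    # Normalize the address (trim whitespace, lowercase), then hash it,
    # so matching can be done on digests rather than raw emails.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def build_custom_audience(uploaded_list, platform_index):
    # The "custom audience" is simply the intersection of the
    # advertiser's hashed list with hashes the platform already holds.
    uploaded = {normalize_and_hash(addr) for addr in uploaded_list}
    return uploaded & platform_index

# Hypothetical data: a campaign's contact list, and the platform's
# hashed index of its registered users.
campaign_list = ["voter.one@example.com", " Voter.Two@Example.com", "voter.three@example.com"]
platform_index = {normalize_and_hash(addr) for addr in
                  ["voter.two@example.com", "voter.three@example.com", "bystander@example.com"]}

audience = build_custom_audience(campaign_list, platform_index)
print(f"{len(audience)} of {len(campaign_list)} uploaded contacts matched")

The mechanics are mundane; what Vaidhyanathan objects to is the scale and opacity, since audiences this small can be shown ads that no one outside the campaign and the platform ever sees.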
Vaidhyanathan quite properly criticizes Mark Zuckerberg's late-to-the-party recognition that perhaps Facebook may be much more of a home to divisiveness and political mischief (and general unhappiness) than he previously had been willing to admit. And he's right to say that some of Zuckerberg's framing of new design directions for Facebook may be as likely to cause harm (e.g., more self-isolation in filter bubbles) as good. "The existence of hundreds of Facebook groups devoted to convincing others that the earth is flat should have raised some doubt among Facebook's leaders that empowering groups might not enhance the information ecosystem of Facebook," he writes. "Groups are as likely to divide us and make us dumber as any other aspect of Facebook."

But here I have to take issue with my friend Siva, because he overlooks or dismisses the possibility that Facebook's increasing support for "groups" of like-minded users may ultimately add up to a net social positive. For example, the #metoo groups seem to have enabled more women (and men) to come forward and talk frankly about their experiences with sexual assault and to begin to hold perpetrators of sexual assault and sexual harassment accountable. The fact that some folks also use Facebook groups for more frivolous or wrongheaded reasons (like promoting flat-earthism) strikes me as comparatively inconsequential.

Vaidhyanathan's also too quick, it seems to me, to dismiss the potential for Facebook and other platforms to facilitate political and social reform in transitional democracies and developing countries. Yes, bad governments can use social media to promote support for their regimes, and I don't think it's particularly remarkable that oppressive governments (or non-state actors like ISIS) learn to use new communications media maliciously. Governments may frequently be slow, but they're not invariably stupid—so it's no big surprise, for example, that Cambodian prime minister Hun Sen has figured out how to use his Facebook page to drum up support for his one-party rule, which has driven out opposition press and the opposition Cambodia National Rescue Party.

But Vaidhyanathan overlooks how some activists are using Facebook's private groups to organize reform or opposition activities. In researching this review, I reached out to friends and colleagues in Cambodia, the Philippines and elsewhere to confirm whether the platform is useful to them—certainly they're cautious about what they say in public on Facebook, but they definitely use private groups for some organizational purposes. What makes the platform useful to activists is that it's accessible, easy to use, and amenable to posting multimedia sources (like pictures and videos of police and soldiers acting brutally towards protestors). And it's not just images--when I worked with activists in Cambodia on developing a citizen-rights framework as a response to their government's abrupt initiation of "cybercrime" legislation (really an effort to suppress dissenting speech), I suggested they work collaboratively in the MediaWiki software that Wikipedia's editors use. But the Cambodian activists quickly discovered that Facebook was an easier platform for technically less proficient users to learn quickly and use to review draft texts together. I was surprised at this, but also encouraged. Even though I had my own doubts whether Facebook was the right tool for the job, I figured they didn't need yet another American trying to tell them how to manage their own collaborations.

Like Watts's book, Vaidhyanathan's is strongest where it's built on independent research that doesn't merely echo what other critics have said. And both books are weakest when they uncritically import notions like Eli Pariser's "filter bubble" hypothesis or the social-media-makes-us-depressed hypothesis. (Both these notions are echoes of previous moral panics about previous new media, including broadcasting in the 20th century and cheap paper in the 19th. And both have been challenged by researchers.) Vaidhyanathan's so certain of the meme that Facebook's Free Basics program is an assault on network neutrality that he mostly doesn't investigate the program itself in any detail. The result is that his book (to this reader, anyway) seems to conflate Free Basics (a collection of low-bandwidth resources that Facebook provided a zero-rated platform for) with Facebook Zero (a zero-rated low-bandwidth version of Facebook by itself). In contrast, the Wikipedia articles on Free Basics and Facebook Zero lead off with warnings not to confuse the two.

In addition to the strengths and weaknesses the two books share, they also have a certain rhetorical approach in common—largely, in my view, because both authors want to push for reform, and because they want to challenge the sunny-yet-unwarranted optimism with which Zuckerberg and Sandberg and other boosters have characterized social media. In effect, both authors seem to take the approach that, as we learn to be much more critical of social-media platforms, we don't need to worry about throwing out the baby with the bathwater—because, really, there is no baby. (If we bail on Facebook altogether, it's only the frequent baby pictures that we'd lose.)

Even so, both books also share an unwillingness to call for simple opposition to Facebook and other social-media platforms merely because they're misused. Watts argues persuasively instead for more coherent and effective positive messaging about American politics and culture—of the sort that used to be the province of the United States Information Agency. (I think he'd be happy if the USIA were revived; I would be too.) He also calls for an "equivalent of Consumer Reports" to "be created for social media feeds," which also strikes me as a fine idea.

Vaidhyanathan's reform agenda is less optimistic. For one thing, he's dismissive of "media literacy" as a solution because he doubts "we could even agree on what that term means and that there would be some way to train nearly two billion people to distinguish good from bad content." He has some near-term suggestions—for example, he'd like to see an antitrust-type initiative to break up Facebook, although it's unclear to me whether multiple competing Facebooks or a disassembled Facebook would be less hospitable to the kind of shallowness and abuses he sees in the platform's current incarnation. But mostly he calls for a kind of cultural shift driven by social critics and researchers like himself:

"This will be a long process. Those concerned about the degradation of public discourse and the erosion of trust in experts and institutions will have to mount a campaign to challenge the dominant techno-fundamentalist myth. The long, slow process of changing minds, cultures, and ideologies never yields results in the short term. It sometimes yields results over decades or centuries."

I agree that it frequently takes decades or even longer to truly assess how new media affect our culture for good or for ill. But as long as we're contemplating all those years of effort, I see no reason not to put media literacy on the agenda as well. I think there's plenty of evidence that people can learn to read what they see on the internet critically and do better than simply cherry-pick sources that agree with them—a vice that, it must be said, predates social media and the internet itself. Increasing skepticism about media platforms and the information we find in them may also lead (as Watts warns us) to more distrust of "experts" and "expertise," with the result that true expertise is more likely to be unfairly and unwisely devalued. But my own view is that skepticism and critical thinking—even about experts with expertise—is generally positive. For example, it may be annoying to today's physicians that patients increasingly resort to the internet about their real or imagined health problems—but engaged patients, even if they have to be walked back from foolish ideas again and again, are probably better off than the more passive health-care consumers of previous generations.

I think Vaidhyanathan is right, ultimately, to urge that we continue to think about social media critically and skeptically, over decades—and, you know, forever. But I think Watts offers the best near-term tactical solution:

"On social media, the most effective way to challenge a troll comes from a method that's taught in intelligence analysis. To sharpen an analyst's skills and judgment, a supervisor or instructor will ask the subordinate two questions when he or she provides an assessment: 'What do those who disagree with your assessment think, and why?' The analyst must articulate a competing viewpoint. The second question is even more important: 'Under what conditions, specifically, would your assessment be wrong?' [...] When I get a troll on Facebook, I'll inquire, 'Under what circumstance would you admit you were wrong?' or 'What evidence would convince you otherwise?" If they don't answer or can't articulate their answer, then I disregard them on that topic indefinitely."

Watts's heuristic strikes me as the perfect first entry in the syllabus for media literacy in particular and for criticism of social media in general.

In sum, I think both MESSING WITH THE ENEMY and ANTISOCIAL MEDIA deserve to be on every internet-focused policymaker's must-read list this season. I also think it's best that readers honor these books by reading them with the same clear-eyed skepticism that their authors preach.

Mike Godwin (@sfmnemonic) is a Distinguished Senior Fellow at R Street Institute.

35 Comments

Posted on Techdirt - 21 May 2018 @ 3:33pm

Real Security Begins At Home (On Your Smartphone)

from the not-with-the-fbi dept

When the FBI sued Apple a couple of years ago to compel Apple's help in cracking an iPhone 5c used by alleged terrorist Syed Rizwan Farook, the lines seemed clearly drawn. On the one hand, the U.S. government was asserting its right (under an 18th-century statutory provision called the All Writs Act) to force Apple to develop and implement technologies enabling the Bureau to gather all the evidence that might possibly be relevant in the San Bernardino terrorist-attack case. On the other, a leading tech company challenged the demand that it help crack the digital-security technologies it had painstakingly developed to protect users — a particularly pressing concern given that these days we often have more personal information on our handheld devices than we used to keep in our entire homes.

What a difference a couple of years has made. The Department of Justice's Office of Inspector General (OIG) released a report in March on the FBI's internal handling of the question of whether the Bureau truly needed Apple's assistance. The report makes clear that, despite what the Bureau said in its court filings, the FBI hadn't explored every alternative, including consultation with outside technology vendors, for cracking the security of the iPhone in question. The report also seemed to suggest that some department heads in the government agency were less concerned with the information that might be on that particular device than they were with setting a general precedent in court. Their goal? To establish as a legal precedent that Apple and other vendors have a general obligation to develop and apply technologies to crack the very digital security measures they so painstakingly implemented to protect their users.

In the aftermath of that report, and in a heartening display of bipartisanship, Republican and Democratic members of Congress came together last week to introduce a new bill, the Secure Data Act of 2018, aimed at limiting the ability of federal agencies to seek court orders broadly requiring Apple and other technology vendors to help breach their own security technologies. (The bill would exclude court orders based on the comparatively narrow Communications Assistance for Law Enforcement Act—a.k.a. CALEA, passed in 1994--which requires telecommunications companies to assist federal agencies in implementing targeted wiretaps.)

This isn't the first time members of Congress in both parties have tried to limit the federal government's ability to demand that tech vendors build "backdoors" into their products. Bills similar to this year's Secure Data Act have been introduced a couple of times before in recent years. What makes this year's bill different, though, is the less-than-flattering light cast by the OIG report. (The bill's sponsors have expressly said as much.) At the very least the report makes clear that the FBI's own bureaucratic handling of the research into whether technical solutions were available to hack the locked iPhone led both to confusion about what was possible and to delays in resolving that confusion.

But worse than that is the report's suggestion that some technologically challenged FBI department heads didn't even know how to frame (or parse) the questions about whether the agency possessed, or had access to, technical solutions for cracking the locked iPhone. And even worse is the report's account that at least some Bureau leaders may not even have wanted to discover that such a technical solution was already available—because that discovery could undermine litigation they hoped would establish Apple's (and other vendors') general obligation to hack their own digital security if a court orders them to. As the report puts it:

After the outside vendor successfully demonstrated its technique to the FBI in late March, [Executive Assistant Director Amy] Hess learned of an alleged disagreement between the CEAU [Cryptographic and Electronic Analysis Unit] and ROU [Remote Operations Unit] Chiefs over the use of this technique to exploit the Farook iPhone – the ROU Chief wanted to use capabilities available to national security programs, and the CEAU Chief did not. She became concerned that the CEAU Chief did not seem to want to find a technical solution, and that perhaps he knew of a solution but remained silent in order to pursue his own agenda of obtaining a favorable court ruling against Apple. According to EAD Hess, the problem with the Farook iPhone encryption was the "poster child" case for the Going Dark challenge.

There's a lot to unpack here, and one key question is whether "capabilities available to national security programs" — that is, technologies used for the FBI's counterintelligence programs — can and should be used in pursuing criminal investigations and prosecutions. (If such technologies are used in criminal cases, the technologies may have to be revealed as part of court proceedings, which would bother the counterintelligence personnel in the FBI who don't want to publicize the tools they use.) But the case against Apple Inc. was based on a blanket assertion by the FBI that neither its technical divisions nor the vendors the agency works with had access to any technical measures to break into Farook's company-issued iPhone. (Farook had destroyed his personal iPhones, and the FBI's eventually successful unlocking of his employer-issued phone apparently produced no evidence relating to the terrorist plot.)

Was the problem just bureaucratic miscommunication? The OIG report concludes that this was the fundamental source of internal misunderstandings about whether the FBI had access to technical solutions that didn't require drafting Apple into compelled cooperation to crack its own security. (The report recommends some structural reforms to address this.) And certainly there's evidence in the report that miscommunication plus the occasional lack of technical understanding did create problems within the Bureau.

But the OIG report also suggests that some individuals within the Bureau actually may have preferred to be able to argue that the FBI didn't have any alternative but to seek to compel Apple's technical assistance:

The CEAU Chief told the OIG that, after the outside vendor came forward [with a technical solution], he became frustrated that the case against Apple could no longer go forward, and he vented his frustration to the ROU Chief. He acknowledged that during this conversation between the two, he expressed disappointment that the ROU Chief had engaged an outside vendor to assist with the Farook iPhone, asking the ROU Chief, "Why did you do that for?" According to the CEAU Chief, his unit did not ask CEAU's partners to check with their outside vendors. CEAU was only interested in knowing what their partners had in hand – indicating that checking with "everybody" did not include OTD's trusted vendors, at least in the CEAU Chief's mind.

I have to note here, of course, that the FBI has consistently opposed strong encryption and other essential digital-security technologies since the "Crypto Wars" of the 1990s. This isn't due to any significant failures of the agency to acquire evidence it needs; instead, it's due to the FBI's fears that its ability to capture digital evidence of any sort may someday be significantly hindered by encryption and other security tech. That opposition to strong security tech has been baked into FBI culture for a while, and it's at the root of the agency's fears of "the Going Dark challenge."

Let's be real: it's not clear that encryption will ever be the problem the FBI thinks it is, given that we live in what law professor Peter Swire has called "The Golden Age of Surveillance." But if the day that digital-security technology significantly hinders criminal investigations ever does come, then it would be appropriate for Congress to consider whether CALEA should be updated, or whether a new CALEA-like framework for technology companies like Apple should be enacted.

But that day hasn't come yet. That's why I favor passage of the Secure Data Act of 2018 — it would limit federal agencies' ability to impose general-purpose technology mandates through the courts' interpretation of a two-century-old ambiguous statute. (Among other features, the Act also would effectively clarify that the All Writs Act, a general-purpose statutory provision from the 18th century, can't be invoked all by itself to compel technology companies to undermine the very digital security measures they've been working so hard to strengthen.) In the long term, our security (in both cyberspace and meatspace) is going to depend much more on whether we all have technical tools that protect our information and data than on whether the FBI has a legal mandate compelling Apple to hack into our iPhones.

Of course, I may be wrong about this. But I share Apple CEO Tim Cook's argument that this public-policy issue ought to be fully debated by our lawmakers; Congress is a better venue for policy development than a lawsuit filed over a single dramatic incident like the terrorist attack in San Bernardino.

Mike Godwin (@sfmnemonic) is a Distinguished Senior Fellow with R Street Institute.

10 Comments

Posted on Techdirt - 6 March 2018 @ 12:06pm

Mike Godwin's First Essay On Encryption And The Constitution

from the going-back dept

Mike Godwin (you know who he is) was recently going through some of his earlier writings, and came across an essay (really an outline) he had written to the Cypherpunks email list 25 years ago, in April of 1993, concerning the Clipper Chip and early battles over encryption and civil liberties. If you don't recall, the Clipper Chip was an early attempt by the Clinton administration to establish a form of backdoored encryption, using a key escrow system. What became quite clear in reading through this 25-year-old email is just how little has changed in the past 25 years. As we are in the midst of a new crypto war, Godwin has suggested republishing the essay to take a look back at what was said back then and compare it to today.
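For readers who never lived through the Clipper era, a quick illustration of the key-escrow idea may help before the reprint. The Python sketch below is a deliberately simplified, hypothetical model: a device's encryption key is split into two shares, each held by a separate escrow agent, so that neither agent alone learns anything about the key, but the government, having obtained both shares through legal process, can reconstruct it. It is not the actual Clipper/Skipjack design (which used a classified cipher in tamper-resistant hardware), and the function names and key length here are for demonstration only.

import secrets

def split_key(unit_key):
    # Split a device's unit key into two escrow shares using XOR:
    # either share alone is indistinguishable from random noise,
    # but the two shares together recover the key exactly.
    share_a = secrets.token_bytes(len(unit_key))
    share_b = bytes(k ^ a for k, a in zip(unit_key, share_a))
    return share_a, share_b

def recover_key(share_a, share_b):
    # Reconstructing the key requires BOTH shares -- in the Clipper
    # scheme, that meant serving process on both escrow agents.
    return bytes(a ^ b for a, b in zip(share_a, share_b))

# Hypothetical 80-bit (10-byte) unit key for a single device.
unit_key = secrets.token_bytes(10)
agent_one_share, agent_two_share = split_key(unit_key)
assert recover_key(agent_one_share, agent_two_share) == unit_key

The policy fight Godwin describes below was never about whether such a scheme could work mechanically; it was about whether the government should be able to require it, and to restrict the non-escrowed cryptography that would make it pointless.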

From: Mike Godwin
Subject: Some thoughts on Clipper and the Constitution
To: e*c
Date: Mon, 26 Apr 93 11:15:17 EDT

Note: These notes were a response to a question during Saturday's Cypherpunks meeting about the possible implications of the Clipper Chip initiative on Fourth Amendment rights. Forward to anyone else who might think these interesting.

--Mike

Notes on Cryptography, Digital Telephony, and the Bill of Rights By Mike Godwin

I. Introduction

A. The recent announcement of the federal government's "Clipper Chip" has started me thinking again about the principled "pure Constitutional" arguments a) opposed to Digital Telephony and b) in favor of the continuing legality of widespread powerful public-key encryption.

B. These notes do *not* include many of the complaints that have already been raised about the Clipper Chip initiative, such as:

(1) Failure of the Administration to conduct an inquiry before embracing a standard,
(2) Refusal to allow public scrutiny of the chosen encryption algorithm(s), which is the normal procedure for testing a cryptographic scheme, and
(3) Failure of the administration to address the policy questions raised by the Clipper Chip, such as whether the right balance between privacy and law-enforcement needs has been struck.

C. In other words, they do not address complaints about the federal government's *process* in embracing the Clipper Chip system. They do, however, attempt to address some of the substantive legal and Constitutional questions raised by the Clipper Chip and Digital Telephony initiatives.

II. Hard Questions from Law Enforcement

A. In trying to clarify my own thinking about the possible Constitutional issues raised by the government's efforts to guarantee access to public communications between individuals, I have spoken and argued with a number of individuals who are on the other side of the issues from me, including Dorothy Denning and various representatives of the FBI, including Alan McDonald.

B. McDonald, like Denning and other proponents both of Digital Telephony and of a standard key-escrow system for cryptography, is fond of asking hard questions: What if FBI had a wiretap authorization order and couldn't implement it, either because it was impossible to extract the right bits from a digital-telephony data stream, or because the communication was encrypted? Doesn't it make sense to have a law that requires the phone companies to be able to comply with a wiretap order?

C. Rather than respond to these questions, for now at least let's ask a different question. Suppose the FBI had an authorization order for a secret microphone at a public restaurant. Now suppose it planted the bug, but couldn't make out the conversation it was authorized to "seize" because of background noise at the restaurant. Wouldn't it make sense to have a law requiring everyone to speak more softly in restaurants and not to clatter the dishes so much?

D. This response is not entirely facetious. The Department of Justice and the FBI have consistently insisted that they are not seeking new authority under the federal wiretap statutes ("Title III"). The same statute that was drafted to outline the authority for law enforcement to tap telephonic conversations was also drafted to outline law enforcement's authority to capture normal spoken conversations with secret or remote microphones. (The statute was amended in the middle '80s by the Electronic Communications Privacy Act to protect "electronic communications," which includes e-mail, and a new chapter protecting _stored_ electronic communications was also added.)

E. Should we understand the law the way Digital Telephony proponents insist we do--as a law designed to mandate that the FBI (for example) be guaranteed access to telephonic communications? Digital Telephony supporters insist that it merely "clarifies" phone company obligations and governmental rights under Title III. If they're right, then I think we have to understand the provisions regarding "oral communications" the same way. Which is to say, it would make perfect sense to have a law requiring that people speak quietly in public places, so as to guarantee that the government can bug an oral conversation if it needs to.

F. But of course I don't really take Digital Telephony as an initiative to "clarify" governmental prerogatives. It seems clear to me that Digital Telephony, together with the "Clipper" initiative, prefigure a government strategy to set up an information regime that precludes truly private communications between individuals who are speaking in any way other than face-to-face. This I think is an expansion of government authority by almost any analysis.

III. Digital Telephony, Cryptography, and the Fourth Amendment

A. In talking with law enforcement representatives such as Gail Thackeray, one occasionally encounters the view that the Fourth Amendment is actually a _grant_ of a Constitutional entitlement to searches and seizures. This interpretation is jolting to those who have studied the history of the Fourth Amendment and who recognize that it was drafted as a limitation on government power, not as a grant of government power. But even if one doesn't know the history of this amendment, one can look at its language and draw certain conclusions.

B. The Fourth Amendment reads: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

C. Conspicuously missing from the language of this amendment is any guarantee that the government, with properly obtained warrant in hand, will be _successful_ in finding the right place to be searched or persons or things to be seized. What the Fourth Amendment is about is _obtaining warrants_--similarly, what the wiretap statutes are about is _obtaining authorization_ for wiretaps and other interceptions. Neither the Fourth Amendment nor Title III nor the other protections of the ECPA constitutes an _entitlement_ for law enforcement.

D. It follows, then, that if digital telephony or widespread encryption were to create new burdens for law enforcement, this would not, as some law-enforcement representatives have argued, constitute an "effective repeal" of Title III. What it would constitute is a change in the environment in which law enforcement, along with the rest of us, has to work. Technology often creates changes in our social environment--some, such as the original innovation of the wiretap, may aid law enforcement, while others, such as powerful public-key cryptography, pose the risk of inhibiting law enforcement. Historically, law enforcement has responded to technological change by adapting. (Indeed, the original wiretaps were an adaptation to the widespread use of the telephone.) Does it make sense for law enforcement suddenly to be able to require that the rest of society adapt to its perceived needs?

IV. Cryptography and the First Amendment

A. Increasingly, I have come to see two strong links between the use of cryptography and the First Amendment. The two links are freedom of expression and freedom of association.

B. By "freedom of expression" I mean the traditionally understood freedoms of speech and the press, as well as freedom of inquiry, which has also long been understood to be protected by the First Amendment. It is hard to see how saying or publishing something that happens to be encrypted could not be protected under the First Amendment. It would be a very poor freedom of speech indeed that dictated that we could *never* choose the form in which we speak. Even the traditional limitations on freedom of speech have never reached so far. My decision to encrypt a communication should be no more illegal than my decision to speak in code. To take one example, suppose my mother and I agree that the code "777", when sent to me through my pager, means "I want you to call me and tell me how my grandchild is doing." Does the FBI have a right to complain because they don't know what "777" means? Should the FBI require pager services never to allow such codes to be used? The First Amendment, it seems to me, requires that both questions be answered "No."

C. "Freedom of association" is a First Amendment right that was first clearly articulated in a Supreme Court case in 1958: NAACP v. Alabama ex rel. Patterson. In that case, the Court held that Alabama could not require the NAACP to disclose a list of its members residing in Alabama. The Court accepted the NAACP's argument that disclosure of its list would lead to reprisals on its members; it held such forced disclosures, by placing an undue burden on NAACP members' exercise of their freedoms of association and expression, effectively negate those freedoms. (It is also important to note here that the Supreme Court in effect recognized that anonymity might be closely associated with First Amendment rights.)

D. If a law guaranteeing disclosure of one's name is sufficiently "chilling" of First Amendment rights to be unconstitutional, surely a law requiring that the government be able to read any communications is also "chilling," not only of my right to speak, but also of my decisions on whom to speak to. Knowing that I cannot guarantee the privacy of my communications may mean that I don't conspire to arrange any drug deals or kidnapping-murders (or that I'll be detected if I do), but it also may mean that I choose not to use this medium to speak to a loved one, or my lawyer, or to my psychiatrist, or to an outspoken political activist. Given that computer-based communications are likely to become the dominant communications medium in the next century, isn't this chilling effect an awfully high price to pay in order to keep law enforcement from having to devise new solutions to new problems?

V. Rereading the Clipper Chip announcements

A. It is important to recognize that the Clipper Chip represents, among other things, an effort by the government to pre-empt certain criticisms. The language of the announcements makes clear that the government wants us to believe it has recognized all needs and come up with a credible solution to the dilemma many believe is posed by the ubiquity of powerful cryptography.

B. Because the government is attempting to appear to take a "moderate" or "balanced" position on the issue, its initiative will tend to pre-empt criticisms of the government's proposal on the grounds of *process* alone.

C. But there is more to complain about here than bad process. My rereading of the Clipper Chip announcements will reveal that the government hopes to develop a national policy that includes limitations on some kinds of cryptography. Take the following two statements, for example:

D. 'We need the "Clipper Chip" and other approaches that can both provide law-abiding citizens with access to the encryption they need and prevent criminals from using it to hide their illegal activities.'

E. 'The Administration is not saying, "since encryption threatens the public safety and effective law enforcement, we will prohibit it outright" (as some countries have effectively done); nor is the U.S. saying that "every American, as a matter of right, is entitled to an unbreakable commercial encryption product." '

F. It is clear that neither Digital Telephony nor the Clipper Chip makes any sense without restrictions on other kinds of encryption. Widespread powerful public-key encryption, for example, would render useless any improved wiretappability in the communications infrastructure, and would render superfluous any key-escrow scheme.

G. It follows, then, that we should anticipate, consistent with these two initiatives, an eventual effort to prevent or inhibit the use of powerful private encryption schemes in private hands.

H. Together with the Digital Telephony and Clipper Chip initiatives, this effort would, in my opinion, constitute an attempt to shift the Constitutional balance of rights and responsibilities against private entities and individuals and in favor of law enforcement. They would, in effect, create _entitlements_ for law enforcement where none existed before.

I. As my notes here suggest, these initiatives may be, in their essence, inconsistent with Constitutional guarantees of expression, association, and privacy.

21 Comments

Posted on Techdirt - 8 February 2018 @ 12:34pm

Mike Godwin Remembers John Perry Barlow

from the more-rememberances dept

Earlier today we posted Mike Masnick's post about the passing of John Perry Barlow, but Mike Godwin, who was EFF's first lawyer among other things, sent over his memories of Barlow as well, which are well worth reading.

It’s the nature of having known John Perry Barlow, and having been his friend, that you can’t write about what it means to have lost him Wednesday morning (he died in his sleep at the too-young age of 70) without writing about how he changed your life. So, I ask your forgiveness in advance if I say too much about myself here on the way to saying more about John.

I can and will testify that I had a life before I met John Perry Barlow. At the beginning of 1990 I was finishing up law school in Texas (only one more semester and then the bar exam!) and was beginning to think about my professional future (how about being a prosecutor in Houston?) and my personal future (should my long-term girlfriend and I get married?).

That was the glide path I was on before Grateful Dead lyricist John Perry Barlow, together with software entrepreneur Mitch Kapor and Sun Microsystems pioneering programmer John Gilmore, decided to start what would shortly be known as the Electronic Frontier Foundation (EFF). EFF disrupted all my inertial, half-formed plans and changed my life forever. (I didn’t, for example, become a prosecutor.) And John Perry Barlow was the red-hot beating heart of EFF.

I’d been feeling tremors in the Force before EFF even had a name, though. For reasons I can’t quite explain, I’d found ways to persuade people, including my university, to give me access to internet-capable accounts and services so that I could see the rest of the digital world as it was then represented in Usenet. I’d been a BBS hobbyist in the 1980s, but I thought I’d exhausted the BBS scene in Austin and wanted to know more of the larger digital world. Thanks to Usenet, over the Christmas break before my last semester of law school I’d become friends online with Clifford Stoll, whose book “The Cuckoo’s Egg” detailed how he had detected and helped thwart a foreign plot to hack into U.S. academic and research computers. Cliff had included his email address in the book and, as we so often did in those days, I just fired off a note to him and got to know him.

At about the same time, at my girlfriend’s urging, we spent a couple of days in San Francisco at MacWorld Expo, where I first met Mitch Kapor, who wore a Hawaiian shirt and demo’d what became for years my favorite Mac application, On Location. Other things were happening as well, and my computer-hobbyist nature—never too far in the background during my law-student years—kept me attuned to what seemed to be happening in the larger world which, as I would have framed it back then, seemed to reflect a convergence of my interests in constitutional law and cyberspace.

Just a month or two later, I came across the March 1990 issue of Harper’s Magazine, and there on the cover was this colloquy edited by Jack Hitt and Paul Tough titled, “Is Computer Hacking a Crime?” (Harper’s theoretically makes a download of that old article available, but the links don’t work. You can find a transcribed version here). I wasn’t a subscriber, but I knew I had to read this. And there was Barlow – whose name I didn’t recognize – along with luminaries like Stewart Brand (former Merry Prankster, later the founder of The Whole Earth Catalog and The Whole Earth Review), Richard Stallman (founder and chief visionary of the Free Software movement that gave birth to the Linux operating system) and my new friend Cliff Stoll. They all had lots of opinions about computer hacking, but the participant whose words spoke most clearly to me was Barlow:

“BARLOW [Day 1, 11:54 A.M.]: Hackers hack. Yeah, right, but what's more to the point is that humans hack and always have. Far more than just opposable thumbs, upright posture, or excess cranial capacity, human beings are set apart from all other species by an itch, a hard-wired dissatisfaction. Computer hacking is just the latest in a series of quests that started with fire hacking. Hacking is also a collective enterprise. It brings to our joint endeavors the simultaneity that other collective organisms -- ant colonies, Canada geese -- take for granted. This is important, because combined with our itch to probe is a need to connect. Humans miss the almost telepathic connectedness that I've observed in other herding mammals. And we want it back. Ironically, the solitary sociopath and his 3:00 A.M. endeavors hold the most promise for delivering species reunion.”

This was a guy who really got it! A guy who recognized the itchiness in my brain compelling me to stay up nights finding ways to get into campus mainframes back in the 1970s, that had me tinkering with Apple II computers, with PCs and with Macs in the 1980s, and that had driven me to join the global Usenet conversation in just the last few months. Barlow saw that what we were doing with computers now (that is, in the 1980s and 1990s, at the dawn of the public internet) was essentially human—that human beings, being what they are, couldn’t stop themselves from doing it. And look at the line Barlow draws in this contribution (his first in the public colloquy in Harper’s)--it’s a line connecting human beings’ invention/discovery of fire (or “fire hacking”) with our use of computers to communicate with one another. “This is important, because combined with our itch to probe is the need to connect.” We miss our “almost telepathic connectedness.” And, as Barlow wrote, “we want it back.”

During my law school years—as well as the year I took off to serve as editor of the University of Texas student newspaper, The Daily Texan—I’d relied on computer BBSes to stay connected with people outside my studies, outside my work. Yet I’d begun to recognize that computer communications were just the same kinds of speech that our Constitution and Bill of Rights were meant to protect. I tried to persuade a favorite professor to let me write a research paper, for credit, on the First Amendment and computer bulletin boards. The professor (an immensely well-regarded First Amendment scholar, and deservedly so) shut me down, essentially saying that First Amendment doctrine was all settled, and that computer bulletin-board systems didn’t really alter fundamental questions about, say, publisher liability or what counts as speech or the press. Barlow, speaking in the Harper’s-sponsored forum on the WELL’s conferencing system, had seen something in the nascent online world that my professor had missed, and that I’d already had inklings about.

You also see in Barlow’s participation in that Harper’s forum certain long-term traits that sometimes bugged those of us who loved him. Barlow frequently yielded to the temptation to utter oracular pronouncements, to jump to conclusions before he’d done the reading. In what started out as a minor contretemps with “Acid Phreak” and “Phiber Optik,” participants who championed the exploratory hacking of computer systems—especially those of corporate giants—Barlow wrote this:

“BARLOW [Day 19, 9:48 P.M.]: Let me define my terms. Using hacker in a midspectrum sense (with crackers on one end and Leonardo da Vinci on the other), I think it does take a kind of genius to be a truly productive hacker. I'm learning PASCAL now, and I am constantly amazed that people can string those prolix recursions into something like PageMaker. It fills me with the kind of awe I reserve for splendors such as the cathedral at Chartres. With crackers like Acid and Optik, the issue is less intelligence than alienation. Trade their modems for skateboards and only a slight conceptual shift would occur. Yet I'm glad they're wedging open the cracks. Let a thousand worms flourish.”

To which Phiber Optik responded with this:

“OPTIK [Day 10, 10:11 P.M.]: You have some pair of balls comparing my talent with that of a skateboarder. Hmm... This was indeed boring, but nonetheless: [Editor's note: At this point in the discussion, Optik -- apparently having hacked into TRW's computer records -- posted a copy of Mr. Barlow's credit history. In the interest of Mr. Barlow's privacy -- at least what's left of it -- Harper's Magazine has not printed it.] I'm not showing off. Any fool knowing the proper syntax and the proper passwords can look up credit history. I just find your high-and-mighty attitude annoying and, yes, infantile.”

Barlow was stunned, just as you or I would have been, to see TRW’s version of his credit history—including its errors—published online. But the next thing he did was brilliant, and it’s not something anyone else would necessarily do. As Barlow recounts it in an article he wrote later that spring:

“I've been in redneck bars wearing shoulder-length curls, police custody while on acid, and Harlem after midnight, but no one has ever put the spook in me quite as Phiber Optik did at that moment. I realized that we had problems which exceeded the human conductivity of the WELL's bandwidth. If someone were about to paralyze me with a spell, I wanted a more visceral sense of him than could fit through a modem.

“I e-mailed him asking him to give me a phone call. I told him I wouldn't insult his skills by giving him my phone number and, with the assurance conveyed by that challenge, I settled back and waited for the phone to ring. Which, directly, it did.

“In this conversation and the others that followed I encountered an intelligent, civilized, and surprisingly principled kid of 18 who sounded, and continues to sound, as though there's little harm in him to man or data. His cracking impulses seemed purely exploratory, and I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves.”

This is where you see one of Barlow’s great gifts, fully as much of a talent as his lyrical wordsmithing. Barlow saw past his own feelings of fear and uncertainty and reached out to the human being behind the hacker handle, and found, in Phiber Optik, someone who deserved, in Barlow’s view, more admiration than fear. As he wrote about it in 1990, “The terrifying poses which Optik and Acid had been striking on screen were a media-amplified example of a human adaptation I'd seen before: One becomes as he is beheld. They were simply living up to what they thought we and, more particularly, the editors of Harper's, expected of them. Like the televised tears of disaster victims, their snarls adapted easily to mass distribution.”

Barlow also wrote this:

“Months later, Harper's took Optik, Acid and me to dinner at a Manhattan restaurant which, though very fancy, was appropriately Chinese. Acid and Optik, as material beings, were well-scrubbed and fashionably-clad. They looked to be dangerous as ducks.”

They looked to be dangerous as ducks. I’d have given a toe, or maybe even a finger, to have written a sentence that apt.

Barlow’s larger insight—that maybe our sense of the threats of computers and the internet and the first generation of human beings to grow up with super-duper computer skills was just another iteration of our human fear of change and the new—informed Barlow’s co-founding of a new civil-liberties organization, originally pitched as “the Computer Liberty Foundation.” The other co-founders—Mitch Kapor and John Gilmore, themselves breathtakingly remarkable people just as much as Barlow was (here I first typed “as Barlow is” because he still feels so present)—recognized that “Computer Liberty Foundation” was a bit clunky. Barlow, the poet who’d also been a rancher in Pinedale, Wyoming, coined the name that stuck: Electronic Frontier Foundation.

EFF, which was then primarily just Barlow, Kapor, and Gilmore, eventually decided they needed an in-house lawyer to help with the legal cases that were bubbling up with increasing frequency. I’d already been active in publicizing those cases, starting as a law student, then as a recent law graduate, even as I was studying for the Texas bar exam. Marc Rotenberg, then head of the Washington office of the Computer Professionals for Social Responsibility, had reached out to me as a possible staff member; CPSR was the recipient of EFF’s first grant for cyberspace legal research, and they needed to staff up. Rotenberg flew me to EFF’s first big press conference—this at the Washington Press Club—and it was there that I met Kapor (again) and Barlow for the first time. I got to hang out with these guys not just that day but also in the evening at a dinner meeting that included other people who’d later be EFF supporters and even board members. The main thing I remember from the dinner meeting is talking to Barlow—he’d called himself an “information mystic” (I think he was just trying out the term for size), and I piped up about Claude Shannon and information theory and my understanding of information as something more scientific than mystical. Of course, Barlow already knew about Shannon, about Teilhard de Chardin’s notion of the “noosphere,” about Aristotle’s precursor concept of “substantial form.” I knew instantly that I would get along with this guy.

I got recruited, not just by CPSR, but by EFF, and I became EFF’s first staff counsel (and, in fact, EFF’s first full-time employee). The nine years I spent at EFF were my first nine years as a lawyer, and every single one of those years was a year of revelation, always informed by Barlow’s openness, adventurousness and willingness to grapple with new problems and new ideas.

Ultimately, Barlow didn’t think every looming problem in cyberspace was as harmless as a duck. Like the rest of us at EFF (which began to expand in the following years), Barlow recognized the fear of encryption technology, the fear of computer-facilitated copyright infringement, and the fear of “cyberporn” as the kind of neophobia so common in eras of technological change. When Congress passed the Communications Decency Act in 1996, which would have imposed massive censorship on the now-blooming internet, he channeled the anxiety all of us were feeling into his crafting of “A Declaration of the Independence of Cyberspace.”

I confess I didn’t much like this Declaration when Barlow shared it and later published it. With what Barlow admitted was “characteristic grandiosity,” the Declaration asserted that traditional, terrestrial governments “have no sovereignty where we gather” (that is, in cyberspace), and that “the global social space we are building” is “naturally independent of the tyrannies you seek to impose upon us.” By then I was already deep in my work for EFF on the constitutional challenge to the Communications Decency Act, and the hard fact that haunted my days was how fragile this new global social space was, and how little independence of the tyrannies it might ultimately have.

I was missing the forest for the trees. The simple fact is this: Barlow inspired a new generation of lawyers and activists to devote time and energy to preserving the great new world the internet and other digital technologies were giving us. As I wrote earlier this year in an essay for Cato Unbound:

“Here I must share some late-breaking news from the 1990s: the actual cyber-activists of that period (and here I must include myself) did not interpret Barlow’s cri de coeur as political philosophy. Barlow, best known prior to his co-founding of the Electronic Frontier Foundation as a songwriter for the Grateful Dead, was writing to inspire activism, not to prescribe a new world order, and his goal was to be lyrical and aspirational, not legislative. Barlow wrote and published his “Declaration” in the short days and weeks after Congress passed, and President Clinton signed into law, a telecommunications bill that aimed, in part, to censor the internet. No serious person – and certainly not the Electronic Frontier Foundation and other organizations that successfully challenged the Communications Decency Act provisions of that bill – believed that cyberspace would be “automagically” independent of the terrestrial world and its governments. Barlow’s “Declaration” is best understood, as Wired described it two decades later, as a “rallying cry.” Similarly, nobody thinks “The Star-Spangled Banner” or “America the Beautiful” or “This Land Is Your Land” is a constitution. (And of course the original Declaration of Independence isn’t one either.)”

Barlow had written his own inspirational anthem, and I’d like to think he’d particularly appreciate my comparing it to Woody Guthrie’s great song.

I can say one more thing about Barlow—about seeing him once again, for the last time in person, when a couple of friends and I visited him in spring of 2016 at John Gilmore’s house, where Barlow was continuing his long efforts at recovery from a heart attack and other problems that had reduced his mobility and energy but had not diminished his fundamentally optimistic outlook—optimism not just for himself and those he loved but for all of us. It was good to talk to John Perry Barlow that evening, to chat about nothing in particular, to reminisce a little. I had loved the man pretty much from the start and, circumstances being what they were, it was not the simple love of hero-worship from an adoring fan. Instead, it was the complicated, tricky love for someone with whom I got to share so many great moments of my life over many great (and not-so-great) years. It’s the love you end up having for lifelong friends, or for family members you’ve occasionally quarreled with over the years, but with whom you’ve shared so much, and with whom you’ve been able to do so much good work, that even when you disagree with them, you know ultimately all will be forgiven.

I can tell you what it felt like to sit down and catch up a bit with John Perry Barlow that last time. It felt like coming home.

Mike Godwin (mnemonic@gmail.com) is a Distinguished Senior Fellow with R Street Institute.


Posted on Techdirt - 30 January 2018 @ 10:42am

Everything That's Wrong With Social Media And Big Internet Companies: Part 2

from the the-list-is-growing dept

Late last year I published Part 1 of a project to map out all the complaints we hear about social media in particular and about internet companies generally. Now, here's Part 2.

This Part should have come earlier; Part 1 was published in November. I'd hubristically imagined this was a project that might take a week or a month. But I didn't take into account the speed with which the landscape of the criticism is changing. For example, just as you're trying to do more research into whether Google really is making us dumber, another pundit (Farhad Manjoo at the New York Times) comes along and argues that Apple -- a tech giant no less driven by commercial motives than Google and its parent company, Alphabet -- ought to redesign its products to make us smarter (by making them less addictive). That is, it's Apple's job to save us from Gmail, Facebook, Twitter, Instagram, and other attention-demanding internet media — which we connect to through Apple's products, as well as many others.

In these same few weeks, Facebook has announced it's retooling the experience for its users in ways aimed at making it more personal and interactive and less passive. Is this an implicit admission that Facebook, up until now, has been bad for us? If so, is it responding to the charges that many observers have leveled at social-media companies — that they're bad for us and that they're bad for democracy?

And just this past week, social-media companies have responded to concerns about political extremists (foreign and domestic) in Senate testimony. Although the senators had broad concerns (ISIS recruitment, bomb-making information on YouTube), there was, of course, some time devoted to the ever-present question of Russian "misinformation campaigns," which may not have altered the outcome of 2016's elections but still may aim to affect 2018 mid-terms and beyond.

These are recent developments, but coloring them all is a more generalized social anxiety about social media and big internet companies that is nowhere better summarized than in Senator Al Franken's last major public policy address. Whatever you think of Senator Franken's tenure, I think his speech was a useful accumulation of the growing sentiment among commentators that there's something out of control with social media and internet companies that needs to be brought back into control.

Now, let's be clear: even if I'm skeptical here about some claims that social media and internet giants are bad for us, that doesn't mean these criticisms necessarily lack any merit at all. But it's always worth remembering that, historically, every new mass medium (and mass-medium platform) has been declared first to be wonderful for us, and then to be terrible for us. So it's always important to ask whether any particular claim about the harms of social media or internet companies is reactive, reflexive... or whether it's grounded in hard facts.

Here are reasons 4, 5, and 6 to believe social media are bad for us. (Remember, reasons 1, 2, and 3 are here.)

(4) Social media (and maybe some other internet services) are bad for us because they're super-addictive, especially on our sweet, slick handheld devices.

"It's Time for Apple to Build a Less Addictive iPhone," according to New York Times tech columnist Farhad Manjoo, who published a column to that effect recently. To be sure, although "Addictive" is in the headline, Manjoo is careful to say upfront that, although iPhone use may leave you feeling "enslaved," it's not "not Apple's fault" and it "isn't the same as [the addictiveness] of drugs or alcohol." Manjoo's column was inspired by an open letter from an ad-hoc advocacy group that included an investment-management firm and the California State Teachers Retirement System (both of which are Apple shareholders). The letter, available here at ThinkDifferentlyAboutKids.com (behind an irritating agree-to-these-terms dialog) calls for Apple to add more parental-control choices for its iPhones (and other internet-connected devices, one infers). After consulting with experts, the letter's signatories argue, "we note that Apple's current limited set of parental controls in fact dictate a more binary, all or nothing approach, with parental options limited largely to shutting down or allowing full access to various tools and functions." Per the letter's authors: "we have reviewed the evidence and we believe there is a clear need for Apple to offer parents more choices and tools to help them ensure that young consumers are using your products in an optimal manner."

Why Apple in particular? Obviously, the fact that two of the signatories own a couple of billion dollars' worth of Apple stock explains this choice to some extent. But one hard fact is that Apple's share of the smartphone market mostly stays in the 12-to-20-percent range. (Market leader Samsung has held 20-30 percent of the market since 2012.) Still, the implicit argument is that Apple's software and hardware designs for the iPhone will mostly lead the way for other phone-makers going forward, as they mostly have for the first decade of the iPhone era.

Still, why should Apple want to do this? The idea here is that Apple's primarily a hardware-and-devices company — which distinguishes Apple from Google, Facebook, Amazon, and Twitter, all of which primarily deliver an internet-based service. Of course, Apple's an internet company too (iTunes, Apple TV, iCloud, and so on), but the company's not hooked on the advertising revenue streams that are the primary fuel for Google, Facebook, and Twitter, or on the sales of other, non-digital merchandise (like Amazon). The ad revenue for the internet-service companies creates what Manjoo argues are "misaligned incentives" — because ad-driven businesses' economic interests lie in getting more users clicking on advertisements, he reasons, he's "skeptical" that (for example) Facebook is going to offer any real solution to the "addiction" problem. Ultimately, Manjoo agrees with the ThinkDifferentlyAboutKids letter -- Apple's in the best position to fix iPhone "addiction" because of its design leadership and independence from ad revenue.

Even so, Apple has other incentives to make iPhones addictive — notably, pleasing its other investors. Still, investors may ultimately be persuaded that Apple-led fixes, rooted in our devices, will spearhead improvements to our social-media experience. (See, for example, this column: Why Investors May Be the Next to Join the Backlash Against Big Tech's Power.)

It's worth remembering that the idea that technology is addictive is itself an addictive idea — not that long ago, it was widely (although not universally) believed that television was addictive. This New York Times story from 1990 advances that argument, although the reporter does quote a psychiatrist who cautions that "the broad definition" of addiction "is still under debate." (Manjoo's "less addictive iPhone" column inoculates itself, you'll recall, by saying iPhone addiction is "not the same.")

"Addiction" of course is an attractive metaphor, and certainly those of us who like using our electronics to stay connected can see the appeal of the metaphor. And Apple, which historically has been super-aware of the degree to which its products are attractive to minors, may conclude—or already have concluded, as the ThinkDifferentlyAboutKids folks admit — that more parental controls are a fine idea.

But is it possible that smartphones already incorporate a solution for addictiveness? Just the week before Manjoo's column, another Times writer, Nellie Bowles, asked whether we can make our phones less addictive just by playing with the settings. (The headline? "Is the Answer to Phone Addiction a Worse Phone?") Bowles argues, based on interviews with researchers, that simply setting your phone to use grayscale instead of color inclines users to respond less emotionally and impulsively—in other words, more mindfully—when deciding whether to respond to their phones. Bowles says she's trying the experiment herself: "I've gone gray, and it's great."

At first it seems odd to focus on the device's user interface (parental settings, or color palette) if the real problem of addictiveness is internet content (social media, YouTube and other video, news updates, messages). One can imagine a Times columnist in 1962—in the opening years of widespread color TV—responding to Newton Minow's famous "vast wasteland" speech by arguing that TV-set manufacturers should redesign sets so that they're somewhat more inconvenient—no remote controls, say—and less colorful to watch. (So much for NBC's iconic Peacock opening logo.)

In the interests of science, I'm experimenting with some of these solutions myself. For years already I've configured my iDevices not to bug me with every Facebook and Twitter update or new-email notice. Plus, I was worried about this grayscale thing on my iPhone X—one of the major features of which is a fantastic camera. But it turns out that you can toggle between grayscale and color easily once you've set gray as the default. I kind of like the novelty of all-gray—no addiction-withdrawal syndrome yet, but we'll see how that goes.

(5) Social media are bad for us because they make us feel bad, alienating us from one another and causing us to be upset much of the time.

Manjoo says he's skeptical whether Facebook is going to fix the addictiveness of its content and interactions with users, thanks to those "misaligned incentives." It should be said, of course, that Facebook's incentives—to use its free services to create an audience for paying advertisers—at least have the benefit of being straightforward. (Apple's not dependent on ads, but it still wants its new products to be attractive enough for users to want to upgrade.) Still, Facebook's Mark Zuckerberg has announced that the company is redesigning Facebook's user experience (focusing first on its news feed) to emphasize quality time ("time well spent") over more "passive" consumption of the Facebook ads and video that may generate more hits for some advertisers. Zuckerberg maintains that Facebook, even as it has operated over the last decade-plus of general public access, has been good for many and maybe for most users:

"The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health."

Even so, Zuckerberg writes (translating what Facebook has been hearing from some social-science researchers), "passively reading articles or watching videos -- even if they're entertaining or informative -- may not be as good." This is a gentler way of characterizing what some researchers have recently been arguing, which is that, for some people at least, using Facebook causes depression. This article, for example, relies on sociologist Erving Goffman's conceptions of how we distinguish between our public and private selves as we navigate social interactions. Facebook, it's argued, "collapses" our public and private presentations—the result is what social-media researcher danah boyd calls "context collapse." A central idea here is that, because what we publish on Facebook for our circle is also to a high degree public, we are stressed by the need (or inability) to switch between versions of how we present ourselves. In addition to context collapse, the highly curated pages we see from other people on Facebook may suggest that their lives are happy in ways that ours are not.

I think both Goffman's and boyd's contributions to our understanding of the sociology of identity (both focus on how we present ourselves in context) are extremely useful, but it's important to think clearly about any links between Facebook (and other social media) and depression. To cut to the chase: there may in fact be strong correlations between social-media use and depression, at least for some people. But it's unclear whether social media actually cause depression; it seems just as likely that causation may go in the other direction. Consider that depression has also been associated with internet use generally (prior to the rise of social-media platforms), with television watching, and even, if you go back far enough, with what is perceived to be excessive consumption of novels and other fiction. Books, of course, are now regarded as redemptive diversions that may actually cure your depression.

So here's a reasonable alternative hypothesis: when you're depressed you seek diversion from depression—which may be Facebook, Twitter, or something else, like novels or binge-watching quality TV. It may be things that are genuinely good for you (books! Or The Wire!) or things that are unequivocally bad for you. (Don't try curing your depression with drinking!) Or it may be social media, which at least some users will testify they find energizing and inspiring rather than enervating and dispiriting.

As a longtime skeptic regarding studies of internet usage (a couple of decades ago I helped expose a fraudulent article about "cyberporn" usage), I don't think the research on social media and its potential harmful side-effects is any more conclusive than Facebook's institutional belief that its social-media platforms are beneficial. But I do think Facebook, as a dominant, highly profitable social-media platform, is under the gun. And, as I've written here and elsewhere, its sheer novelty may be generating a moral panic. So it's no wonder—especially now that the U.S. Congress and European regulators are paying more attention to social media—that we're seeing so many Facebook announcements recently that are aimed at showing the company's responsiveness to public criticism.

Whether you think anxiety about social-media is merited or otherwise, you may reasonably be cynical about whether a market-dominant for-profit company will refine itself to act more consistently in the public interest—even in the face of public criticism or governmental impulses to regulate. But such a move is not unprecedented. The key question is whether Facebook's course corrections -- steering us towards personal interactions over "passive" consumption of things like news reports -- really do help us. (For example, if you believe in the filter-bubble hypothesis, it seems possible that Facebook's privileging of personal interactions over news may make filter bubbles worse.) This brings us to Problem Number 6, below.

(6) Social media are bad for us because they're bad for democracy.

There are multiple arguments that Facebook and other social media (Twitter's another frequent target) are bad for democracy. The Verge provides a good beginning list here. The article notes that Facebook's own personnel—including its awesomely titled "global politics and government outreach director" — are acknowledging the criticisms by publishing a series of blog postings. The first one is from the leader of Facebook's "civic engagement team," and the others are from outside observers, including Harvard law professor Cass Sunstein (who's been a critic of "filter bubbles" since long before that term was invented—his preferred term is "information cocoons").

I briefly mentioned Sunstein's work in Part 1. Here in Part 2 I'll note mainly that Sunstein's essay for Facebook begins by listing ways in which social-media platforms are actually good for democracy. In fact, he writes, "they are not merely good; they are terrific." In spite of their goodness, Sunstein writes, they also exacerbate what he's discussed earlier (notably in a 1999 paper) as "group polarization." In short, he argues, the filter bubble makes like-minded people hold their shared opinions more extremely. The result? More extremism generally, unless deliberative forums are properly designed with appropriate "safeguards."

Perhaps unsurprisingly, given that Facebook is hosting his essay, Sunstein credits Facebook with taking steps to provide such safeguards, which in his view include Facebook chief Mark Zuckerberg's declaration that the company is working to fight misinformation in its news feed. But I like Sunstein's implicit recognition that political polarization, while bad, may be no worse as a result of social media in particular, or even this century's modern media environment as a whole:

"By emphasizing the problems posed by knowing falsehoods, polarization, and information cocoons, I do not mean to suggest that things are worse now than they were in 1960, 1860, 1560, 1260, or the year before or after the birth of Jesus Christ. Information cocoons are as old as human history."

(I made that argument, in similar form, in a debate with Farhad Manjoo—not then a Times columnist—almost a decade ago.)

Just as important, I think, is Sunstein's admission that we don't really have unequivocal data showing that social media are a particular problem even in relation to other modern media:

"Nor do I mean to suggest that with respect to polarization, social media are worse than newspapers, television stations, social clubs, sports teams, or neighborhoods. Empirical work continues to try to compare various sources of polarization, and it would be reckless to suggest that social media do the most damage. Countless people try to find diverse topics, and multiple points of view, and they use their Facebook pages and Twitter feeds for exactly that purpose. But still, countless people don't."

Complementing Sunstein's essay is a piece by Facebook's Samidh Chakrabarti, who underscores the company's new initiative to make News Feed contributions more transparent (so you can see who's funding a political ad or a seemingly authentic "news story"). Chakrabarti also expresses the company's hope that its "Trust Project for News On Facebook" will help users "sharpen their social media literacy." And Facebook's just announced its plan to use user rankings to rate media sources' credibility.

I'm all for more media literacy, and I love crowd-sourcing, and I support efforts to encourage both. But I share CUNY journalism professor Jeff Jarvis's concern that other components of Facebook's comprehensive response to public criticism may unintentionally undercut support, financial and otherwise, for trustworthy media sources.

Now, I'm aware that some critics are arguing that the data really are solidly showing that social media are undermining democracy. But I'm skeptical whether "fake news" on Facebook or elsewhere in social media changed the outcome of the 2016 election, not least because the Pew Research Center's study a year ago suggests that digital news sources weren't nearly as important as traditional media sources. (Notably, Fox News was hugely influential among Trump voters; there was no counterpart news source for Clinton voters.)

That said, there's no reason to dismiss concerns about social media, which may play an increasing role—as Facebook surely has—as an intermediary of the news. Facebook's Chakrabarti may want to promote "social media literacy," and the company has been forced to acknowledge that "Russian entities" tried to use Facebook as an "information weapon." But Facebook doesn't want in the least to play the role a social-media-literate citizenry should be playing for itself. Writes Chakrabarti:

"In the public debate over false news, many believe Facebook should use its own judgment to filter out misinformation. We've chosen not to do that because we don't want to be the arbiters of truth, nor do we imagine this is a role the world would want for us."

Of course some critics may disagree. As I've said above, the data are equivocal, but that hasn't made their interpreters equivocal. Take, for example, a couple of recent articles—one academic and another aimed at a popular audience—that cast doubt on whether the radical democratization of internet access is a good thing—or at least, whether it's as good a thing as we hoped for a couple of decades ago. One is UC Irvine professor Richard Hasen's law-review article published last year (set for formal publication in the First Amendment Law Review this year), which he helpfully distilled into an LA Times op-ed here. The other is Wired's February 2018 cover story: "It's the (Democracy-Poisoning) Golden Age of Free Speech." (The Wired article is also authored by an academic, UNC Chapel Hill sociology professor Zeynep Tufekci.)

Both Hasen's and Tufekci's articles underscore that internet access has inverted an assumption that long informed free-speech law—that the ability to reach mass audiences is necessarily going to be expensive and scarce. In the internet era, what we have instead is what UCLA professor Eugene Volokh memorably labeled, in a Yale Law Journal article more than 20 years ago, "cheap speech." Volokh correctly anticipated back then that internet-driven changes in the media landscape would lead some social critics to conclude that the First Amendment's broad protections for speech would need to be revised:

"As the new media arrive, they may likewise cause some popular sentiment for changes in the doctrine. Today, for instance, the First Amendment rules that give broad protection to extremist speakers-Klansmen, Communists, and the like-are relatively low-cost, because these groups are politically rather insignificant. Even without government regulation, they are in large measure silenced by lack of funds and by the disapproval of the media establishment. What will happen when the KKK becomes able to conveniently send its views to hundreds of thousands of supporters throughout the country, or create its own TV show that can be ordered from any infobahn-connected household?"

There, in a nutshell, is a prediction of the world we're living in now (except that we, fortunately, failed to adopt the term "infobahn"). Hasen believes "non-governmental actors"—that is, Facebook and Twitter and Google and the like — may be "best suited to counter the problems created by cheap speech." I think that's a bad idea, not least because corporate decision-making may be less accountable than public law and regulation, and because these companies face what Manjoo calls "misaligned incentives." Tufekci, I think, has the better approach. "[I]n fairness to Facebook and Google and Twitter," she writes in Wired, "while there's a lot they could do better, the public outcry demanding that they fix all these problems is mistaken." Because there are "few solutions to the problems of digital discourse that don't involve huge trade-offs," Tufekci insists that deciding what those solutions may be is necessarily a "deeply political decision"—one involving difficult discussions about what we ask the government to do... or not to do.

She's got that right. She's also right that we haven't had those discussions yet. And as we begin them, we need to remember that radically democratic empowerment (all that cheap speech) may be part of the problem, but it's also got to be part of the solution.

Update: Part 3 is now available.

Mike Godwin is a Distinguished Senior Fellow at R Street Institute.


Posted on Techdirt - 29 November 2017 @ 12:00pm

Everything That's Wrong With Social Media And Big Internet Companies: Part 1

from the and-there's-more-to-come dept

Some of today's anxiety about social-media platforms is driven by the concern that Russian operatives somehow used Facebook and Twitter to affect our electoral process. Some of it's due to a general perception that big American social-media companies, amorally or immorally driven by the profit motive, are eroding our privacy and selling our data to other companies or turning it over to the government—or both. Some of it's due to the perception that Facebook, Twitter, Instagram, and other platforms are bad for us—that maybe even Google's or Microsoft's search engines are bad for us—and that they make us worse people or debase public discourse. Taken together, it's more than enough fodder for politicians or would-be pundits to stir up generalized anxiety about big tech.

But regardless of where this moral panic came from, the current wave of anxiety about internet intermediaries and social-media platforms has its own momentum now. So we can expect many more calls for regulation of these internet tools and platforms in the coming months and years. Which is why it's a good idea to itemize the criticisms we've already seen, or are likely to see, in current and future public-policy debates about regulating the internet. We need to chart the kinds of arguments for new internet regulation that are going to confront us, so I've been compiling a list of them. It's a work in progress, but here are three major claims that are driving recent expressions of concern about social media and internet companies generally.

(1) Social media are bad for you because they use algorithms to target you, based on the data they collect about you.

It's well-understood now that Facebook and other platforms gather data about what interests you in order to shape what kinds of advertising you see and what kind of news stories you see in your news feed (if you're using a service that provides one). Some part of the anxiety here is driven by the idea (more or less correct) that an internet company is gathering data about your likes, dislikes, interests, and usage patterns, which means it knows more about you in some ways than perhaps your friends (on social media and in what we now quaintly call "real life") know about you. Possibly more worrying than that, the companies are using algorithms—computerized procedures aimed at analyzing and interpreting data—to decide what ads and topics to show you.
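To make that concrete, here is a deliberately simplified sketch in Python (my own illustration, not any platform's actual code; the data shapes and names are invented for the example) of how interest-based targeting works in principle: tally the topics attached to the things you've clicked on, then rank candidate ads by how much they overlap with that tally.

```python
# A minimal sketch of interest-based targeting (illustrative only; not any
# real platform's algorithm). "activity_log" and "candidate_ads" are
# hypothetical data shapes invented for this example.
from collections import Counter

def infer_interests(activity_log):
    """Tally the topic tags attached to items the user has clicked on."""
    return Counter(tag for item in activity_log for tag in item["tags"])

def rank_ads(candidate_ads, interests):
    """Order ads by how strongly their topics overlap with inferred interests."""
    scored = [(sum(interests[tag] for tag in ad["tags"]), ad["id"])
              for ad in candidate_ads]
    return [ad_id for _, ad_id in sorted(scored, reverse=True)]

activity = [{"tags": ["hiking", "camping"]}, {"tags": ["camping", "gear"]}]
ads = [{"id": "tent-sale", "tags": ["camping", "gear"]},
       {"id": "tax-prep", "tags": ["finance"]}]
print(rank_ads(ads, infer_interests(activity)))  # ['tent-sale', 'tax-prep']
```

Real systems are vastly more elaborate, of course, but this basic loop of inferring interests from behavior and scoring content against them is the part that makes people uneasy.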

It's worth noting, however, that commercial interests have been gathering data about you since long before the advent of the internet. In the 1980s and before in the United States, if you joined one book club or ordered one winter coat from Lands' End, you almost certainly ended up on mailing lists and received other offers and many, many mail-order catalogs. Your transactional information was marketed, packaged, and sold to other vendors (as was your payment and credit history). If false information was shared about you, you perhaps had some options ranging from writing remove-me-from-your-list letters to legal remedies under the federal Fair Credit Reporting Act. But the process was typically cumbersome, slow, and less-than-completely satisfactory (and still is when it comes to credit-bureau records). One advantage of some internet platforms is that (a) they give you options to quit seeing ads you don't like (and often to say just why you don't like them), and (b) the internet companies, anxious about regulation, don't exactly want to piss you off. (In that sense, they may be more responsive than TiVo could be.)

Of course it's fair—and, I think, prudent—to note that the combination of algorithms and "big data" may have real consequences for democracy and for freedom of speech. Yale's Jack Balkin has recently written an excellent law-review article that targets these issues. At the same time, it seems possible for internet platforms to anonymize data they collect in ways that pre-internet commercial enterprises never could.
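By way of illustration, here is a minimal sketch (again in Python, and again my own hypothetical, not a description of any company's actual practice) of two techniques a platform could use: replacing raw identifiers with keyed hashes so analysts never see the underlying account, and adding noise to aggregate counts, in the spirit of differential privacy, before sharing them.

```python
# Illustrative sketch only: pseudonymizing identifiers with a keyed hash and
# reporting aggregates with added noise (in the spirit of differential
# privacy). The key and the numbers here are invented for the example.
import hashlib
import hmac
import random

SECRET_KEY = b"rotate-me-regularly"  # hypothetical internal secret

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a keyed hash so internal analysis can
    join records without ever handling the underlying account name."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def noisy_count(true_count: int, scale: float = 2.0) -> int:
    """Report an aggregate count with Laplace-style noise so that no single
    user's presence or absence is identifiable from the published total."""
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return max(0, round(true_count + noise))

print(pseudonymize("alice@example.com")[:16])  # stable pseudonym prefix
print(noisy_count(1043))                       # e.g., 1041 or 1046
```

Neither step is a cure-all (a keyed hash is pseudonymization, not true anonymization, and the noise has to be calibrated carefully), but it's the kind of option a 1980s mailing-list broker simply didn't have.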

(2) Social media are bad for you because they allow you to create a filter bubble where you see only (or mostly) opinions you agree with. (2)(a) Social media are bad for you because they foment heated arguments between you and those you disagree with.

To some extent, these two arguments run against each other—if you only hang out online with people who think like you, it seems unlikely that you'll have quite so many fierce arguments, right? (But maybe the arguments between people who share most opinions and backgrounds are fiercer?) In any case, it seems clear that both "filter bubbles" and "flames" can occur. But when they do, statistical research suggests, it's primarily because of user choice, not algorithms. In fact, as a study in Public Opinion Quarterly reported last year, the algorithmically driven social-media platforms may be both increasing polarization and increasing users' exposures to opposing views. The authors summarize their conclusions this way:

"We find that social networks and search engines are associated with an increase in the mean ideological distance between individuals. However, somewhat counterintuitively, these same channels also are associated with an increase in an individual's exposure to material from his or her less preferred side of the political spectrum."

In contrast, the case that "filter bubbles" are a particular, polarizing problem relies to a large degree not on statistics but on anecdotal evidence. That is, the people who don't like arguing or who can't bear too different a set of political opinions tend to curate their social-media feeds accordingly, while people who don't mind arguments (or even love them) have no difficulty encountering heterodox viewpoints on Facebook or Twitter. (At various times I've fallen into one or the other category on the internet, even before the invention of social media or the rise of Google's search engine.)

The argument about “filter bubbles”—people self-segregating and self-isolating into like-minded online groups—predates modern social media and the dominance of modern search engines. Law professor Cass Sunstein advanced it in his 2001 book, Republic.com, and hosted a website forum to promote that book. I remember this well because I showed up in the forum to express my disagreement with his conclusions—hoping that my showing up as a dissenter would itself raise questions about Sunstein's version of the “filter bubble” hypothesis. I didn't imagine I'd change Sunstein's mind, though, so I was unsurprised to see that the professor has revised and refined his hypothesis, first in Republic.com 2.0 in 2007 and now in #Republic: Divided Democracy in the Age of Social Media, published just this year.

(3) Social media are bad for you because they are profit-centered, mostly (including the social media that don't generate profits).

"If you're not paying for the product, you're the product." That's a maxim with real memetic resonance, I have to admit. This argument is related to argument number 1 above, except that instead of focusing on one's privacy concerns, it's aimed at the even-more-disturbing idea that we're being commodified and sold by the companies who give us free services. This necessarily includes Google and Facebook, which provide users with free access but which gather data that is used primarily to target ads. Both of those companies are profitable. Twitter, which also serves ads to its users, isn't yet profitable, but of course aspires to be.

As a former employee of the Wikimedia Foundation—which is dedicated to providing Wikipedia and other informational resources to everyone in the world, for free—I don't quite know what to make of this. Certainly the accounts of the early days of Google or of Facebook suggest that advertising typically arose as a business model only after the founders realized that their new internet services needed to make money. But once any new company starts making money by the yacht-load, it's easy to dismiss the whole enterprise as essentially mercenary.

(In Europe, which is much more ambivalent about commercial enterprises than the United States, it's far more common to encounter this dismissiveness. This helps explain some of Europe's greater willingness to regulate the online world. The fact that so many successful internet companies are American also helps explain that impulse.)

But Wikipedia has steadfastly resisted even the temptation to sell ads—even though it could have become an internet commercial success just as IMDB.com has—because the Wikipedia volunteers and the Wikimedia Foundation see value in providing something useful and fun to everyone regardless of whether one gets rich doing so. So do the creators of free and open-source software. If creating free products and services doesn't always mean you're out to sell other people into data slavery, shouldn't we at least consider the possibility that social-media companies may really mean it when they declare their intentions to do well by doing good? (“Do Well By Doing Good” is a maxim commonly attributed to Benjamin Franklin—who of course sold advertising, and even wrote advertising copy, for his Pennsylvania Gazette.) I think it's a good idea to follow Mike Masnick's advice to stop repeating this “you're the product” slogan—unless you're ready to condemn all traditional journals that subsidize giving their content to you through advertising.

So those are the current three chart-toppers for the Social-Media-Are-Bad-For-You Greatest Hits. But this is a crowded field—only the tip of the iceberg when it comes to trendy criticisms of social-media platforms, search engines, and unregulated mischievous speech on the internet--and we expect to see many other competing criticisms of Facebook, Twitter, Google, etc. surface in the weeks and months to come. I'm already working on Part 2.

Update: It took some time, but Part 2 and Part 3 are now available.

Mike Godwin (@sfmnemonic) is a Distinguished Senior Fellow at the R Street Institute.


Posted on Techdirt - 27 October 2017 @ 9:23am

Back Down The Rabbit Hole About Encryption On Smartphones

from the the-rule-of-law dept

Deputy Attorney General Rod Rosenstein wrote the disapproving memo that President Trump used as a pretext to fire FBI Director James Comey in May. But in at least one area of law-enforcement policy, Rosenstein and Comey remain on the same page—the Deputy AG set out earlier this month to revive the former FBI director's efforts to limit encryption and other digital security technologies. In doing so, Rosenstein has drawn upon nearly a quarter century of the FBI's anti-encryption tradition. But it's a bad tradition.

Like many career prosecutors, Deputy Attorney General Rod Rosenstein is pretty sure he's more committed to upholding the U.S. Constitution and the rule of law than most of the rest of us are. This was the thrust of Rosenstein's recent October 10 remarks on encryption, delivered to an audience of midshipmen at the U.S. Naval Academy.

The most troubling aspect of Rosenstein's speech was his insistence that, while the government's purposes in defeating encryption are inherently noble, the motives of companies that provide routine encryption and other digital-security tools (the way Apple, Google and other successful companies now do) are inherently selfish and greedy.

At the same time, Rosenstein said those who disagree with him on encryption policy as a matter of principle—based on decades of grappling with the public-policy implications of using strong encryption versus weak encryption or no encryption—are "advocates of absolute privacy." (We all know that absolutism isn't good, right?)

In his address, Rosenstein implied that federal prosecutors are devoted to the U.S. Constitution in the same way that Naval Academy students are:

"Each Midshipman swears to 'support and defend the Constitution of the United States against all enemies, foreign and domestic.' Our federal prosecutors take the same oath."

Of course, he elides the fact that many who differ with his views on encryption—including yours truly, as a lawyer licensed in three jurisdictions—have also sworn, multiple times, to uphold the U.S. Constitution. What's more, many of the constitutional rights we now regard as sacrosanct, like the Fifth Amendment privilege against self-incrimination, were only vindicated over time under our rule of law—frequently in the face of overreaching by law-enforcement personnel and federal prosecutors, all of whom also swore to uphold the Constitution.

The differing sides of the encryption policy debate can’t be reduced to supporting or opposing the rule of law and the Constitution. But Rosenstein chooses to characterize the debate this way because, as someone whose generally admirable career has been entirely within government, and almost entirely within the U.S. Justice Department, he simply never attempted to put himself in the position of those with whom he disagrees.

As I've noted, Rosenstein's remarks draw on a long tradition. U.S. intelligence agencies, together with the DOJ and the FBI, have reflexively resorted to characterizing their opponents in the encryption debate as fundamentally mercenary (if they're companies) or fundamentally unrealistic (if they're privacy advocates). Steven Levy's 2001 book Crypto, which documented the encryption policy debates of the 1980s and 1990s, details how the FBI framed the question for the Clinton administration:

"What if your child is kidnapped and the evidence necessary to find and rescue your child is unrecoverable because of 'warrant-proof' encryption?"

The Clinton administration's answer—deriving directly from George H.W. Bush-era intelligence initiatives—was to try to create a government standard built around a special combination of encryption hardware and software, labeled "the Clipper Chip" in policy shorthand. If the U.S. government endorsed a high-quality digital-security technology that also was guaranteed not to be "warrant-proof"—that allowed special access to government agents with a warrant—the administration asserted this would provide the appropriate "balance" between privacy guarantees and the rule of law.
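For readers who want to see what "special access" means mechanically, here is a conceptual sketch in Python, using the freely available cryptography package (pip install cryptography); the two-layer wrapping structure and the names are my own simplification, not the actual Clipper/Skipjack design. The per-message key gets wrapped twice, once for the recipient and once for a government escrow agent, so that a court order could, in theory, unlock any message.

```python
# A conceptual sketch of key escrow, the idea at the heart of the Clipper
# Chip debate. The structure here is my own simplification for illustration,
# not the actual Clipper/Skipjack design.
from cryptography.fernet import Fernet

recipient_key = Fernet.generate_key()   # held by the intended recipient
escrow_key = Fernet.generate_key()      # held by a government escrow agent

session_key = Fernet.generate_key()     # fresh key for this one message
ciphertext = Fernet(session_key).encrypt(b"meet at noon")

# The session key is wrapped twice: once so the recipient can read the
# message, and once so an escrow agent (with a warrant, in theory) can too.
wrapped_for_recipient = Fernet(recipient_key).encrypt(session_key)
wrapped_for_escrow = Fernet(escrow_key).encrypt(session_key)

# "Special access": whoever holds the escrow key can recover the session key
# and decrypt the message, whether or not the recipient cooperates.
recovered_session_key = Fernet(escrow_key).decrypt(wrapped_for_escrow)
print(Fernet(recovered_session_key).decrypt(ciphertext))  # b'meet at noon'
```

The critics' core objection is visible right in the sketch: the escrowed copy of every session key exists before anyone is suspected of anything, and whoever holds the escrow key, lawfully or not, can read everything.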

But, as Levy documents, the government's approach in the 1990s raised just as many questions then as Rosenstein's speech raises now. Levy writes:

"If a crypto solution was not global, it would be useless. If buyers abroad did not trust U.S. products with the [Clipper Chip] scheme, they would eschew those products and buy instead from manufacturers in Switzerland, Germany, or even Russia."

The United States' commitment to rule of law also raised questions about how much our legal system should commit itself to enabling foreign governments to demand access to private communications and other data. As Levy asked at the time:

"Should the United States allow access to stored keys to free-speech—challenged nations like Singapore, or China? And would France, Egypt, Japan, and other countries be happy to let their citizens use products that allowed spooks in the United States to decipher conversations but not their own law enforcement and intelligence agencies?"

Rosenstein attempts to paint over this problem by pointing out that American-based technology companies have cooperated in some respects with other countries' government demands—typically over issues like copyright infringement or child pornography rather than digital-security technologies like encryption. "Surely those same companies and their engineers could help American law enforcement officers enforce court orders issued by American judges, pursuant to American rule of law principles," he says.

Sure, American companies, like companies everywhere, have complied as required with government demands designed to block content deemed illegal in the countries where they operate. But demanding that these companies meet content restrictions—which itself at times also raises international rule-of-law issues—is a wholly separate question from requiring companies to enable law enforcement everywhere to obtain whatever information they want regarding whatever you do on your phone or on the internet. This is particularly concerning when it comes to foreign governments' demands for private content and personal information, which might include providing private information about dissidents in unfree or "partly free" countries whose citizens must grapple with oppressive regimes.

Technology companies aren't just concerned about money—if they were, it would be cheaper to leave out digital-security measures than to invent and install new ones (such as Apple's 3D-face-recognition technology set to be deployed in its new iPhone X). Companies build these protections not just to achieve a better bottom line but also to earn the trust of citizens. That's why Apple resists pressure, both from foreign governments and from the U.S. government, to develop tools that governments—and criminals—could use to turn my iPhone against me. This matters even more in 2017 and beyond—because no matter how narrowly a warrant or wiretap order is written, access to my phone and other digital devices is access to more or less everything in my life. The same is true for most other Americans these days.

Rosenstein is certainly correct to have said "there is no constitutional right to sell warrant-proof encryption"—but there absolutely is a constitutional right to write computer software that encrypts my private information so strongly that government can't decrypt it easily. (Or at all.) Writing software is generally understood to be presumptively protected expression under the First Amendment. And, of course, one needn't sell it—many developers of encryption tools have given them away for free.

What's more, our government's prerogative to seek information pursuant to a court-issued order or warrant has never been understood to amount to a "constitutional right that every court order or search warrant be successful." It's common in our law-enforcement culture—of which Rosenstein is unquestionably a part and partisan—to invert the meaning of the Constitution's limits on what our government can do, so that law-enforcement procedures under the Fourth and Fifth Amendments are interpreted as a right to investigatory success.

We've known this aspect of the encryption debate for a long time, and you don't have to be a technologist to understand the principle involved. Levy quotes Jerry Berman, then of the Electronic Frontier Foundation and later the founder of the Center for Democracy and Technology, on the issue: "The idea that government holds the keys to all our locks, even before anyone has been accused of committing a crime, doesn't parse with the public."

As Berman bluntly sums it up, "It's not America."

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.


Posted on Net Neutrality Special Edition - 17 July 2017 @ 3:28am

The FCC Needs Your Quality Comments About Net Neutrality Today

from the please-comment dept

Today is the deadline for the first round of the FCC's comment period on its attempt to roll back the 2015 open internet "net neutrality" rules. The deadline is partly meaningless, because there's a second comment period that is technically to respond to earlier comments -- but allows you to just file more comments. However, it is still important to make your voice heard no matter which side you're on. We'll be posting our own comments later today, but first, we wanted to share Mike Godwin's thoughtful discussion on why you should comment and why you should provide a thoughtful, careful "quality" comment, which he first posted to the R-Street blog, but which is being cross posted here.

If you count by numbers alone, net-neutrality activists have succeeded in their big July 12 push to get citizens to file comments with the Federal Communications Commission. As I write this, it looks as if 8 million or more comments have now been filed on FCC Chairman Ajit Pai's proposal to roll back the expansive network-neutrality authority the commission asserted under its previous chairman in 2015.

There's some debate, though, about whether the sheer number of comments—a number unprecedented not only for the FCC but for any federal agency—is a thing that matters. I think it does, but not in any simple way. If you look at the legal framework under which the FCC is authorized to regulate, you see that the commission has an obligation to open its proposed rulemakings (or revisions or repeals of standing rules) for public comments. In the internet era, of course, this has meant enabling the public (and companies, public officials and other stakeholders) to file online. So naturally enough, given the comparative ease of filing comments online, controversial public issues are going to generate more and more public comments over time. Not impossibly, this FCC proceeding—centering as it does on our beloved public internet—marks a watershed moment, after which we'll see increasing flurries of public participation on agency rulemakings.

Columbia University law professor Tim Wu—who may fairly be considered the architect of net neutrality, thanks to his having spent a decade and a half building his case for it—tweeted July 12 that it would be "undemocratic" if the commission ends up "ignoring" the (as of then) 6.8 million comments filed in the proceeding.

But a number of critics immediately pointed out, correctly, that the high volume of comments (presumed mostly to oppose Pai's proposal) doesn't entail that the commission bow to the will of any majority or plurality of the commenters.

I view the public comments as relevant, but not dispositive. I think Wu overreaches to suggest that ignoring the volume of comments is "undemocratic." We should keep in mind that there is nothing inherently or deeply democratic about the regulatory process – at least at the FCC. (In fairness to Wu, he could also mean that the comments need to be read and weighed substantively, not merely be tallied and dismissed.)

But I happen to agree with Wu that the volume of comments is relevant to regulators, and that it ought to be. Chairman Pai (whose views on the FCC's framing of net neutrality as a Title II function predate the Trump administration) has made it clear, I think, that quantity is not quality with regard to comments. The purpose of saying this upfront (as the chairman did when announcing the proposal) is reasonably interpreted by Wu (and by me and others) as indicating that he believes the commission is at liberty to regulate in a different way from what a majority (or plurality) of commenters might want. Pai is right to think this, I strongly believe.

But the chairman also has said he wants (and will consider more deeply) substantive comments, ideally based on economic analysis. This seems to me to identify an opportunity for net-neutrality advocates to muster their own economists to argue for keeping the current Open Internet Order or modifying it more to their liking. And, of course, it's also an opportunity for opponents of the order to do the same.

But it's important for commenters not to miss the forest for the trees. The volume of comments both in 2014 and this year (we can call this "the John Oliver Effect") has in some sense put net-neutrality advocates in a bind. Certainly, if there were far fewer comments (in number alone) this year, it might be interpreted as showing declining public concern over net neutrality. Obviously, that's not how things turned out. So the net-neutrality activists had to get similar or better numbers this year.

At the same time, advocates on all sides shouldn't be blinded by the numbers game. Given that the chairman has said the sheer volume of comments won't be enough to make the case for Title II authority (or other strong interventions) from the commission, it seems clear to me that while racking up a volume of comments is a necessary condition to be heard, it is not a sufficient condition to ensure the policy outcome you want.

Ultimately, what will matter most, if you want to persuade the commissioners one way or another on the net-neutrality proposal, is how substantive, relevant, thoughtful and persuasive your individual comments prove to be. My former boss at Public Knowledge, Gigi Sohn, a net-neutrality advocate who played a major role in crafting the FCC's current Open Internet Order, has published helpful advice for anyone who wants to contribute to the debate. I think it ought to be required reading for anyone with a perspective to share on this or any other proposed federal regulation.

If you want to weigh in on net neutrality and the FCC's role in implementing it—whether you're for such regulation or against it, or if you think it can be improved—you should follow Sohn's advice and file your original comments no later than Monday, July 17, or reply comments no later than Aug. 16. If you miss the first deadline, don't panic—there's plenty of scope to raise your issues in the reply period.

My own feeling is, if you truly care about the net-neutrality issue, the most "undemocratic" reaction would be to miss this opportunity to be heard.


Posted on Techdirt - 29 June 2017 @ 11:55am

Looking Forward To Next 20 Years Of A Post-Reno Internet

from the your-free-internet dept

Earlier this week, we wrote a little bit about the 20th anniversary of Reno v. ACLU and that key ruling's important place in internet history. Without that ruling, the internet today would be extraordinarily different -- perhaps even unrecognizable. Mike Godwin, while perhaps best known for making sure his own obituary will mention Hitler, also played an important role in that case, and wrote up the following about his experience with the case, and what it means for the internet.

The internet we have today could have been very different, more like the over-the-air broadcast networks that still labor under broad federal regulatory authority while facing declining relevance.

But 20 years ago this week, the United States made a different choice when the U.S. Supreme Court handed down its 9-0 opinion in Reno v. American Civil Liberties Union, the case that established how fundamental free-speech principles like the First Amendment apply to the internet.

I think of Reno as "my case" because I'd been working toward First Amendment protections for the internet since my first days as a lawyer—the first staff lawyer for the Electronic Frontier Foundation (EFF), which was founded in 1990 by software entrepreneur Mitch Kapor and Grateful Dead lyricist John Perry Barlow. There are other lawyers and activists who feel the same possessiveness about the Reno case, most with justification. What we all have in common is the sense that, with the Supreme Court's endorsement of our approach to the internet as a free-expression medium, we succeeded in getting the legal framework more or less right.

We had argued that the internet—a new, disruptive and, to a large extent, unpredictable medium—deserved not only the free-speech guarantees of the traditional press, but also the same freedom of speech that each of us has as an individual. The Reno decision established that our government has no presumptive right to regulate internet speech. The federal government and state governments can limit free speech on the internet only in narrow types of cases, consistent with our constitutional framework. As Chris Hanson, the brilliant ACLU lawyer and advocate who led our team, recently put it: "We wanted to be sure the internet had the same strong First Amendment standards as books, not the weaker standards of broadcast television."

The decision also focused on the positive benefits this new medium had already brought to Americans and to the world. As one of the strategists for the case, I'd worked to frame this part of the argument with some care. I'd been a member of the Whole Earth 'Lectronic Link (the WELL) for more than five years and of many hobbyist computer forums (we called them bulletin-board systems or "BBSes") for a dozen years. In these early online systems—the precursors of today's social media like Facebook and Twitter—I believed I saw something new, a new form of community that encompassed both shared values and diversity of opinion. A few years before Reno v. ACLU—when I was a relatively young, newly minted lawyer—I'd felt compelled to try to figure out how these new communities work and how they might interact with traditional legal understandings in American law, including the "community standards" relevant to obscenity law and broadcasting law.

When EFF, ACLU and other organizations, companies, and individuals came together to file a constitutional challenge to the Communications Decency Act that President Bill Clinton signed as part of the Telecommunications Act of 1996, not everyone on our team saw this issue the way I did, at the outset. Hanson freely admits that "[w]hen we decided to bring the case, none of [ACLU's lead lawyers] had been online, and the ACLU did not have a website." Hanson had been skeptical of the value of including testimony about what we now call "social media" but more frequently back then referred to as "virtual communities." As he puts it:

"I proposed we drop testimony about the WELL — the social media site — on the grounds that the internet was about the static websites, not social media platforms where people communicate with each other. I was persuaded not to do that, and since I was monumentally wrong, I'm glad I was persuaded."

Online communities turned out to be vastly more important than many of the lawyers first realized. The internet's potential to bring us together meant just as much as the internet's capacity to publish dissenting, clashing and troubling voices. Justice John Paul Stevens, who wrote the Reno opinion, came to understand that community values were at stake, as well. In early sections of his opinion, Justice Stevens dutifully reasons through traditional "community standards" law, as would be relevant to obscenity and broadcasting cases. He eventually arrives at a conclusion that acknowledges that a larger community is threatened by broad internet-censorship provisions:

"We agree with the District Court's conclusion that the CDA places an unacceptably heavy burden on protected speech, and that the defenses do not constitute the sort of 'narrow tailoring; that will save an otherwise patently invalid unconstitutional provision. In Sable, 492 U. S., at 127, we remarked that the speech restriction at issue there amounted to ' 'burn[ing] the house to roast the pig.' ' The CDA, casting a far darker shadow over free speech, threatens to torch a large segment of the Internet community."

The opinion's recognition of "the Internet community" paved the way for the rich and expressive, but also divergent and sometimes troubling internet speech and expression we have today.

Which leaves us with the question: now that we've had two decades of experience under a freedom-of-expression framework for the internet—one that has informed not just how we use the internet in the United States but also how other voices around the world use it—what do we now need to do to promote "the Internet community"?

In 2017, not everyone views the internet as an unalloyed blessing. Most recently, we've seen concern about whether Google facilitates copyright infringement, whether Twitter's political exchanges are little more than "outrage porn" and whether Facebook enables "hate speech." U.K. Prime Minister Theresa May, who is almost exactly the same age I am, seems to view the internet primarily as an enabler of terrorism.

Even though we're now a few decades into the internet revolution, my view is that it's still too early to make the call that the internet needs more censorship and government intervention. Instead, we need more protection of the free expression and online communities that we've come to expect. Part of that protection may come from some version of the network neutrality principles currently being debated at the Federal Communications Commission, although it may not be the version in place under today's FCC rules.

In my view, there are two additional things the internet community needs now. The first is both legal and technological guarantees of privacy, including through strong encryption. The second is universal access—including for lower-income demographics and populations in underserved areas and developing countries—that would enable everyone to participate fully, not just as consumers but as contributors to our shared internet. For me, the best way to honor the 40th anniversary of Reno v. ACLU will be to make sure everybody is here on the internet to celebrate it.

Mike Godwin (mnemonic@gmail.com) is a senior fellow at R Street Institute. He formerly served as staff counsel for the Electronic Frontier Foundation and as general counsel for the Wikimedia Foundation, which operates Wikipedia.


Posted on Techdirt - 26 April 2017 @ 9:29am

Here Comes The Attempt To Reframe Silicon Valley As Modern Robber Barons

from the don't-buy-it dept

It's difficult for me to read Jonathan Taplin's cri de coeur about Google and other technology companies that have come to dominate the top tier of successful American corporations without wincing in sympathy on his behalf.

But the pain I feel is not grounded in Taplin's certainty that something amoral, libertarian and unregulated is undermining democracy. Instead, it's in Taplin's profound misunderstanding of both the innovations and social changes that have made these companies not merely successful but also—for most Americans—vastly useful in enabling people to stay connected, express themselves and find the goods and services (and, even more importantly, communities) they need.

"It is impossible to deny that Facebook, Google and Amazon have stymied innovation on a broad scale," Taplin argues in his screed. He wants Google to divest itself of DoubleClick, in theory because the search engine would be much better if it were unable to generate profits from digitized ad services. He wants Facebook to unload WhatsApp, because the world was much better when connected citizens in the developing world had to pay 10 cents for each SMS message they sent. None of this really amounts to reform and, of course, such "reforms" wouldn't touch companies like Apple or Microsoft in the least.

What Taplin really wants isn't to reform but to reframe. He wants us to understand current tech-company leaders as evil, or at least amoral and out of control. Toward this end, he begins his new book (a much more extended version of his Times screed) by ominously quoting Facebook's Mark Zuckerberg: "Move fast and break things. Unless you are breaking stuff, you aren't moving fast enough."

Despite his misreading of the underlying technologies shaping today's digital world, Taplin—founding director and now director emeritus of the University of Southern California's Annenberg Innovation Lab—is no dummy. He knows that if he asks ordinary internet users whether they hate or love Google or Amazon or Facebook (or whether they'll willingly part with their new iPhones) he's not going to get a lot of buy-in. Even under a hypothetical President Bernie Sanders, regulating Google as a monopoly wouldn't be a meat-and-potatoes issue.

Instead, Taplin creates a counter-narrative in which American technology successes (with the notable exception of Microsoft) represent the kind of rapacious octopus-like capitalism so often caricatured by cartoonists like Thomas Nast. Google and Facebook may not hurt me in particular, but the theory he offers is that they somehow hurt America in the abstract. Taplin essentially reframes American tech success as a retelling of the oil, railroad, banking and telegraph robber-baron trusts of the 19th and early 20th centuries.

But the very tech companies whose success Taplin is absolutely certain is anti-democratic were built on infrastructure and resources that have been heavily regulated under federal law throughout his (and my) lifetime. We may disagree about what the regulations should be, but there's little disagreement that there's already a regulatory framework. The regulation of monopoly infrastructures—telephone and telegraph networks, in particular—was what made it possible to refrain from regulating what you said or did on those networks. Regulation at the "wire" level of the infrastructure—and at various technical levels above that—created the space for today's innovative services that provide near-instantaneous access to, potentially, all the information in the world and all the people with whom you would want to stay in touch.

Search engines and other digital tools are, of course, highly disruptive to industries whose traditional model involved having school-age kids hawking ink and wood pulp on street corners. Like Taplin, I still believe newspaper journalism is essential to democracy. Indeed, I read Taplin's op-ed early Sunday morning because I subscribe to the digital edition of The New York Times. We must continue to explore new ways to make this necessary journalism not merely survive, but thrive.

It also bears noting that Taplin never mentions Craig Newmark or Craigslist in his screed against Google, even though, if you buy the fundamentals of Taplin's argument, Craigslist did far more to erode daily newspapers' advertising revenue than Google ever has. Yet Newmark—like most of the other successful tech moguls Taplin lumps together into a sort of secret-handshake techno-libertarian fraternity—actually gives money to Poynter, ProPublica and other enterprises that actively respond to the very real problem of very fake news.

A little research into the history of scientific discovery puts even the scary Zuckerberg quote about "breaking stuff" in a different light. The philosopher Karl Popper opens his essential book Conjectures and Refutations with two quotations: "Experience is the name every one gives to their mistakes," from Oscar Wilde, and "Our whole problem is to make the mistakes as fast as possible," from the physicist John Archibald Wheeler.

That sentiment—to be adventurous, to risk things, to learn quickly from making mistakes quickly—is, I believe, exactly what Zuckerberg was getting at. It also extends to making mistakes in our search for a new business model for journalism. But this shouldn't include Jonathan Taplin's great big mistake of looking into the digital future and seeing only places we've been before.

Mike Godwin (@sfmnemonic) is a Senior Fellow at R Street Institute. Godwin was named as a Freedom Forum Fellow at the Freedom Forum Media Studies Center in 1997 and may have once said something about Nazis online for which he will always be remembered.


