A Book Review Of Code And Other Laws Of Cyberspace

from the more-timely-than-you-might-think dept

Twenty years ago, Larry Lessig published the original version of his book Code and Other Laws of Cyberspace. A few years later, he put out a substantially updated version called Code 2.0. Both versions are classics and important pieces of the history of the internet -- and are especially interesting to look at now that issues of how much "code" is substituting for "law" have become central to so many debates. When the original book was published, in 1999, Mike Godwin wrote a review for a long-defunct journal called E-Commerce Law Weekly. Given the importance of these issues today, we're republishing a moderately updated version of Godwin's original 1999 review. It's interesting to view this review through the lens of the past 20 years of history that we have now lived through.

Imagine that you could somehow assemble the pioneers of the Internet and the first political theorists of cyberspace in a room and poll them as to what beliefs they have in common. Although there would be lots of heated discussion and no unanimity on any single belief, you might find a majority could get behind something like the following four premises:

  1. The Internet does not lend itself to regulation by governments.
  2. The proper way to guarantee liberty is to limit the role of government and to prevent government from acting foolishly with regard to the Internet.
  3. The structure of the Internet—the "architecture" of cyberspace, if you will—is politically neutral and cannot easily be manipulated by government or special interests.
  4. The expansion of e-commerce and the movement of much of our public discourse to the online world will increase our freedom both as citizens and as consumers.
But what if each of these premises is at best incomplete and at worst false or misleading? (Leave aside the likelihood that they're not entirely consistent with one another.) What if the architecture of the Net can be changed by government and by the dynamism of e-commerce? What if the very developments that enhance electronic commerce also undermine political freedom and privacy? The result might be that engineers and activists who are concerned about preserving democratic values in cyberspace are focusing their efforts in the wrong direction. By viewing governmental power as the primary threat to liberty, autonomy, and dignity, they'd blind themselves to the real threats—threats that it may require government to block or remedy.

It is precisely this situation in which Harvard law professor Lawrence Lessig believes we find ourselves. In his new book Code and Other Laws of Cyberspace (Basic Books, 1999), Lessig explores at length his thesis that the existing accounts of the political and legal framework of cyberspace are incomplete and that their very incompleteness may prevent us from preserving the aspects of the Internet we value most. Code is a direct assault on the libertarian perspective that informs much Internet policy debate these days. What's more, Lessig knows that he's swimming against the tide here, but he nevertheless takes on in Code a project that, although focused on cyberspace, amounts to nothing less than the relegitimization of the liberal (in the American sense) philosophy of government.

It is a measure of Lessig's thoroughness and commitment to this project that he mostly succeeds in raising new questions about the proper role of government with regard to the Net in an era in which, with the exception of a few carveouts like Internet gambling and cybersquatting, Congress and the White House have largely thrown up their hands when it comes to Internet policy. While this do-nothingism is arguably an improvement over the kind of panicky, ill-informed interventionism of 1996's Communications Decency Act (which Lessig terms "[a] law of extraordinary stupidity" that "practically impaled itself on the First Amendment"), it also falls far short, he says, of preserving fundamental civil values in a landscape reshaped by technological change.

Architecture Is Not Static

To follow Lessig's reasoning in Code, you need to follow his terminology. This is not always easy to do, since the language by which he describes the Internet as it is today and as it might someday become is deeply metaphorical. Perhaps the least problematic of his terms is "architecture," which Lessig borrows from Mitchell Kapor's Internet aphorism that "architecture is politics." Although his use of the term is a little slippery, Lessig mostly means for us to understand the term "architecture" to refer to both (a) the underlying software and protocols on which the Internet is based and (b) the kinds of applications that may run "on top of that Internet software infrastructure." And while the first kind of architecture is not by itself easily regulable, Lessig says, the second kind might make it so—for example, by incorporating the various monitoring and identification functions that already exist on proprietary systems and corporate intranets.

More difficult to get a handle on is his use of the word "code," which seems to expand and contract from chapter to chapter. At some bedrock level, Lessig means "code" to signify the software and hardware that make up the Internet environment—akin to the sense of "code" that programmers use. But he is also fond of metaphoric uses of "code" that muddy the waters. "Code is law," Lessig writes at several points, by which we may take him to mean that the Internet's software constrains and shapes our behavior with as much force as law does. And of course the book's title equates code and law.

Elsewhere, however, he writes that code is something qualitatively different from law in that it does not derive from legislative or juridical action or community norms, yet may affect us more than laws or norms do, while providing us less opportunity for amendment or democratic feedback. It does not help matters when he refers to things like bicycle locks as "real-world code." But if you can suspend your lexical disbelief for a while, the thrust of Lessig's argument survives any superficial confusions wrought by his terminology.

That argument depends heavily on the first point Lessig makes about Internet architecture, which is simply that it's malleable—shapeable by human beings who may wish to implement an agenda. The initial architecture of the Internet, he says correctly, emphasized openness and flexibility but provided little support for identifying or authenticating actual individuals or monitoring them or gathering data about them. "On the Internet it is both easy to hide that you are a dog and hard to prove that you are not," Lessig writes. But this is a version of the Internet, he says, that is already being reshaped by e-commerce, which has reasons for wanting to identify buyers, share financial data about them, and authenticate the participants in transactions. At the center of e-commerce-wrought changes is the technology of encryption, which, while it can render communications and transactions private in transit, also enables an architecture of identification (through, e.g., encryption-based certification of identity and digital signatures).

The key to the creation of such an architecture, Lessig writes, is not that a government will require people to hold and use certified IDs. Instead, he writes, "The key is incentives: systems that build the incentives for individuals voluntarily to hold IDs." Lessig adds, "When architectures accommodate users who come with an ID installed and make life difficult for users who refuse to bear an ID, certification will spread quickly."
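
To make that "architecture of identification" concrete, here is a minimal sketch of the kind of building block such a system would rest on: a digital signature that binds an identity claim to a transaction so that a merchant (or a government) can verify it later. This is my illustration, not anything from the book; the names and attributes are hypothetical, and it assumes the Python "cryptography" library is installed.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# A certifying authority (hypothetical) issues the user a signing key;
# in a real deployment the public key would travel inside a certificate.
user_key = ed25519.Ed25519PrivateKey.generate()
user_public_key = user_key.public_key()

# The user signs an identity-bearing claim attached to a purchase.
claim = b"holder=alice;age_over_18=true;txn=9137"
signature = user_key.sign(claim)

# The merchant, or anyone else who obtains the public key, can verify that
# the claim is authentic and untampered, which is exactly what makes the
# same mechanism useful both for commerce and for surveillance.
try:
    user_public_key.verify(signature, claim)
    print("claim verified")
except InvalidSignature:
    print("claim rejected")
```

Once building blocks like this are routine, the incentives Lessig describes do the rest: sites simply work better for users who show up with a verifiable ID attached.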

But even if you don't believe that e-commerce alone will establish an architecture of identification, he writes, there are reasons to believe that government will want to help such an architecture along. After all, a technology that enables e-commerce merchants to identify you and authorize your transactions may also have an important secondary usefulness to a government that wants to know where you've been and what you've been up to on the Internet.

And if the government wants to change the technological architecture of the Internet, there is no reason to believe it would not succeed, at least to some extent. After all, Lessig says, the government is already involved in mandating changes in existing architectures in order to effectuate policy. Among the examples of this kind of architectural intervention, he says, are (a) the Communications Assistance for Law Enforcement Act of 1994, in which Congress compelled telephone companies to make their infrastructure more conducive to successful wiretaps, (b) Congress's requiring the manufacturers of digital recording devices to incorporate technologies that limit the extent to which perfect copies can be made, and (c) the requirement in the Telecommunications Act of 1996 that the television industry design and manufacture a V-chip to facilitate individuals' ability to automatically block certain kinds of televised content.

With an identification architecture in place, Lessig argues, what previously might seem to be an intractable Internet-regulation problem, like the prohibition of Internet gambling, might become quite manageable.

The Government and Code

An account of social activity on the Internet that deals solely with the legal framework is inadequate, Lessig argues. In Lessig's view, the actual "regulators" of social behavior come from four sources, each of which has its own dynamic. Those sources of social constraints are the market, the law, social norms, and architecture (here "architecture" means the constructed environment in which human beings conduct their activities). "But these separate constraints obviously do not simply exist as givens in a social life," Lessig writes. "They are neither found in nature nor fixed by God," he writes, adding that each constraint "can be changed, although the mechanism of changing each is complex." The legal system, he says, "can have a significant role in this mechanics."

So can the open-source movement, which Lessig refers to as "open code." The problem with "architectural" constraints, and the thing that distinguishes them from any other kind, is that they do not depend on human awareness or judgment to function. You may choose whether or not to obey a law or a social norm, for example, and you may choose whether or not to buy or sell something in the market, but (to use the metaphor) you cannot enter a building through a door if there is no door there, and you cannot open a window if there is no window. Open code—software that is part of a code "commons," that is not owned by any individual or business, and that can be inspected and modified—can provide a "check on state power," Lessig writes, insofar as it makes any government-mandated component of the architecture of the Net both visible to, and (potentially) alterable by, citizens. Open code, which still makes up a large part of the Internet infrastructure, is thus a way of making architecture accountable and subject to democratic feedback, he argues. "I certainly believe that government must be constrained, and I endorse the constraints that open code imposes, but it is not my objective to disable government generally," Lessig writes. But, he adds, "some values can be achieved only if government intervenes."

A Jurisprudence of Cyberspace?

One way that government intervenes, of course, is through the court system. And as Lessig notes, it may be the courts that are first called upon to interpret and preserve our social values when technology shifts the effective balance of rights for individuals. A court faced with such a shift often must engage in "translation" of longstanding individual rights into a new context, he says.

Take wiretapping, for example. Once upon a time, it was not so easy for law-enforcement agents to get access to private conversations. But once telephones had become commonplace and, as Lessig puts it, "life had just begun to move onto the wires," the government began to tap phones in order to gather evidence in criminal investigations. Does wiretapping raise Fourth Amendment concerns? The Supreme Court first answered this question in Olmstead v. United States (1928)—the answer for the majority was that wiretapping, at least when the tap was placed somewhere other than on a tappee's property, did not raise Fourth Amendment issues since the precise language of the Fourth Amendment does not address the non-trespassory overhearing of conversations. That is one mode of translation, Lessig writes—the court preserved the precise language of the Fourth Amendment in a way that contracted the scope of the zone of privacy protected by the Fourth Amendment.

Another, and arguably preferable, approach, Lessig says, would be to follow Justice Louis Brandeis's dissent in Olmstead—an approach that preserves the scope of the privacy zone while departing from a strict adherence to the literal language of the Amendment. Brandeis's dissent, arguing that the capture of private conversations does implicate the Fourth Amendment, was adopted by the Supreme Court forty years after Olmstead.

But what if technology raises a question for a court for which it is not clear which interpretative choice comes closer to preserving or "translating" the values inherent in the Bill of Rights? Borrowing from contract law, Lessig calls such a circumstance a "latent ambiguity." He further suggests—this is perhaps the most unfashionable of his arguments—that, instead of simply refusing to act and referring the policy question to the legislature, a court might simply attempt to make the choice that best preserves constitutional values, in the hope that its choice will at minimum "spur a conversation about these fundamental values...to focus a debate that may ultimately be resolved elsewhere."

Internet Alters Copyright and Privacy

All this begins to seem far afield from the law of cyberspace, but Lessig's larger point is that the changes wrought by the Internet and related technologies are likely to raise significant "latent ambiguity" problems. He focuses on three areas in which technologies raise important questions about values but for which a passive or overliteral "translation" approach would not be sufficient. Those areas are intellectual property, privacy, and freedom of speech. In each case, the problem Lessig sees is one that is based on "private substitutes for public law"—private, non-governmental decision making that undercuts the values the Constitution and Bill of Rights were meant to preserve.

With intellectual property, and with copyright in particular, technological changes raise new problems that the nuanced established legal balances built into the law do not address. Lessig challenges the long-standing assertion, in Internet circles, at least, that the very edifice of copyright law is likely to crumble in the era of the Internet, which enables millions of perfect copies of a creative work to be duplicated and disseminated for free, regardless of whether the copyright holder has granted anyone a license. In response to that perceived threat, Lessig observes, the copyright holders have moved to force changes in technology and changes in the law.

As a result, technologically implemented copyright-protection and copyright-management schemes are coming online, and the government has already taken steps to prohibit the circumvention of such schemes. This has created a landscape in which the traditional exercise of one's rights to "fair use" of another's work under the Copyright Act may become meaningless. The fact that one technically has a right to engage in fair use is of no help when one cannot engage in any unauthorized copying. Complicating this development, Lessig believes, is the oncoming implementation of an ID infrastructure on the Internet, which may make it impossible for individuals to engage in anonymous reading.

This bears some explaining. Consider that if you buy a book in a bookstore with cash, or if you read it in the library, nobody knows what you're buying and reading. By contrast, a code-based licensing scheme in which you identify yourself online in order to obtain or view a copy of a copyrighted work may undercut your anonymity, especially if there's an Internet I.D. infrastructure already in place. The technology changes are "private" ones—they do not involve anything we'd call "state action" and thus do not raise what we normally would call a constitutional problem—but they affect public values just as deeply as traditional constitutional problems do.
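
A toy sketch, of my own devising rather than Lessig's, makes the point about anonymous reading: any licensing gate that checks a certified ID before releasing a work is also, as a by-product, a log of who read what and when. The function and field names below are hypothetical.

```python
import datetime

ACCESS_LOG = []  # the by-product that erodes anonymous reading


def serve_licensed_copy(work_id, verified_user_id):
    """Release a licensed work only to a reader presenting a certified ID."""
    if verified_user_id is None:
        raise PermissionError("no certified ID presented; access denied")
    # The very check that enforces the license also records the reader.
    ACCESS_LOG.append((verified_user_id, work_id, datetime.datetime.now()))
    return f"<contents of {work_id}>"


serve_licensed_copy("code-and-other-laws", verified_user_id="cert:alice")
print(ACCESS_LOG)  # [("cert:alice", "code-and-other-laws", <timestamp>)]
```

There is no technical need to keep such a log, but nothing in the architecture discourages it either, which is the review's point about private choices carrying public consequences.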

A similar argument can be made about how the Internet alters our privacy rights and expectations. Because the Internet both makes our backgrounds more "searchable" and our current behavior more monitorable, Lessig reasons, the privacy protections in our Bill of Rights may become meaningless. Once again, when the searching and monitoring is done by someone other than the government, it means that the "state action" trigger for invoking the Bill of Rights is wholly absent.

What's more, such searching and monitoring, whether done by the government or otherwise, may be invisible to the person being investigated. You will have lost your right to any meaningful privacy and you will not even know it is gone until it is too late. Lessig's analysis of the problem here is convincing, even though his proposed solution, a "property regime" for personal data that would replace today's "liability regime," is deeply problematic. This is partly because it would transmute invasions of privacy into property crimes—aren't the jails full enough without adding gossips to the inmates?—and partly because the distinction he draws between property regimes and liability regimes as to which benefits the individual more is (in my view) illusory in practical terms.

Perhaps Lessig's most controversial position with regard to the threat of private action to public values is the one he has explored previously in a number of articles for law reviews and popular publications—the argument that some version of the Communications Decency Act—perhaps one that required minors to identify themselves as such so as to be blocked from certain kinds of content—is less dangerous to freedom of speech than is the private use of technologies that filter content. It is important to understand that Lessig is not actually calling for a new CDA here, although that nuance might escape some legislators.

Lessig interprets such a version of the CDA, and the architecture that might be created by it, as a kind of "zoning," which he sees as preferable to private, non-legislated filtering because, he says, zoning "builds into itself a system for its limitation. A site cannot block someone from the site without that individual knowing it." By contrast, he says, a filtering regime such as the (now widely regarded as moribund) Platform for Internet Content Selection (PICS) enables all sorts of censorship schemes, not just nominally child-protecting ones. PICS, because it can scale to function at the server or even network level, can be used by a government to block, say, troubling political content. And because PICS support can be integrated into the architecture of the Internet, it could be used to create compelling private incentives for people to label their Internet content. Worse, he says, such blocking would be invisible to individuals.

Lessig's Arguments Hard to Harmonize

There are many problems with Lessig's analysis here, and while it would take more space than I have here to discuss them in depth, I can at least indicate what some of the problems are. First of all, it's not at all clear that one could not create a "zoning" solution that kept the zoning-excluded users from knowing—directly at least—that they have been excluded. Second, if a zoning scheme works to exclude users identified as kids, is there any reason to think it would not work equally well in excluding users identified as Iranians or Japanese or Americans? Don't forget that incipient I.D. architecture, after all.

Third, a PICS-like scheme, implemented at the server level or higher, is actually less threatening to freedom of speech than key-word or other content filtering at the server level or higher. PICS, in order to function, requires that some high percentage of the content producers in the world buy into the self-labeling scheme before a repressive government could use it to block its citizens from disapproved content. Brute-force key-word filtering, by contrast, does not require anyone else's cooperation—a repressive government could choose its own PICS-independent criteria and implement them at the server level or elsewhere.
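
To make the contrast concrete, here is a small sketch (mine, not Godwin's or Lessig's) of why the two approaches differ: label-based blocking in the PICS style has nothing to act on unless publishers cooperate by self-rating, while brute-force keyword filtering needs no cooperation at all. The categories and keywords are invented for illustration.

```python
BLOCKED_CATEGORIES = {"politics/dissent"}          # needs publisher-supplied labels
BLOCKED_KEYWORDS = {"protest", "demonstration"}    # needs nothing but the text


def blocked_by_labels(page_labels):
    """PICS-style blocking: powerless when a page carries no self-rating."""
    return bool(page_labels) and bool(BLOCKED_CATEGORIES & set(page_labels))


def blocked_by_keywords(page_text):
    """Brute-force filtering: works on any page, cooperation not required."""
    text = page_text.lower()
    return any(word in text for word in BLOCKED_KEYWORDS)


unlabeled_page = {"labels": None, "text": "Citizens gather downtown to protest."}
print(blocked_by_labels(unlabeled_page["labels"]))   # False: no label to act on
print(blocked_by_keywords(unlabeled_page["text"]))   # True: blocked anyway
```

A government intent on censoring does not have to wait for the world to adopt a labeling vocabulary; it can simply filter.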

Fourth, there's nothing inherent in the architecture of a PICS-style scheme—in the unlikely event that such a scheme were implemented—or any other server-level filtering scheme that requires that users not be notified that blocking took place. In short, you could design that architecture so that its operation is visible.

Lessig is right to oppose the implementation of anything that might be called an architecture of filtering. But one wonders why he is so intent on saying that zoning is better than filtering when both models can operate as tools of repression. Lessig answers that question by letting us know what his real worry is, which is that individuals with filtering tools will block out those who need to be heard. Says Lessig: "[F]rom the standpoint of society, it would be terrible if citizens could simply tune out problems that were not theirs.... We must confront the problems of others and think about problems that affect our society. This exposure makes us better citizens." His concern is that we will use filtering tools to bar us from that salutary exposure.

Leaving aside the question of whether his value here is one we should embrace—it is hard to harmonize it with what Brandeis in his Olmstead dissent termed "the right to be let alone"—it seems worth noting that the Internet does not really stand as evidence for Lessig's assumption that people will use their new tools to avoid confrontation with those holding different opinions. Indeed, much of the evidence seems to point the other way, as anyone who has ever viewed a long-running Internet flame war or inspected dueling Web sites can attest. Nothing forces combatants on the Internet to stay engaged, but they do anyway. The fact is, we like to argue with each other—as Deborah Tannen has pointed out, we have embraced an "argument culture." Whether that culture is healthy is another question, of course.

But even if one disagrees with Lessig's analysis of certain particular issues, this does not detract from his main argument, which is that private decision making, enhanced by new technologies and implemented as part of the "architecture" of the Internet, may undercut the democratic values—freedom of speech, privacy, autonomy, access to information—at the core of our society. Implicit in his argument is that the traditional focus of civil libertarians, which is to challenge government interventions in speech and privacy arenas, may be counterproductive in this new context. If I read him right, Lessig is calling for a new constitutional philosophy, one rooted perhaps in Mill's essay On Liberty, in which government can function as a positive public tool to preserve from private encroachments the liberty values we articulated in the Constitution. Such a philosophy would require, however, a very imaginative "translation" of constitutional values indeed to get past the objection that the Bill of Rights is only about limiting "state action."

What Code is really about is (the author's perception of) the need for political liberals to put a positive face on the role of government without embracing statism or seeming to. Although this is clearly Lessig's project, he's pessimistic about its success—in the public debate about Internet policy, he complains, the libertarians have essentially won the field. What he would like to see, perhaps, is a constitutional structure in which something like the Bill of Rights could be invoked against challenges to personal liberty or autonomy, regardless of whether the challenges come from public or private sources. The ideology of libertarianism, he believes, will interpret the changes wrought by e-commerce and other private action as a given, like the weather. "We will watch as important aspects of privacy and free speech are erased by the emerging architecture of the panopticon, and we will speak, like modern Jeffersons, about nature making it so—forgetting that here, we are nature," he writes in a somewhat forlorn final chapter.

Lessig may be right in his gloomy predictions, but let us suppose that his worst fears are not realized and a new debate does begin about the proper role of government in cyberspace and about appropriate limitations on private crafting of the online architecture. If that happens, it may be that at least some of the thanks for that development will have to go to Lessig's Code.

In 1999, Mike Godwin (@sfmnemonic) was senior legal editor of E-Commerce Law Weekly and had just recently published Cyber Rights: Defending Free Speech in the Digital Age. Currently he is a senior fellow at R Street Institute.

Filed Under: code, larry lessig, law, mike godwin


Reader Comments

  1. Anonymous Coward, 7 Mar 2019 @ 2:24pm (flagged by the community)

    Zeran v. AOL was the canary in the coal mine for Section 230: Zeran was falsely accused of profiting from the Oklahoma City bombing, radio DJs repeated what they read online, and he was viciously harassed as a result.

    People had a choice of whether the internet was more important than Zeran's reputation and physical safety, and the internet won out. Right then and there, the die was cast. Almost all Section 230 cases since have posed the same dilemma, one still not resolved by our Supreme Court. Revenge Porn was the tipping point for opposition to Section 230 to gain steam.

    As for privacy, people surrendered that pretty willingly.

    As for copyright, we have patronage or a hobby where a thriving industry used to exist.

  2. Mason Wheeler (profile), 7 Mar 2019 @ 2:26pm

    Hmm... let's see.

    Code is a direct assault on the libertarian perspective that informs much Internet policy debate these days.

    I'm liking it already.

    Elsewhere, however, he writes that code is something qualitatively different from law in that it does not derive from legislative or juridical action or community norms, yet may affect us more than laws or norms do, while providing us less opportunity for amendment or democratic feedback. It does not help matters when he refers to things like bicycle locks as "real-world code."

    Hmm? This is not a particularly difficult metaphor to follow. I haven't even read the book, but just from this description--purportedly of something that doesn't make much sense--it makes perfect sense what Lessig is trying to say.

    The key to the creation of such an architecture, Lessig writes, is not that a government will require people to hold and use certified IDs. Instead, he writes, "The key is incentives: systems that build the incentives for individuals voluntarily to hold IDs." Lessig adds, "When architectures accommodate users who come with an ID installed and make life difficult for users who refuse to bear an ID, certification will spread quickly."

    But even if you don't believe that e-commerce alone will establish an architecture of identification, he writes, there are reasons to believe that government will want to help such an architecture along. After all, a technology that enables e-commerce merchants to identify you and authorize your transactions may also have an important secondary usefulness to a government that wants to know where you've been and what you've been up to on the Internet.

    Interesting concept, but it didn't really ever come to pass. I'm "identified" to websites today in essentially the same way I was identified 20 years ago: by my email address and a password, with no cryptographic certificate or similar technology involved.

    So can the open-source movement, which Lessig refers to as "open code."

    I'm glad that term never caught on. The term "open source software" dates to 1998, and I was already familiar with it before this book was published. Having multiple terms for the same thing just creates confusion at best and hair-splitting contention at worst.

    With intellectual property, and with copyright in particular, technological changes raise new problems that the nuanced established legal balances built into the law do not address. Lessig challenges the long-standing assertion, in Internet circles, at least, that the very edifice of copyright law is likely to crumble in the era of the Internet, which enables millions of perfect copies of a creative work to be duplicated and disseminated for free, regardless of whether the copyright holder has granted anyone a license. In response to that perceived threat, Lessig observes, the copyright holders have moved to force changes in technology and changes in the law.
    As a result, technologically implemented copyright-protection and copyright-management schemes are coming online, and the government has already taken steps to prohibit the circumvention of such schemes. This has created a landscape in which the traditional exercise of one's rights to "fair use" of another's work under the Copyright Act may become meaningless. The fact that one technically has a right to engage in fair use is of no help when one cannot engage in any unauthorized copying.

    He was spot-on with this one. That's still one of the biggest problems we're struggling with today. But...

    Complicating this development, Lessig believes, is the oncoming implementation of an ID infrastructure on the Internet, which may make it impossible for individuals to engage in anonymous reading.

    Thankfully, this part never happened.

    A similar argument can be made about how the Internet alters our privacy rights and expectations. Because the Internet both makes our backgrounds more "searchable" and our current behavior more monitorable, Lessig reasons, the privacy protections in our Bill of Rights may become meaningless. Once again, when the searching and monitoring is done by someone other than the government, it means that the "state action" trigger for invoking the Bill of Rights is wholly absent.

    This line of reasoning sounds familiar...

    Lessig's analysis of the problem here is convincing, even though his proposed solution, a "property regime" for personal data that would replace today's "liability regime," is deeply problematic. This is partly because it would transmute invasions of privacy into property crimes

    With the benefit of hindsight, is that really such a bad thing? Who wouldn't like seeing execs at Verizon, Comcast, Facebook, AT&T, and other abusive companies that trample on your privacy get sent to prison for it?

    But even if one disagrees with Lessig's analysis of certain particular issues, this does not detract from his main argument, which is that private decision making, enhanced by new technologies and implemented as part of the "architecture" of the Internet, may undercut the democratic values—freedom of speech, privacy, autonomy, access to information—at the core of our society. Implicit in his argument is that the traditional focus of civil libertarians, which is to challenge government interventions in speech and privacy arenas, may be counterproductive in this new context. If I read him right, Lessig is calling for a new constitutional philosophy, one rooted perhaps in Mill's essay On Liberty, in which government can function as a positive public tool to preserve from private encroachments the liberty values we articulated in the Constitution. Such a philosophy would require, however, a very imaginative "translation" of constitutional values indeed to get past the objection that the Bill of Rights is only about limiting "state action."

    It doesn't have to be particularly "imaginative." It only requires recognizing a simple truth: the rights that are meant to be protected in the Bill of Rights are rights that we believe in protecting. They were originally only applied to the government because originally only the government had the sort of wide-reaching power that was required to do real, widespread harm to them, but what they are, at the core, are things that We The People find so fundamentally important that we don't even trust ourselves, in the person of our elected representatives, to tamper with them. Now that the government is not the only ones with that kind of power anymore, our defenses of our values have to evolve to acknowledge this truth or those values risk becoming meaningless.

    What he would like to see, perhaps, is a constitutional structure in which something like the Bill of Rights could be invoked against challenges to personal liberty or autonomy, regardless of whether the challenges come from public or private sources.

    And why shouldn't he? Private oppression is, if anything, even more harmful because in a democratic system, at least the government is accountable to the people, while private entities are not.

    The ideology of libertarianism, he believes, will interpret the changes wrought by e-commerce and other private action as a given, like the weather. "We will watch as important aspects of privacy and free speech are erased by the emerging architecture of the panopticon, and we will speak, like modern Jeffersons, about nature making it so—forgetting that here, we are nature," he writes in a somewhat forlorn final chapter.

    This one is right on the money! When you try to talk to a libertarian about some sort of societal problem that their twisted ideology sees as beneficial, about half the time (at least!) they'll launch into some super-condescending lecture about how that's just the way things naturally are and trying to change it would 1) be futile, 2) be more trouble than it's worth, 3) just make things worse even if you succeeded, or 4) all of the above. It's one of the many things that makes libertarianism so disgusting and evil.

    Lessig may be right in his gloomy predictions, but let us suppose that his worst fears are not realized and a new debate does begin about the proper role of government in cyberspace and about appropriate limitations on private crafting of the online architecture. If that happens, it may be that at least some of the thanks for that development will have to go to Lessig's Code.

    Unfortunately, it hasn't yet. But many of the problems Lessig pointed out have become more and more prominent as time goes by, so maybe now's a good time to start?

  3. Anonymous Coward, 7 Mar 2019 @ 3:49pm

    Re: You’re never tired of being wrong

    First today the police reports, then the investigative journalism. Now the revenge porn. All you have to do is bring out that bogus presidential threat analogy and you will have a full John boy bullshit moral panic bingo.

  4. Rocky, 7 Mar 2019 @ 4:12pm

    Re:

    Let me summarize:

    Stupid people who believe anything they read or hear harassed some poor guy. The guy then sued AOL instead of going after the stupid mob - because of this 230 is flawed in your opinion.

    In essence, you are arguing that stupid people doing stupid shit are blameless and it's all the platform's fault.

  5. Christenson, 7 Mar 2019 @ 9:06pm

    Filtering/zones

    Oh my... we talk about filtering as if we mean all sorting of content into categories...

    But here i am, using Mike Masnick's choices of what to report on for Techdirt as a way of sorting out what is interesting. It is not, of course, absolute...I use multiple sources.. but I think we have to recognize that the question of filters/zones is one of who is the deciding agent. It's (mostly) fine if I filter for me (except I caught the measles last week!). But I sure as heck think any of the famous people and companies named or shamed on Techdirt should not be entrusted entirely with that decision.

    There's too much content...choices have to be made... but how???

  6. Anonymous Coward, 7 Mar 2019 @ 9:50pm

    Re: Re:

    Jhon's angry that his stupid shit can't get the SWAT teams to send Masnick to his pound-me-in-the-ass prison.

    He's got a thing for non-con, Jhon does.

  7. Anonymous Coward, 8 Mar 2019 @ 2:29am

    I've been reading Lessig since he published his first work (not that long ago). I'll say a little about myself. I'm a Navy vet, but I served during the Reagan years before the recent series of Military Actions that stirred up the Middle East.

    I've spent most of my adult life as a "Jeffersonian Liberal", and I arrived there through study of History as a hobby, and playing Historical table top games (pushing around miniature American Civil War and Napoleonic pewter figurines).

    As a Jeffersonian Liberal, I most compare with the Libertarian Party, but I vote the Person not the Party. I'm a Small Government, Fiscal Conservative, Social Liberal. As such, I've never really fit into the typical "Two Party" mold. I've learned the lessons of History though, and believe me when I tell you that as a Species, we have been down this road before. There is no Right Way to Implement Communism. It goes against Human Nature. We are far too selfish, and barely evolved enough to comprehend the consequences of our current reality, let alone attempt to implement a Fantasy Utopia.

    I'm out here, on the sidelines watching History being made, and I'm rooting for Love, Civility and Compromise.

  8. Scary Devil Monastery (profile), 8 Mar 2019 @ 2:54am

    Re: Filtering/zones

    "But here i am, using Mike Masnick's choices of what to report on for Techdirt as a way of sorting out what is interesting. It is not, of course, absolute...I use multiple sources.. but I think we have to recognize that the question of filters/zones is one of who is the deciding agent."

    Cognitive bias isn't exactly a new thing, nor does the internet significantly worsen the problem that most people tend to go on information someone told them in a pub/party/klan gathering, etc.

    If anything I believe that the internet has done the opposite. People are critical of unsubstantiated sources to a greater degree than ever before. And that's a massive step forward from the time when there was even more public trust in all the wrong sources.

  9. Rocky, 8 Mar 2019 @ 3:42am

    Re: Re: Filtering/zones

    If anything I believe that the internet has done the opposite. People are critical of unsubstantiated sources to a greater degree than ever before. And that's a massive step forward from the time when there was even more public trust in all the wrong sources.

    I have to disagree with you here. I would say that some people have become more critical of unsubstantiated sources - and it's usually the same people who have always applied critical thinking to some degree. The majority of people on the internet don't even reflect on what kind of information they consume and how it colors their thinking. And even if they critically reflect on the information, they still get colored by it (see the articles about FB moderators).

    Most people don't want to think critically because it may upset the familiar worldview they harbor.

    It has changed a bit for the better in the last few years, mostly due to all the reporting around the fake news pushed out to affect elections - but there will always be die-hard believers out there who will scoff at any tangible proof that they are wrong and proclaim it's fake.

  10. Anonymous Coward, 8 Mar 2019 @ 9:31am

    Re:

    I'm "identified" to websites today in essentially the same way I was identified 20 years ago: by my email address and a password, with no cryptographic certificate or similar technology involved.

    That's not necessarily true, though it may not be obvious. When you provide your email address to a site and it sends you an email to verify, the servers may be using each other's cryptographic certificates behind the scenes, for both DNSSEC and SMTP+TLS, and then the email may ask you to visit an HTTPS link.

    You may have provided a phone number to create your email account, and there may be some backend cryptography (likely not enough) there too. And it's possible you showed a government ID card to get that phone number (that varies by country and carrier).
