Computers are such an important part of our daily lives now that it's sometimes difficult to imagine how we could get along without them. Obviously, people do. But growing accustomed to having supercomputer capabilities at our fingertips all the time is much more than a luxury. We expect computers to get better and better at an astonishing (exponential) rate, but will we notice if and when that rate slows down? Here are just a few links on keeping up with -- or possibly exceeding -- the performance expectations that Moore's Law has instilled in us.
May 6th is the official Day Against DRM. I'm a bit late writing anything about it, but I wanted to highlight this great post by Parker Higgins about an aspect of DRM that is rarely discussed: how DRM makes us less safe. We've talked a lot lately about how the NSA and its surveillance efforts have made us all less safe, but that's also true for DRM.
DRM on its own is bad, but DRM backed by the force of law is even worse. Legitimate, useful, and otherwise lawful speech falls by the wayside in the name of enforcing DRM—and one area hit the hardest is security research.
Section 1201 of the Digital Millennium Copyright Act (DMCA) is the U.S. law that prohibits circumventing "technical measures," even if the purpose of that circumvention is otherwise lawful. The law contains exceptions for encryption research and security testing, but the exceptions are narrow and don’t help researchers and testers in most real-world circumstances. It's risky and expensive to find the limits of those safe harbors.
As a result, we've seen chilling effects on research about media and devices that contain DRM. Over the years, we've collected dozens of examples of the DMCA chilling free expression and scientific research. That makes the community less likely to identify and fix threats to our infrastructure and devices before they can be exploited.
That post also reminds us of Cory Doctorow's powerful speech about how DRM is the first battle in the war on general-purpose computing. The point there is that DRM is based on the faulty belief that we can take a key aspect of computing out of computing, and that belief inherently weakens security as well. Part of this is the nature of DRM as a form of weak security: its intended purpose is to stop you from doing something you might want to do. But that only serves to open up vulnerabilities (sometimes lots of them), by forcing your computer to (1) do something in secret (otherwise it wouldn't be able to stop you) and (2) try to stop a computer from doing basic computing. That combination makes it quite dangerous -- as we've seen a few times in the past.
DRM serves a business purpose for the companies that insist on it, but it does nothing valuable for end users and, worse, it makes their computers less safe.
But in a series of cases this week about law enforcement searches of cell phones, we caught a glimpse of the Supreme Court’s real technology problem. Here's what it comes down to: it's not essential that the Court know the specifics of how the technology itself works; as Timothy Lee argues, that kind of knowledge might even tempt the justices to make technology-based decisions that don't generalize well. It is essential, however, that the Court understand how people use technology, especially in areas where it's trying to elaborate a standard of what expectations are "reasonable."
So when Chief Justice Roberts suggests that a person carrying two cell phones might reasonably be suspected of dealing drugs, that raises major red flags. Not because of any special facts about how cell phones work, but because (for example) at least half of the lawyers in the Supreme Court Bar brought two cell phones with them to the courthouse that day. Should those attorneys (along with the many, many other people who carry multiple devices) reasonably expect less privacy because the Chief Justice is out of touch with that fact?
Contrast that with Justice Kagan’s point about storage location in the same argument. Justice Kagan suggested, correctly, that people don’t always know what is stored on their device and what is stored “in the cloud.” The actual answer to that question should be immaterial; the point is that it’s absurd for a person’s privacy interest to hinge on which hard drive private data is stored on. Instead, the important fact here, which Justice Kagan recognizes, is that the distinction between local and cloud storage just doesn’t matter to many people, and so it can’t be the basis of a reasonable-expectation-of-privacy test.
If you’re feeling less generous, you might take Justice Kagan’s point as evidence that she herself doesn’t know where her files are stored. And in fact, that’s probably true—but it’s not important. You don’t actually need to know much about file systems and remote storage to know that it’s a bad idea for the law to treat them differently.
That’s not to say that technical implementation details are never relevant. Relevant details, though, should (and almost always do) get addressed in the briefs, long before the oral argument takes place. They don’t usually read like software manuals, either: they’re often rich with analogies to help explain not just how the tech works, but what body of law should apply.
What can’t really be explained in a brief, though, is a community’s relationship with a technology. You can get at parts of it, citing authorities like surveys and expert witnesses, but a real feeling for what people expect from their software and devices is something that has to be observed. If the nine justices on the Supreme Court can’t bring that knowledge to the arguments, the public suffers greatly. Again, Justice Kagan seems to recognize this fact when she says of cell phones:
They're computers. They have as much computing capacity as laptops did five years ago. And everybody under a certain age, let’s say under 40, has everything on them.
Justice Kagan is not under 40, and might not have everything stored on a phone (or on an online service accessible through her phone). But that quote shows me that she at least knows where other people’s expectations are different. Chief Justice Roberts’s questions show me exactly the opposite.
The justices live an unusual and sheltered life: they have no concerns about job security, and spend much of their time grappling with abstract questions that have profound effects on this country’s law. But if they fail to recognize where their assumptions about society and technology break from the norm—or indeed, where they are making assumptions in the first place—we’re all in trouble.
I don't think I've ever had so many different people recommend I watch the same thing as Cory Doctorow's brilliant talk at the Chaos Communication Congress in Berlin last week. You can watch the 55-minute presentation below... or, if you're a speed reader, you can check out the fantastic transcript put together by Joshua Wise, which I'll be quoting from:
The crux of his argument is pretty straightforward. All of these attempts to "crack down" on copyright infringement online -- with things like DRM, rootkits, three-strikes laws, SOPA and more -- are really just forms of attack on general-purpose computing. That's because computers that can run any program screw up the kind of gatekeeper control some industries are used to, and create a litany of problems for those industries:
By 1996, it became clear to everyone in the halls of power that there was something important about to happen. We were about to have an information economy, whatever the hell that was. They assumed it meant an economy where we bought and sold information. Now, information technology makes things efficient, so imagine the markets that an information economy would have. You could buy a book for a day, you could sell the right to watch the movie for one Euro, and then you could rent out the pause button at one penny per second. You could sell movies for one price in one country, and another price in another, and so on, and so on; the fantasies of those days were a little like a boring science fiction adaptation of the Old Testament book of Numbers, a kind of tedious enumeration of every permutation of things people do with information and the ways we could charge them for it.
[[355.5]] But none of this would be possible unless we could control how people use their computers and the files we transfer to them. After all, it was well and good to talk about selling someone the 24 hour right to a video, or the right to move music onto an iPod, but not the right to move music from the iPod onto another device, but how the Hell could you do that once you'd given them the file? In order to do that, to make this work, you needed to figure out how to stop computers from running certain programs and inspecting certain files and processes. For example, you could encrypt the file, and then require the user to run a program that only unlocked the file under certain circumstances.
[[395.8]] But as they say on the Internet, "now you have two problems". You also, now, have to stop the user from saving the file while it's in the clear, and you have to stop the user from figuring out where the unlocking program stores its keys, because if the user finds the keys, she'll just decrypt the file and throw away that stupid player app.
[[416.6]] And now you have three problems [audience laughs], because now you have to stop the users who figure out how to render the file in the clear from sharing it with other users, and now you've got four! problems, because now you have to stop the users who figure out how to extract secrets from unlocking programs from telling other users how to do it too, and now you've got five! problems, because now you have to stop users who figure out how to extract secrets from unlocking programs from telling other users what the secrets were!
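To make the mechanics he's describing concrete, here's a minimal Python sketch of that "encrypt the file and ship an unlocking program" scheme. It's purely illustrative -- a toy XOR "cipher" stands in for real cryptography, and every name in it is made up -- but it shows the structural problem Doctorow is pointing at: for the player to work at all, the key has to be on the user's machine, so a user who finds it can decrypt the file and ignore the player's policy entirely.

```python
# Toy model of client-side DRM. Illustrative only: real schemes use real
# ciphers and heavy obfuscation, but the key must still live on the device.

SECRET_KEY = 0x5A  # shipped inside the player binary -- hidden, not absent

def encrypt(plaintext: bytes) -> bytes:
    """'Protect' content with a toy XOR cipher (stand-in for real crypto)."""
    return bytes(b ^ SECRET_KEY for b in plaintext)

def locked_player(ciphertext: bytes, rights_ok: bool) -> bytes:
    """The approved player: decrypts only when the vendor's policy allows."""
    if not rights_ok:
        raise PermissionError("Your 24-hour rental has expired.")
    return bytes(b ^ SECRET_KEY for b in ciphertext)

ciphertext = encrypt(b"the movie")

# The policy check only binds users who stay inside the player. A user who
# digs the key out of the player can "throw away that stupid player app":
recovered = bytes(b ^ SECRET_KEY for b in ciphertext)
assert recovered == b"the movie"
```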
From there, he goes on to offer a fantastic explanation of how a confusion over analogies -- rather than (perhaps) outright cluelessness or evilness -- explains why bad copyright laws keep getting passed:
It's not that regulators don't understand information technology, because it should be possible to be a non-expert and still make a good law! M.P.s and Congressmen and so on are elected to represent districts and people, not disciplines and issues. We don't have a Member of Parliament for biochemistry, and we don't have a Senator from the great state of urban planning, and we don't have an M.E.P. from child welfare. (But perhaps we should.) And yet those people who are experts in policy and politics, not technical disciplines, nevertheless, often do manage to pass good rules that make sense, and that's because government relies on heuristics -- rules of thumb about how to balance expert input from different sides of an issue.
[[686.3]] But information technology confounds these heuristics -- it kicks the crap out of them -- in one important way, and this is it. One important test of whether or not a regulation is fit for a purpose is first, of course, whether it will work, but second of all, whether or not in the course of doing its work, it will have lots of effects on everything else. If I wanted Congress to write, or Parliament to write, or the E.U. to regulate a wheel, it's unlikely I'd succeed. If I turned up and said "well, everyone knows that wheels are good and right, but have you noticed that every single bank robber has four wheels on his car when he drives away from the bank robbery? Can't we do something about this?", the answer would of course be "no". Because we don't know how to make a wheel that is still generally useful for legitimate wheel applications but useless to bad guys. And we can all see that the general benefits of wheels are so profound that we'd be foolish to risk them in a foolish errand to stop bank robberies by changing wheels. Even if there were an /epidemic/ of bank robberies, even if society were on the verge of collapse thanks to bank robberies, no-one would think that wheels were the right place to start solving our problems.
[[762.0]] But. If I were to show up in that same body to say that I had absolute proof that hands-free phones were making cars dangerous, and I said, "I would like you to pass a law that says it's illegal to put a hands-free phone in a car", the regulator might say "Yeah, I'd take your point, we'd do that". And we might disagree about whether or not this is a good idea, or whether or not my evidence made sense, but very few of us would say "well, once you take the hands-free phones out of the car, they stop being cars". We understand that we can keep cars cars even if we remove features from them. Cars are special purpose, at least in comparison to wheels, and all that the addition of a hands-free phone does is add one more feature to an already-specialized technology. In fact, there's that heuristic that we can apply here -- special-purpose technologies are complex. And you can remove features from them without doing fundamental disfiguring violence to their underlying utility.
[[816.5]] This rule of thumb serves regulators well, by and large, but it is rendered null and void by the general-purpose computer and the general-purpose network -- the PC and the Internet. Because if you think of computer software as a feature, that is a computer with spreadsheets running on it has a spreadsheet feature, and one that's running World of Warcraft has an MMORPG feature, then this heuristic leads you to think that you could reasonably say, "make me a computer that doesn't run spreadsheets", and that it would be no more of an attack on computing than "make me a car without a hands-free phone" is an attack on cars. And if you think of protocols and sites as features of the network, then saying "fix the Internet so that it doesn't run BitTorrent", or "fix the Internet so that thepiratebay.org no longer resolves", then it sounds a lot like "change the sound of busy signals", or "take that pizzeria on the corner off the phone network", and not like an attack on the fundamental principles of internetworking.
The end result, then, is that any attempt to pass these kinds of laws really results not in building a task-specific computing system or application, but in deliberately crippling a general-purpose machine -- and that's kind of crazy for all sorts of reasons. Basically, it means having to put spyware everywhere:
[[1090.5]] Because we don't know how to build the general purpose computer that is capable of running any program we can compile except for some program that we don't like, or that we prohibit by law, or that loses us money. The closest approximation that we have to this is a computer with spyware -- a computer on which remote parties set policies without the computer user's knowledge, over the objection of the computer's owner. And so it is that digital rights management always converges on malware.
[[1118.9]] There was, of course, this famous incident, a kind of gift to people who have this hypothesis, in which Sony loaded covert rootkit installers on 6 million audio CDs, which secretly executed programs that watched for attempts to read the sound files on CDs, and terminated them, and which also hid the rootkit's existence by causing the kernel to lie about which processes were running, and which files were present on the drive. But it's not the only example; just recently, Nintendo shipped the 3DS, which opportunistically updates its firmware, and does an integrity check to make sure that you haven't altered the old firmware in any way, and if it detects signs of tampering, it bricks itself.
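For a concrete sense of what a tamper check like the 3DS's involves, here's a minimal Python sketch. Everything in it is an assumption for illustration (a made-up approved-firmware digest, a stand-in brick() routine); it's not Nintendo's actual implementation, just the general shape: compare the installed firmware against a vendor-approved fingerprint and refuse to function at all on any mismatch.

```python
import hashlib

# Hypothetical vendor-published digest of the only "approved" firmware image.
APPROVED_SHA256 = hashlib.sha256(b"official firmware v1.0").hexdigest()

def brick(reason: str) -> None:
    """Stand-in for rendering the device permanently unusable."""
    raise SystemExit(f"Device bricked: {reason}")

def boot(installed_firmware: bytes) -> None:
    """Boot only if the firmware is bit-for-bit identical to the approved image.

    Note who this check serves: it enforces the vendor's policy against
    the device's own owner.
    """
    if hashlib.sha256(installed_firmware).hexdigest() != APPROVED_SHA256:
        brick("firmware hash mismatch -- signs of tampering detected")
    print("Booting approved firmware...")

boot(b"official firmware v1.0")    # boots normally
# boot(b"user-modified firmware")  # would brick the device
```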
[[1158.8]] Human rights activists have raised alarms over U-EFI, the new PC bootloader, which restricts your computer so it runs signed operating systems, noting that repressive governments will likely withhold signatures from OSes unless they have covert surveillance operations.
[[1175.5]] And on the network side, attempts to make a network that can't be used for copyright infringement always converges with the surveillance measures that we know from repressive governments. So, SOPA, the U.S. Stop Online Piracy Act, bans tools like DNSSec because they can be used to defeat DNS blocking measures. And it blocks tools like Tor, because they can be used to circumvent IP blocking measures. In fact, the proponents of SOPA, the Motion Picture Association of America, circulated a memo, citing research that SOPA would probably work, because it uses the same measures as are used in Syria, China, and Uzbekistan, and they argued that these measures are effective in those countries, and so they would work in America, too!
[audience laughs and applauds] Don't applaud me, applaud the MPAA!
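The DNSSec point is worth unpacking, because it shows why blocking and authentication are fundamentally at odds: DNSSEC signs DNS records so that clients can detect forged answers, while DNS blocking works precisely by forging or withholding answers. Here's a toy Python model of that conflict. It is not real DNSSEC -- which validates a chain of signed records from the root down -- and the keys and records are invented; a single Ed25519 signature (via the `cryptography` package) stands in for the whole signing chain.

```python
# Toy model: a validating client detects a censor's rewritten DNS answer.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

zone_key = Ed25519PrivateKey.generate()  # held by the zone operator
zone_pub = zone_key.public_key()         # known to validating clients

def signed_record(name: str, address: str) -> tuple[bytes, bytes]:
    """The zone operator publishes each answer alongside a signature."""
    record = f"{name} A {address}".encode()
    return record, zone_key.sign(record)

def validate(record: bytes, signature: bytes) -> bytes:
    """A validating client accepts only answers the zone actually signed."""
    zone_pub.verify(signature, record)  # raises InvalidSignature if forged
    return record

record, sig = signed_record("thepiratebay.org", "203.0.113.7")
print(validate(record, sig).decode())  # genuine answer validates

# A blocking resolver that rewrites the answer can't forge the signature,
# so the client sees tampering instead of silently getting the wrong site:
try:
    validate(b"thepiratebay.org A 0.0.0.0", sig)
except InvalidSignature:
    print("forged answer rejected -- blocking detected, not obeyed")
```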
But his point is much bigger than copyright. The copyright fight is merely the canary in the coal mine for this kind of attack on general-purpose computing, which is coming to all sorts of other arenas as well. And those fights may be much bigger and more difficult than the copyright fight:
And it doesn't take a science fiction writer to understand why regulators might be nervous about the user-modifiable firmware on self-driving cars, or limiting interoperability for aviation controllers, or the kind of thing you could do with bio-scale assemblers and sequencers. Imagine what will happen the day that Monsanto determines that it's really... really... important to make sure that computers can't execute programs that cause specialized peripherals to output organisms that eat their lunch... literally. Regardless of whether you think these are real problems or merely hysterical fears, they are nevertheless the province of lobbies and interest groups that are far more influential than Hollywood and big content are on their best days, and every one of them will arrive at the same place -- "can't you just make us a general purpose computer that runs all the programs, except the ones that scare and anger us? Can't you just make us an Internet that transmits any message over any protocol between any two points, unless it upsets us?"
[[1576.3]] And personally, I can see that there will be programs that run on general purpose computers and peripherals that will even freak me out. So I can believe that people who advocate for limiting general purpose computers will find receptive audience for their positions. But just as we saw with the copyright wars, banning certain instructions, or protocols, or messages, will be wholly ineffective as a means of prevention and remedy; and as we saw in the copyright wars, all attempts at controlling PCs will converge on rootkits; all attempts at controlling the Internet will converge on surveillance and censorship, which is why all this stuff matters. Because we've spent the last 10+ years as a body sending our best players out to fight what we thought was the final boss at the end of the game, but it turns out it's just been the mini-boss at the end of the level, and the stakes are only going to get higher.
And this is an important fight. It's why each of the moves to fight back against attempts to censor and break computing systems is so important. Because the next round of fights is going to be bigger and more difficult. And while they'll simply never succeed in actually killing off the idea of the general-purpose computer (you don't put that kind of revelation back in Pandora's box), the amount of collateral damage that can (and almost certainly will) be caused in the interim is significant and worrisome.
His point (and presentation) are fantastic, and kind of a flip side to something that I've discussed in the past. When people ask me why I talk about the music industry so much, I often note that it's the leading indicator for the type of disruption that's going to hit every single industry, even many that believe they're totally immune to this. My hope was that we could extract the good lessons from what's happening in the music industry -- the fact that the industry has grown tremendously, that a massive amount of new content is being produced, and that amazing new business models mean that many more people can make money from music today than ever before -- and look to apply some of those lessons to other industries before they freak out.
But Cory's speech, while perhaps the pessimistic flip side of that coin, highlights the key attack vector where all of these fights against disruption will be fought: they'll be attacks on the idea of general-purpose computing. And, if we're hoping to ward off the worst of the worst, we can't just talk about the facts and data and success stories; we also need to be prepared to explain and educate about the nature of a general-purpose computer, and the massive (and dangerous) unintended consequences of seeking to hold back general computing power to stop "apps we don't like."