In the post-Snowden era, we don't have to tell you how important it is to stay engaged with (and vigilant about) the surveillance state in America. Jennifer Granick is the Director of Civil Liberties at the Stanford Center for Internet and Society and the author of the new book American Spies, and this week she joins us for an in-depth discussion about the surveillance state today. Of course, shortly after we recorded this podcast, the NSA made major changes to one of its surveillance programs, so Jennifer returned to record an addendum examining this latest news. Make sure you listen to the end!
This has the makings of a movement along the lines of the highly unofficial "Magistrates' Revolt." Efforts to push federal courts out of their default secrecy mode are being made more and more frequently. The government prefers to do a lot of its work under the cover of judicial darkness, asking for dockets and documents to be sealed in a large percentage of its criminal cases.
Just in the last month, we've seen the ACLU petition the court to unseal dockets related to the FBI's takedown of Freedom Hosting using a Tor exploit and Judge Beryl Howell grant FOIA enthusiast Jason Leopold's request to have a large number of 2012 pen register cases unsealed.
Now, we have researchers Jennifer Granick and Riana Pfefferkorn petitioning [PDF] the Northern District of California court to unseal documents related to "technical assistance" cases -- like the one involving the DOJ's attempted use of an All Writs Act order to force Apple to crack open a phone for it.
Petitioners Jennifer Granick and Riana Pfefferkorn, researchers at the Stanford Center for Internet and Society proceeding pro se, file this Petition to unseal court records. We file this Petition so that the public may better understand how government agents are using legal authorities to compel companies to assist them in decrypting or otherwise accessing private data subject to surveillance orders. Petitioners hereby seek the docketing of surveillance orders issued by this Court; the unsealing of those dockets; and the unsealing of the underlying Court records in surveillance cases relating to technical-assistance orders issued by this Court to communications service providers, smartphone manufacturers, or other third parties…
This district should contain a large number of documents fitting this description, seeing as it's home to a great number of service providers and third-party tech companies.
More specifically, the researchers are looking to gain access to documents in cases where the government has used the following list of statutes to compel cooperation:
the Wiretap Act, 18 U.S.C. §§ 2510-2522;
the Stored Communications Act (or “SCA”), 18 U.S.C. §§ 2701-2712;
Not only that, but Granick and Pfefferkorn are asking the court to shift away from the default secrecy that has made this petition necessary.
[Petitioners request that] the Court revise its practices going forward, such that the Clerk’s office will assign case numbers to, docket, and enter into CM/ECF all applications and orders for search warrants, surveillance, and technical assistance; the Court will undertake a periodic review (e.g., annually or biannually) of sealed dockets, warrants, surveillance orders, and technical-assistance orders; and after such review, the Court will unseal those records for which there is no longer any need for continued sealing.
The researchers point out that secrecy in court records is a First Amendment issue: these documents were meant to be accessible to the general public. They also note that they aren't asking for anything related to ongoing investigations or information that might compromise future investigations -- like the names of law enforcement personnel or other potential criminal investigation targets. All they're asking is that the court stop granting the government permission to seal complete dockets so often, and that it perform periodic reviews of sealed cases to see whether the imposed secrecy is still warranted.
As it stands now, this large number of sealed documents prevents the public from knowing how law enforcement agencies and courts are interpreting (often outdated) tech-related laws. It's preventing researchers like these two from gaining any insight into the government's electronic surveillance efforts, and it's locking defense lawyers out of possibly precedential rulings that may affect current or future clients.
Lots of people, mainly those supporting the DOJ/FBI's view of the Apple fight, have been arguing that this isn't a big deal: the government is just asking for one small thing. Others have tried to examine "what's at stake" in the case, with many of the arguments falling into the typical "privacy vs. security" framing, or focusing on the precedents the case might set for privacy and security. However, Jennifer Granick recently wrote a great piece that does a much better job framing what's truly at stake. It's not privacy vs. security at all, but rather who gets to set the rules for how software works in an era when software controls everything.
We live in a software-defined world. In 2000, Lawrence Lessig wrote that Code is Law — the software and hardware that comprise cyberspace are powerful regulators that can either protect or threaten liberty. A few years ago, Marc Andreessen wrote that software was eating the world, pointing to a trend that is hockey sticking today. Software is redefining everything, even national defense. But, software is written by humans. Increasingly, our reality will obey the rules encoded in software, not those of Newtonian physics. Software defines what we can do and what can be done to us. It protects our privacy and ensures security, or not. Software design can be liberty-friendly or tyranny-friendly.
This battle is over who gets to control software, and thus the basic rules of the world we live in. Who will write the proverbial laws of physics in the digital world? Is it the FBI and DOJ? Is it the US Congress? Is it private industry? Or is it going to be individuals around the world making choices that will empower us to protect ourselves — for better or for worse?
That's a big question -- and it's not necessarily one that should be decided by a magistrate judge, making use of a law from the 18th century.
The big question then becomes: Are people going to be forced to live in a surveillance-friendly world? Or will the public be able to choose products — phones, computers, apps — that keep our private information, conversations, and thoughts secure?
Right now, the FBI wants to decide these questions with reference to a law that was originally passed in 1789. The All Writs Act allows courts to “issue all writs necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law.” Obviously, Congress wasn’t considering iPhone security at the time. The AWA has no internal limits and provides no guidance for courts on how to weigh individual privacy interests with corporate liberty and business interests with public safety interests. It is an utterly inappropriate vehicle for compelling forensic assistance.
And if the DOJ wins this case, it's pretty clear where this goes:
If the All Writs Act can be used in this way — to force a company to develop forensic software that the government wants to deploy in a single case of terrorism — it could be used in any number of other (currently unforeseen) circumstances.
In other words, design mandates will be next. In fact, maybe it’s already happening behind our backs. When the Snowden documents showed that Microsoft had created surveillance backdoors in Skype, Outlook.com, and Hotmail, the company issued a statement. It said:
Finally, when we upgrade or update products, legal obligations may in some circumstances require that we maintain the ability to provide information in response to a law enforcement or national security request. There are aspects of this debate that we wish we were able to discuss more freely. That’s why we’ve argued for additional transparency that would help everyone understand and debate these important issues.
At the Center for Internet and Society, we’ve been trying to figure out what those legal obligations are. I wonder if these AWA arguments are part of it.
To make sound policy in this space, the public needs the full picture of what the government is forcing companies to do. This San Bernardino case is just one salvo in the ongoing war between a surveillance-friendly world and a surveillance-resistant world. The stakes for liberty, security, and privacy — for control over our software-defined world — are high.
This is not about one phone. It's not about one case. It's not just about encryption. It's about how we work as a society and who gets to set the rules. That's kind of a big deal.
We warned earlier this week that Congress was going to make the cybersecurity bill CISA much worse on privacy and then shove it into the "must pass" omnibus spending bill, and that's exactly what happened. The 2000+ page bill was only released early yesterday morning and the vote on it is tomorrow, meaning people have been scrambling to figure out what exactly is in there. The intelligence community has been using that confusion to push the bill, highlighting a couple of predictions that didn't come true in order to argue that CISA's opponents are overstating its problems. That's pretty low, even for the intelligence community.
Stanford's Jennifer Granick has gone through this new zombie CISA (technically renamed "the Cybersecurity Act of 2015," but which she's calling OmniCISA) and discovered that it's a complete disaster on the privacy front: it basically wipes out any ability of the FCC or the FTC to make service providers respect user privacy, and instead is designed to encourage more monitoring of user behavior, weakening their privacy. As she notes, after the FCC's net neutrality rules, there was some concern about a turf war between the FCC and the FTC over who protects consumer privacy rights with regard to internet access providers. To stop people from freaking out over this, the two agencies told everyone to calm down, because they're happy to work together to protect privacy, with the FCC handling privacy issues related to common carriers and the FTC handling everything else.
But, as Granick points out, under CISA, so long as ISPs claim that they're spying on your internet activity for "cybersecurity" purposes (which is defined ridiculously broadly in the bill), then the FCC and FTC are completely blocked from doing anything:
This language means that, regardless of what rules the FCC or FTC have now or will have in the future, private companies including ISPs can monitor their systems and access information that flows over those systems for “cybersecurity purposes.”
[....]
It appears that OmniCISA is trying to stake out a category of ISP monitoring that the FCC and FTC can’t touch, regardless of its privacy impact on Americans.
This section of OmniCISA would not only interfere with future privacy regulations, it limits the few privacy rules we currently have.
The Wiretap Act is a provision of law that conditions the ability of telephone companies and Internet Service Providers to monitor the private messages that flow over their networks. The Wiretap Act says that these wire and electronic communications service providers can “intercept, disclose, or use that communication in the normal course of … employment while engaged in any activity which is a necessary incident to the rendition of his service or to the protection of the rights or property of the provider of that service” (emphasis added). Similarly, ECPA allows providers to access stored information, and then to voluntarily share it for the same reasons. This language allows providers to conduct some monitoring of their systems for security purposes — to keep the system up and running and to protect the provider.
But it appears OmniCISA would waive these provisions of the Wiretap Act and ECPA. Why do that except to expand that ability to monitor for broader “cybersecurity purposes” beyond the legal ability providers already have to intercept communications in order to protect service, rights, or property?
So this bill isn’t just about threat information sharing, it’s about enabling ISP monitoring in ways beyond current law that have not been clearly defined or explained.
And, of course, if you don't think this will be abused both by the internet access providers and the law enforcement/intelligence communities, you haven't been paying attention for the past decade or more.
Last week, I came across two separate speeches given recently about the future of the internet, each with a very different take, but both of which really struck a chord with me. And the two seem to fit together nicely, so I'm combining them into one post. The first is Jennifer Granick's recent keynote at the Black Hat conference in Las Vegas. You can see the video here or read a modified version of the speech, entitled "The End of the Internet Dream."
It goes through a lot of important history, some of which is probably already familiar to many of you. But it's also important to remember how we got to where we are today in order to understand the risks and threats to the future of the internet. The key point Granick makes is that, for too long, we've prioritized other concerns over openness, and have ended up with a less open, more centralized internet. And that's a real risk:
For better or for worse, we’ve prioritized things like security, online civility, user interface, and intellectual property interests above freedom and openness. The Internet is less open and more centralized. It’s more regulated. And increasingly it’s less global, and more divided. These trends: centralization, regulation, and globalization are accelerating. And they will define the future of our communications network, unless something dramatic changes.
Twenty years from now,
You won’t necessarily know anything about the decisions that affect your rights, like whether you get a loan, a job, or if a car runs over you. Things will get decided by data-crunching computer algorithms and no human will really be able to understand why.
The Internet will become a lot more like TV and a lot less like the global conversation we envisioned 20 years ago.
Rather than being overturned, existing power structures will be reinforced and replicated, and this will be particularly true for security.
Internet technology design increasingly facilitates rather than defeats censorship and control.
Later in the speech, she digs deeper into those key trends of centralization, regulation and globalization:
Centralization means a cheap and easy point for control and surveillance.
Regulation means exercise of government power in favor of domestic, national interests and private entities with economic influence over lawmakers.
Globalization means more governments are getting into the Internet regulation mix. They want to both protect and to regulate their citizens. And remember, the next billion Internet users are going to come from countries without a First Amendment, without a Bill of Rights, maybe even without due process or the rule of law. So these limitations won’t necessarily be informed by what we in the U.S. consider basic civil liberties.
This centralization is often done in the name of convenience -- because centralized systems currently offer up plenty of cool things:
Remember blogs? Who here still keeps a blog regularly? I had a blog, but now I post updates on Facebook. A lot of people here at Black Hat host their own email servers, but almost everyone else I know uses gmail. We like the spam filtering and the malware detection. When I had an iPhone, I didn’t jailbreak it. I trusted the security of the vetted apps in the Apple store. When I download apps, I click yes on the permissions. I love it when my phone knows I’m at the store and reminds me to buy milk.
This is happening in no small part because we want lots of cool products “in the cloud.” But the cloud isn’t an amorphous collection of billions of water droplets. The cloud is actually a finite and knowable number of large companies with access to or control over large pieces of the Internet. It’s Level 3 for fiber optic cables, Amazon for servers, Akamai for CDN, Facebook for their ad network, Google for Android and the search engine. It’s more of an oligopoly than a cloud. And, intentionally or otherwise, these products are now choke points for control, surveillance and regulation.
So as things keep going in this direction, what does it mean for privacy, security and freedom of expression? What will be left of the Dream of Internet Freedom?
She goes on to note how this centralization comes with a very real cost: mainly in that it's now one-stop shopping for government surveillance.
Globalization gives the U.S. a way to spy on Americans…by spying on foreigners we talk to. Our government uses the fact that the network is global against us. The NSA conducts massive spying overseas, and Americans’ data gets caught in the net. And, by insisting that foreigners have no Fourth Amendment privacy rights, it’s easy to reach the conclusion that you don’t have such rights either, as least when you’re talking to or even about foreigners.
Surveillance couldn’t get much worse, but in the next 20 years, it actually will. Now we have networked devices, the so-called Internet of Things, that will keep track of our home heating, and how much food we take out of our refrigerator, and our exercise, sleep, heartbeat, and more. These things are taking our off-line physical lives and making them digital and networked, in other words, surveillable.
At the end of her speech, Granick talks about the need to "build in decentralization where possible," to increase strong end-to-end encryption, to push back on government attempts to censor and spy.
And that's where the second speech comes in. It's by the Internet Archive's Brewster Kahle. And while he gave versions of it (one longer, one shorter) earlier this year, he just recently wrote a blog post about why we need to "lock the internet open" by building a much more distributed web, which would counteract many of Granick's quite accurate fears about our growing reliance on centralized systems.
Kahle also notes how wonderful new services are online and how much fun the web is, but worries about the survivability of a centralized system and its privacy implications. He notes how the original vision of the internet was a truly distributed system, and that it's the web (which is a subsegment of the internet, for those of you who think they're the same thing) that seems to be moving away from that vision.
Contrast the current Web to the Internet—the network of pipes on top of which the World Wide Web sits. The Internet was designed so that if any one piece goes out, it will still function. If some of the routers that sort and transmit packets are knocked out, then the system is designed to automatically reroute the packets through the working parts of the system. While it is possible to knock out so much that you create a chokepoint in the Internet fabric, for most circumstances it is designed to survive hardware faults and slowdowns. Therefore, the Internet can be described as a “distributed system” because it routes around problems and automatically rebalances loads.
The Web is not distributed in this way. While different websites are located all over the world, in most cases, any particular website has only one physical location. Therefore, if the hardware in that particular location is down then no one can see that website. In this way, the Web is centralized: if someone controls the hardware of a website or the communication line to a website, then they control all the uses of that website.
In this way, the Internet is a truly distributed system, while the Web is not.
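Kahle's "routes around problems" property can be sketched as a toy search for any surviving path between two endpoints. This is purely illustrative: the topology is made up, and real Internet routing uses protocols like BGP and OSPF rather than this breadth-first search.

```python
# Toy illustration of the "route around failures" property described
# above: traffic between two endpoints finds whatever working path
# exists, even after some routers drop out. Hypothetical topology.
from collections import deque

def find_route(links, src, dst, failed=frozenset()):
    """Breadth-first search for a working path from src to dst,
    skipping any routers in the `failed` set."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbor in links.get(node, []):
            if neighbor not in seen and neighbor not in failed:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no surviving path: enough was knocked out to create a chokepoint

# A small mesh with two independent paths from A to D.
links = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

print(find_route(links, "A", "D"))                     # a direct path via B
print(find_route(links, "A", "D", failed={"B"}))       # reroutes via C
print(find_route(links, "A", "D", failed={"B", "C"}))  # None: chokepoint
```

The same search run against a hub-and-spoke topology (every spoke linked only to one hub) returns None the moment the hub fails, which is the centralization problem Kahle describes for the web.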
And, thus, he wants to build a more distributed web, built on peer-to-peer technology that has better privacy, distributed authentication systems (without centralized usernames and passwords), a built-in versioning/memory system and easy payment mechanisms. As he notes, many of the pieces for this are already in existence, including tools like BitTorrent and the blockchain/Bitcoin. There's a lot more in there as well, and you should read the whole thing.
Our new Web would be reliable because it would be hosted in many places, and multiple versions. Also, people could even make money, so there could be extra incentive to publish in the Distributed Web.
It would be more private because it would be more difficult to monitor who is reading a particular website. Using cryptography for the identity system makes it less related to personal identity, so there is an ability to walk away without being personally targeted.
And it could be as fun as it is malleable and extendable. With no central entities to regulate the evolution of the Distributed Web, the possibilities are much broader.
Fortunately, the needed technologies are now available in JavaScript, Bitcoin, IPFS/Bittorrent, Namecoin, and others. We do not need to wait for Apple, Microsoft or Google to allow us to build this.
What we need to do now is bring together technologists, visionaries, and philanthropists to build such a system that has no central points of control. Building this as a truly open project could in itself be done in a distributed way, allowing many people and many projects to participate toward a shared goal of a Distributed Web.
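One idea underpinning the peer-to-peer tools Kahle mentions (BitTorrent, IPFS) is content addressing: a document's address is the cryptographic hash of its bytes, so a copy fetched from any host can be verified independently, with no central server to trust. Here's a minimal sketch; real systems add chunking, signatures, and distributed hash tables on top of this.

```python
# Minimal sketch of content addressing: the address of a piece of
# content is derived from the content itself, so any peer can serve
# it and any reader can verify it. Illustrative only.
import hashlib

def address_of(content: bytes) -> str:
    """Derive a location-independent address from the content's bytes."""
    return hashlib.sha256(content).hexdigest()

def verify(content: bytes, address: str) -> bool:
    """Check that bytes received from *any* peer match the address."""
    return address_of(content) == address

page = b"<html>my distributed home page</html>"
addr = address_of(page)

# An honest mirror can serve the real bytes; a tampering host cannot,
# because changing the content changes the hash.
assert verify(page, addr)
assert not verify(b"<html>defaced</html>", addr)
```

This is why distributed hosting can be more private and more survivable at once: the address says nothing about where the content lives, and any surviving copy is as authoritative as the original.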
Of course, Kahle is hardly the first to suggest this. Nearly five years ago we were writing about some attempts at a more distributed web, and how we were starting to see elements of it showing up in places the old guard wouldn't expect. Post-Snowden, the idea of a more distributed web got a big boost, with a bunch of other people jumping in as well.
It's not there yet (by any stretch of the imagination), but a lot of people have been working on different pieces of it, and some of them are going to start to catch on. It may take some time, but the power of a more decentralized system is only going to become more and more apparent over time.
Jennifer Granick, a well-known (and brilliant) civil liberties fighter (currently at Stanford), recently co-wrote an article with Chris Sprigman about why the NSA's surveillance efforts were almost certainly both illegal and unconstitutional. Just a few weeks later, she got to have dinner with NSA boss Keith Alexander, which she's now written about. As you might imagine, it appears they didn't agree on very much about the NSA's surveillance efforts. Basically, Alexander more or less argues that the NSA has to do what it does to "protect Americans" and that the agency is filled with good people who don't want to invade Americans' privacy.
I have no doubt that Gen. Alexander loves this country as much as I do, or that his primary motivation is to protect our nation from terrorist attacks. “Never again,” he said over dinner. But it may be that our deep differences stem from a fundamental disagreement about human nature. I think Gen. Alexander believes that history is made by great individuals standing against evil. I believe that brave people can make a difference, but that larger inexorable forces are often more important: history, economics, political and social systems, the environment. So I believe that power corrupts and that good people will do bad things when a system is poorly designed, no matter how well-intentioned they may be. More than once, my dinner companions felt the need to reassure the DIRNSA that none of us thought he was a bad man, but that we thought the surveillance policies and practices were bad, and that eventually, inevitably, those policies and practices would lead to abuse.
She goes on to note that the NSA's (and the administration's) further defense of the efforts have only made her point even stronger (contrary to General Alexander's promise to Granick that the upcoming revelations would show that the NSA's actions were perfectly reasonable). As she notes later in the piece, the history of abuses is well known, even if Alexander likes to ignore it:
Of course, we see mission creep – once you build the mousetrap of surveillance infrastructure, they will come for the data. First it was counterterrorism, then it was drug investigations, then it was IRS audits. Next it will be for copyright infringement.
And of course, there also will be both “inadvertent” and intentional abuse, inevitable but difficult to discover. Bored analysts do things like spy on women using surveillance cameras and listen to American GIs overseas having phone sex with their loved ones back home. Or an FBI agent may investigate strange but not unlawful emails on behalf of a family friend, leading to a sex scandal that brings down the Director of the CIA. These surveillance tools and information databases may one day end up in the hands of a J. Edgar Hoover and a President demanding embarrassing information about her political opponents, information that, in an age of mass surveillance, the government most assuredly will have somewhere in its treasure trove.
There's a reason we make it hard for the government to spy on people. We know that the temptation to abuse such powers will be strong and abuse will inevitably occur. That's the nature of a free society. And it's a problem when people like General Alexander think that the best way to "protect" a free society is to take away the very factors that make it one.