Software Legend Ray Ozzie Thinks He Can Safely Backdoor Encryption; He's Very Wrong
from the wrong,-and-dangerous dept
There have been ongoing debates for a while now about the stupidity of backdooring encryption, with plenty of experts explaining why there's no feasible way to do it without causing all sorts of serious consequences (some more unintended than others). Without getting too deep into the weeds, the basic issue is that cryptography is freaking difficult and if something goes wrong, you're in a lot of trouble very fast. And it's very, very easy for something to go wrong. Adding in a backdoor to encryption is, effectively, making something go wrong... on purpose. In doing so, however, you're introducing a whole host of other opportunities for many, many things to go wrong, blowing up the whole scheme and putting everyone's information at risk. So, if you're going to show up with a "plan" to backdoor encryption, you better have a pretty convincing argument for how you avoid that issue (because the reality is you can't).
For at least a year (probably more), the one name that has kept coming up over and over as one of the few techies insisting that the common wisdom on backdooring encryption is wrong... is Ray Ozzie. Everyone notes that he's Microsoft's former Chief Software Architect and CTO, but some of us remember him from way before that, when he created Lotus Notes and Groove Networks (which was supposed to be the nirvana of collaboration software). In recent months his name has popped up here and there, often raised by FBI/DOJ folks seeking to backdoor encryption, as someone who might have a way forward.
And, recently, Wired did a big story on his backdoor idea, in which he plays right into the FBI's "nerd harder" trope by saying exactly what the FBI wants to hear -- and what nearly every actual security expert says is wrong:
Ozzie, trim and vigorous at 62, acknowledged off the bat that he was dealing with a polarizing issue. The cryptographic and civil liberties community argued that solving the problem was virtually impossible, which “kind of bothers me,” he said. “In engineering if you think hard enough, you can come up with a solution.” He believed he had one.
This, of course, is the same sort of thing that James Comey, Christopher Wray and Rod Rosenstein have all suggested in the past few years: "you techies are smart, if you just nerd harder, you'll solve the problem." Ozzie, tragically, is giving them ammo. But he's not delivering the actual goods.
The Wired story details his plan, which is not particularly unique. It takes concepts that others have proposed (and which have been shown to be not particularly secure) and puts a fresh coat of paint on them. Basically, the vendor of a device has a private key that it needs to keep secret, and under some "very special circumstances" it can send an employee into the dark chamber to do the requisite dance, retrieve the code, and give it to law enforcement. That's been suggested many times, and it's been explained many times why it opens up all sorts of dangerous scenarios that could put everyone at risk. The one piece that does seem different is that Ozzie wants a limit on the possible damage his system does if it goes wrong (in one particular way): under his system, if the backdoor is used, it works on only one phone, and it then disables that phone forever:
Ozzie designed other features meant to reassure skeptics. Clear works on only one device at a time: Obtaining one phone’s PIN would not give the authorities the means to crack anyone else’s phone. Also, when a phone is unlocked with Clear, a special chip inside the phone blows itself up, freezing the contents of the phone thereafter. This prevents any tampering with the contents of the phone. Clear can’t be used for ongoing surveillance, Ozzie told the Columbia group, because once it is employed, the phone would no longer be able to be used.
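To make the shape of the proposal concrete, here's a minimal sketch of the key-wrapping escrow idea at the heart of Clear, written in Python with the `cryptography` library. To be clear, this is our illustration of the general approach as Wired describes it, not Ozzie's actual design; all the names are made up, and it leaves out the self-destructing chip entirely:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The OAEP padding used for both wrapping and unwrapping below.
OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# The vendor generates one long-lived master keypair. The private half is
# the "keep it secret forever" part that lives in the vault.
vendor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_public = vendor_private.public_key()

# On the device: the unlock PIN is encrypted ("wrapped") under the vendor's
# public key and stored on the phone alongside the encrypted user data.
pin = b"482916"
wrapped_pin = vendor_public.encrypt(pin, OAEP)

# Later: law enforcement seizes the phone, extracts wrapped_pin, and hands
# it to the vendor. An employee walks it into the vault and unwraps it.
recovered_pin = vendor_private.decrypt(wrapped_pin, OAEP)
assert recovered_pin == pin

# Note what this structure implies: the single vendor_private key can unwrap
# *every* phone's PIN. That concentration of risk is what the critics below
# keep hammering on.
```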
So, let's be clear. That self-destruct piece isn't what's useful in "reassuring skeptics." It's the only thing that really appears to be unique about Ozzie's plan. And it hasn't done much to reassure skeptics. As the report notes, when Ozzie laid this out at a special meeting of super smart folks in the field, it didn't take long for one of them to spot a hole:
The most dramatic comment came from computer science professor and cryptographer Eran Tromer. With the flair of Hercule Poirot revealing the murderer, he announced that he’d discovered a weakness. He spun a wild scenario involving a stolen phone, a second hacked phone, and a bank robbery. Ozzie conceded that Tromer found a flaw, but not one that couldn’t be fixed.
"Not one that couldn't be fixed." But it took this guy just hearing about the system to find the flaw. There are more flaws. And they're going to be catastrophic. Because that's how cryptogrpahy works. Columbia computer science professor and all around computer security genius Steve Bellovin (who was also at that meeting) highlights how Tromer's flaw-spotting shows why Ozzie's plan is a fantasy with dangerous consequences:
Ozzie presented his proposal at a meeting at Columbia—I was there—to a diverse group. Levy wrote that Ozzie felt that he had "taken another baby step in what is now a two-years-and-counting quest" and that "he'd started to change the debate about how best to balance privacy and law enforcement access". I don't agree. In fact, I think that one can draw the opposite conclusion.
At the meeting, Eran Tromer found a flaw in Ozzie's scheme: under certain circumstances, an attacker can get an arbitrary phone unlocked. That in itself is interesting, but to me the important thing is that a flaw was found. Ozzie has been presenting his scheme for quite some time. I first heard it last May, at a meeting with several brand-name cryptographers in the audience. No one spotted the flaw. At the January meeting, though, Eran squinted at it and looked at it sideways—and in real-time he found a problem that everyone else had missed. Are there other problems lurking? I wouldn't be even slightly surprised. As I keep saying, cryptographic protocols are hard.
Bellovin also points out -- as others have before -- that there's a wider problem here: how other countries will use whatever stupid example the US sets for much more nefarious purposes:
If the United States adopts this scheme, other countries, including specifically Russia and China, are sure to follow. Would they consent to a scheme that relied on the cooperation of an American company, and with keys stored in the U.S.? Almost certainly not. Now: would the U.S. be content with phones unlockable only with the consent and cooperation of Russian or Chinese companies? I can't see that, either. Maybe there's a solution, maybe not—but the proposal is silent on the issue.
And we're just getting started on the list of experts weighing in on just how wrong Ozzie is. Errata Security's Rob Graham pulls no punches in pointing out that:
He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.
Specifically, Ozzie's plan relies on the idea that companies can keep their master private key safe. To support the claim that this is possible, Ozzie (as the FBI has in the past) points to the fact that companies like Apple already keep their signing keys secret. And that's true. But that incorrectly assumes that signing keys and decryption keys are the same thing and can be treated similarly. They're not, and they cannot be. The security protocols around signing keys are intense, but part of that intensity is built around the idea that you almost never have to use a signing key.
A decryption key is a different story altogether, especially with the FBI blathering on about thousands of phones it wants to dig its digital hands into. And, as Graham notes, you quickly run into a scaling issue, and with that scale, you ruin any chance of keeping that key secure.
Yes, Apple has a vault where they've successfully protected important keys. No, it doesn't mean this vault scales. The more people and the more often you have to touch the vault, the less secure it becomes. We are talking thousands of requests per day from 100,000 different law enforcement agencies around the world. We are unlikely to protect this against incompetence and mistakes. We are definitely unable to secure this against deliberate attack.
And, even worse, if that happened, we wouldn't even know:
If Ozzie's master key were stolen, nothing would happen. Nobody would know, and evildoers would be able to freely decrypt phones. Ozzie claims his scheme can work because SSL works -- but then his scheme includes none of the many protections necessary to make SSL work.
What I'm trying to show here is that in a lab, it all looks nice and pretty, but when attacked at scale, things break down -- quickly. We have so much experience with failure at scale that we can judge Ozzie's scheme as woefully incomplete. It's not even up to the standard of SSL, and we have a long list of SSL problems.
And so Ozzie's scheme relies on an impossibility. That you could protect a decryption key that has to be used frequently, the same way that a signing key is currently protected. And that doesn't work. And when it fails, everyone is seriously fucked.
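To put some rough numbers on why frequency of use matters so much, here's a back-of-envelope sketch in Python. The per-access compromise probability below is a number we invented purely for illustration -- the point is the shape of the curve, not any specific figure:

```python
# Assume some tiny probability that any single vault access goes wrong
# (a bribed employee, a process slip, a compromised channel). This number
# is made up for illustration; plug in your own.
p_per_access = 1e-6           # one-in-a-million slip per vault access
accesses_per_day = 2_000      # Graham's "thousands of requests per day"

for days in (1, 30, 365, 5 * 365):
    n = accesses_per_day * days
    p_compromise = 1 - (1 - p_per_access) ** n
    print(f"{days:>5} days ({n:>10,} accesses): "
          f"P(at least one compromise) ~ {p_compromise:.1%}")
```

With those (made-up) inputs, the odds of at least one compromise pass fifty percent within a year. A signing key touched a handful of times a year never faces that curve; a decryption key serving law enforcement requests at scale does.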
Graham's article also notes that Ozzie is -- in true nerd harder fashion -- focusing on this as a technological problem, ignoring all the human reasons why such a system will fail and why such a key won't be protected.
It focuses on the mathematical model but ignores the human element. We already know how to solve the mathematical problem in a hundred different ways. The part we don't know how to secure is the human element.
How do we know the law enforcement person is who they say they are? How do we know the "trusted Apple employee" can't be bribed? How can the law enforcement agent communicate securely with the Apple employee?
You think these things are theoretical, but they aren't.
Cryptography expert (and Johns Hopkins professor) Matt Green did a fairly thorough tweetstorm debunking of Ozzie's plan as well. He also points out, as Graham does, the disaster scenario of what happens when (not if) the key gets out. But an even bigger point Green makes is that Ozzie's plan relies on a special chip in every device... and assumes that we'll design that chip to work perfectly and never get broken. And that's ridiculous:
3. Let’s be more clear about this. All Apple phones have a similar chip inside of them. This chip is designed to prevent people from brute-forcing the passcode by limiting the number of attempts you can make.
At present, every one of these chips appears to be completely broken.
— Matthew Green (@matthew_d_green) April 25, 2018
4. Specifically, there is some (as yet unknown) exploit that can completely bypass the internal protections provided by Apple’s Secure Enclave Processor. So effectively “the chip” Ozzie relies on is now broken. https://t.co/wqoyzfaC2G
— Matthew Green (@matthew_d_green) April 25, 2018
5. When you’re proposing a system that will affect the security of a billion Apple devices, and your proposal says “assume a lock nobody can break”, you’d better have some plan for building such a lock.
— Matthew Green (@matthew_d_green) April 25, 2018
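For context on what that chip actually does: its whole job is to hold the key material and enforce a guess limit in hardware. Here's a toy Python model of that logic -- purely illustrative, since real secure elements implement this in tamper-resistant silicon, and that hardware enforcement is exactly the part the exploits Green mentions bypass:

```python
class ToySecureElement:
    """A toy model of a guess-limiting chip. Real enclaves enforce this in
    tamper-resistant hardware; an exploit that bypasses the enforcement
    makes the counter (and any self-destruct built on it) meaningless."""

    MAX_ATTEMPTS = 10

    def __init__(self, pin: str):
        self._pin = pin
        self._failures = 0
        self._wiped = False   # once True, the key material is gone for good

    def try_unlock(self, guess: str) -> bool:
        if self._wiped:
            raise RuntimeError("key material erased; nothing left to guess")
        if guess == self._pin:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self.MAX_ATTEMPTS:
            self._wiped = True   # brute-force protection: wipe after N misses
        return False
```

Ozzie's one-phone-only self-destruct is the same kind of bet, made even bigger: it works only if nobody can ever read the key out directly or reset the counter. An attacker who can do either has "broken the chip," along with every guarantee stacked on top of it.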
Green and Graham also both point to the example of GrayKey, the recently reported-on tool that law enforcement has been using to crack into supposedly encrypted iPhones. Already, someone has hacked into the company behind GrayKey and leaked some of the code.
Put it all together and:
8. So let’s recap. We are going to insert a backdoor into billions of devices. It’s security relies on a chip that is now broken. AND the people who broke that chip MAY HAVE LEAKED THEIR CODE TO EXTORTIONISTS ON THE INTERNET.
— Matthew Green (@matthew_d_green) April 25, 2018
Suddenly the fawning over Ozzie's plan doesn't look so good any more, does it? And, again, these are the problems that everyone who has dug into why backdoors are a bad idea has pointed out before:
11. Assumes a security technology with yet-to-be-achieved resilience to attacks (insider and outsider) ✅
This technology is broken ✅
The break is comically accessible even by random criminals, not sophisticated nation state attackers ✅
— Matthew Green (@matthew_d_green) April 25, 2018
Green expanded some of his tweets into a blog post as well, which is also worth reading. In it, he also points out that even if we acknowledge the difference between signing keys and decryption keys, companies aren't even that good at keeping signing keys safe (and those are almost certainly going to be better protected than decryption keys, since they need to be accessed much less frequently):
Moreover, signing keys leak all the time. The phenomenon is so common that journalists have given it a name: it’s called “Stuxnet-style code signing”. The name derives from the fact that the Stuxnet malware — the nation-state malware used to sabotage Iran’s nuclear program — was authenticated with valid code signing keys, many of which were (presumably) stolen from various software vendors. This practice hasn’t remained with nation states, unfortunately, and has now become common in retail malware.
And he also digs deeper into the point he made in his tweetstorm about how on the processor side, not even Apple has been able to keep its secure chip from being broken -- yet Ozzie's plan is based almost entirely on the idea that such an unbreakable chip would be available:
The richest and most sophisticated phone manufacturer in the entire world tried to build a processor that achieved goals similar to those Ozzie requires. And as of April 2018, after five years of trying, they have been unable to achieve this goal — a goal that is critical to the security of the Ozzie proposal as I understand it.
Now obviously the lack of a secure processor today doesn’t mean such a processor will never exist. However, let me propose a general rule: if your proposal fundamentally relies on a secure lock that nobody can ever break, then it’s on you to show me how to build that lock.
Update: We should add that the criticisms raised here are not new either. Back in February we wrote about a whitepaper by Riana Pfefferkorn making basically all of these same points that the folks quoted above are making. In other words, it's a bit bizarre that Wired wrote this article as if Ozzie is doing something new and noteworthy.
So that's a bunch of experts highlighting why Ozzie's plan is silly. But from the policy side it's awful too. Having Ozzie go around spouting this debunked nonsense, with his pedigree behind it, simply gives the "going dark" and "responsible encryption" pundits something to grasp onto to claim they were right all along, even though they weren't. They've said for years that the techies just need to nerd harder, and they will canonize Ray Ozzie as proof that they were right... even though they're not, and his plan doesn't solve any of the really hard problems.
And, as we noted much earlier in this post, cryptography is one of those areas where the hard problems really fucking matter. And if Ozzie's plan doesn't even touch on most of the big ones, it's no plan at all. It's a Potemkin Village that law enforcement types will parade around for the next couple of years, insisting that backdoors can be made safely, even though Ozzie's plan is not safe at all. I am sure that Ray Ozzie means well -- and I've got tremendous respect for him, and have for years. But what he's doing here is actively harmful -- even if his plan is never implemented. Giving the James Comeys and Chris Wrays of the world some facade they can cling to as they insist this can be done is only going to create many more problems.
Filed Under: encryption, encryption is hard, going dark, key escrow, matthew green, nerd harder, ray ozzie, responsible encryption, rob graham, security, steven bellovin