GCHQ Propose A 'Going Dark' Workaround That Creates The Same User Trust Problem Encryption Backdoors Do
from the wiretaps-but-for-Whatsapp dept
Are we "going dark?" The FBI certainly seems to believe so, although its estimation of the size of the problem was based on extremely inflated numbers. Other government agencies haven't expressed nearly as much concern, even as default encryption has spread to cover devices and communications platforms.
There are solutions out there, if it is as much of a problem as certain people believe. (It really isn't… at least not yet.) But most of these solutions ignore workarounds like accessing cloud storage or consensual searches in favor of demanding across-the-board weakening/breaking of encryption.
A few more suggestions have surfaced over at Lawfare. The caveat is that both authors, Ian Levy and Crispin Robinson, work for GCHQ. So that should give you some idea of which stakeholders are being represented in this addition to the encryption debate.
The idea (there's really only one presented here) isn't as horrible as others suggested by law enforcement and intelligence officials. But that doesn't mean it's a good one. And there's simply no way to plunge into this without addressing an assertion made without supporting evidence towards the beginning of this Lawfare piece.
Any functioning democracy will ensure that its law enforcement and intelligence methods are overseen independently, and that the public can be assured that any intrusions into people’s lives are necessary and proportionate.
By that definition, the authors' home country is excluded from the list of "functioning democracies." Multiple rulings have found GCHQ's surveillance efforts in violation of UK law. And a number of leaks over the past half-decade have shown its oversight is mostly ornamental.
The same can be said for the "functioning democracy" on this side of the pond. Leaked documents and court orders have shown the NSA frequently ignores its oversight when not actively hiding information from Congress, the Inspector General, and the FISA court. Oversight of our nation's law enforcement agencies is a patchwork of dysfunction, starting with friendly magistrates who care little about warrant affidavit contents and ending with various police oversight groups that are either filled with cops or cut out of the process by the agencies they nominally oversee. We can't even get a grip on routine misconduct, much less ensure "necessary and proportionate intrusions into people's lives."
According to the two GCHQ reps, there's a simple solution to eavesdropping on encrypted communications. All tech companies have to do is keep targets from knowing their communications are no longer secure.
In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved - they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.
We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on a target’s device, and only on the device of the target and possibly those they communicate with. That’s a very different proposition to discuss and you don’t even have to touch the encryption.
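To make the mechanics concrete, here is a minimal sketch of what that "extra end" amounts to (Python, with entirely hypothetical names; this is not any real platform's code). The provider already controls the membership list, so the only change needed is skipping a notification:

```python
# Hypothetical sketch of the "ghost participant" idea. Nothing here is
# real platform code; Member, GroupChat, and encrypt are stand-ins.

class Member:
    def __init__(self, name, public_key):
        self.name, self.public_key = name, public_key

    def alert(self, message):
        print(f"[{self.name}] notice: {message}")

    def receive(self, ciphertext):
        print(f"[{self.name}] received: {ciphertext}")

def encrypt(plaintext, public_key):
    # Stand-in for real public-key encryption.
    return f"enc[{public_key}]({plaintext})"

class GroupChat:
    def __init__(self, members):
        self.members = list(members)          # provider-controlled list

    def add_member(self, new_member, notify=True):
        self.members.append(new_member)
        if notify:                            # the "ghost" path passes False
            for m in self.members:
                m.alert(f"{new_member.name} joined the chat")

    def send(self, sender, plaintext):
        # Each copy is still end-to-end encrypted; there is simply an
        # extra "end" the participants never agreed to.
        for m in self.members:
            if m is not sender:
                m.receive(encrypt(plaintext, m.public_key))

alice = Member("alice", "pk_alice")
bob = Member("bob", "pk_bob")
chat = GroupChat([alice, bob])
chat.add_member(Member("ghost", "pk_ghost"), notify=False)  # nobody is told
chat.send(alice, "meet at noon")  # the ghost gets its own decryptable copy
```

Note that the encryption itself is never touched; the deception lives entirely in the identity and notification layer, which is exactly where the trust problems discussed below come in.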
Suppressing notifications might be less harmful than key escrow or backdoors. It wouldn't require a restructuring of the underlying platform or its encryption. If everything is in place -- warrants, probable cause, exhaustion of less intrusive methods -- it could give law enforcement a chance to play man-in-the-middle with targeted communications.
But there's a downside -- one that isn't referenced in the Lawfare post. If both ends of a conversation are targeted, this may be workable. But what if one of the participants isn't a target? This leaves them unprotected, because the suppressed warnings wouldn't inform non-target parties that the conversation is no longer secure. Obviously, it wouldn't do to let anyone the target converses with see those warnings, as it's likely one of those participants would let the target know they'd encountered a security alert while talking to them.
In that respect, it is analogous to a wiretap on someone's phone. It will capture innocent conversations irrelevant to the investigation. In those cases, investigators are told to stop eavesdropping. It's unclear how the same practice would work when communications are being harvested digitally via unseen government additions to private conversations.
This proposal seems at odds with the authors' suggested limitations, especially this one:
Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.
When a service provider starts suppressing warning messages, the trust relationship is going to be fundamentally altered. Even if users are made aware this is only happening in rare instances involving targets of investigations, the fact that their platform provider has chosen to mute these messages means they really can't trust a lack of warnings to mean everything is still secure.
On the whole, it's a more restrained solution than others have proposed -- but it still has the built-in exploitation avenue key escrow does. It's better than a backdoor but not by much. And the authors of this proposal shouldn't pretend the solution lives up to the expectations they set for it. Their own proposal falls short of their listed ideals… and the whole thing is delivered under the false pretense law enforcement/intelligence agencies are subject to robust oversight.
Filed Under: backdoors, encryption, gchq, going dark, third parties, vulnerabilities
Reader Comments
'We promise we totally won't abuse THESE tools.'
Any functioning democracy will ensure that its law enforcement and intelligence methods are overseen independently, and that the public can be assured that any intrusions into people’s lives are necessary and proportionate.
I'm not sure if they're trying to open with a joke here, or making it crystal clear that they're talking about a purely hypothetical situation that has absolutely nothing to do with the ones who pay their salaries. Either way, nice of them to start out with a statement making clear that what follows will have only a vague connection to the real world, I suppose.
Any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users.
If they really think that 'your communications can be monitored as soon as someone brings the right paperwork, whether because you are of interest or simply communicating with someone who is, and you'll have no idea it's happening' wouldn't impact the 'trust relationship between a service provider and its users', they're not just naive, they're downright delusional.
Ultimately, however, they face a problem of their own making here, in that with the multitude of abuses and the history of secrecy tainting the agency (and the US equivalent, as noted in the article), the fact that this 'suggestion' may not be as bad as crippling encryption does not mean I want them to be able to use it, because I have zero trust that they won't abuse it, just like they have previous tools.
If, as has been demonstrated to be the case, they can't be trusted with what they already have, why should they be granted more?
Y’know, I bet they would feel a lot differently about their proposal if they were on the receiving end of this workaround.
Maybe you should, you know, do your investigative job?
Re:
By far the worst part of these de facto wiretaps is that they easily set people up for "perjury traps" whenever someone fails to remember everything in a recorded conversation. Wiretaps should only ever be used for solving specific serious crimes, not as a general-purpose way to convict people who have committed no crime other than having an imperfect memory (whether by accident or design).
Re: Re:
A difference exists between forgetting a few details here and there and outright lying about what was (or was not) said. Having a fuzzy memory is not the same thing as committing perjury.
We're only *listening* to *some* conversations
It will not be a conversation between individuals; it will be every conversation with anyone two or three hops out from a set of "parties of interest," for some extended period of time.
No it's not, because it requires that the service provider, not the user, controls the keys. It also implies that the encryption application must not list the keys it is using.
If somebody else is controlling your encryption system, it is totally broken, and not fit for purpose.
Unicorns
GCHQ knows this full well. So it sounds like GCHQ is proposing to set up the whole system, minus the keys, at first. Then they can come back later and say "You know, this just isn't really working out the way we need it to. We just need to make one little, itsy-bitty change to fix it. All we need are the keys".
One step at a time to get what they want.
Re: What all...
You don't speak for me.
Re: Re: Re:
It's easy to see that anyone with any idea of how the legal system really works either refuses to talk or feigns amnesia from start to finish. People who *think* they have nothing (or little) to hide and choose to cooperate with authorities (sometimes unknowingly) can be setting themselves up for some real pain down the road.
Re: Re: What all...
/mōst/
determiner & pronoun
1. superlative of many, much.
2. greatest in amount or degree. "they've had the most success"
adverb
1. superlative of much.
2. to the greatest extent. "the things he most enjoyed"
...which means that it's no longer "end to end" but either a multicast communication or, by definition, compromised with a man-in-the-middle exploit - which is the kind of thing that encryption is designed to stop in the first place!
"That’s a very different proposition to discuss and you don’t even have to touch the encryption."
Except, of course, encryption has increasingly been designed so that the service provider does not hold the keys and cannot provide them mid-communication. For those services, you would either have to completely redesign the encryption or build in a client-side backdoor, which would make them less secure than even an encryption backdoor would.
It's funny. Whenever these people explain how things are easily possible, they usually end up describing things that cannot be implemented.
Democracy
To them the idea of democracy is 'not only are we in charge, but we are benevolent (muhahahaha), to our own purposes, which is power, power, power'. To us, the idea of democracy is 'the government is us' (no matter how it is implemented, direct or representative) and we get to say what is 'necessary and proportionate'. Their cognitive dissonance is outrageous and purposeful.
Re: Re: Re:
It can be.
You won't see me showing any sympathy to Flynn, but there are serious problems with criminalizing lying to the FBI. Ken White has discussed them on multiple occasions; here are a few examples:
Everybody Lies: FBI Edition, Popehat
Trump Investigation shows how easy it is for feds to create crimes, National Review
Donald Trump shouldn't talk to the feds. And neither should you.
Trump’s problem isn’t that he has bad lawyers. It’s that he’s a bad client., Washington Post
And not directly related but along the same lines: Witnesses ‘flipping’ does corrupt justice. But not because they’re ‘rats.’, New York Times
As always, we have to consider that methods the criminal justice system uses to prosecute the guilty can also be used to persecute the innocent. The anon is right that the FBI can trick nervous people into perjuring themselves. That's a serious problem, even though you're right that that's not what happened with Flynn.
Re:
In other words, two backdoors. Good end-to-end encryption will let users do some kind of out-of-band key verification, and will warn when keys change.
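To illustrate the point, here is a rough sketch of both defenses, loosely modeled on the "safety numbers" apps like Signal expose (the code and names are hypothetical, not any app's actual implementation):

```python
import hashlib

# Sketch of out-of-band key verification plus key-change warnings.
# Hypothetical, simplified code -- not any real app's implementation.

def fingerprint(public_key_bytes):
    """Short digest two users can read to each other in person or by
    phone; if the strings match, no one swapped keys in transit."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

known_keys = {}  # trust-on-first-use store: peer name -> last seen key

def check_peer_key(peer, key):
    if peer in known_keys and known_keys[peer] != key:
        # This is precisely the warning the GCHQ proposal would have
        # providers suppress on a target's device.
        print(f"WARNING: {peer}'s key changed! Re-verify out of band:")
        print("  new fingerprint:", fingerprint(key))
    known_keys[peer] = key

check_peer_key("bob", b"bobs-original-key")   # first use: trusted
check_peer_key("bob", b"substituted-key")     # triggers the warning
```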
Re: Unicorns
But the only way that is useful is if you also have the keys to also decrypt the conversation.
No, it actually doesn't require this, as this exploit occurs before the encryption keys are exchanged.
The first step in end-to-end encryption is the key exchange. The problem, of course, is that prior to the key exchange there is no way to tell who is at the other end of the communication, e.g. no way of knowing whose key you are receiving. So the question is, how can we verify the identity of the owner of the key we have received?
In the case of internet services, this step is performed by certificate authorities which (for example) verify that the public key you have received when you go to Facebook.com actually belongs to Facebook . This is, of course, only as trustworthy as the certificate authority in question.
However, those big certificate authorities very rarely hold certificates for individuals (as a far greater likelihood of error for individuals would erode trust in their core business), nor do most service providers allow this anyway. Instead, when someone wishes to speak to you, the service provider establishes the initial connection for key exchange, thus acting as the certificate authority and verifying the identity of each party themselves. And similar to above, the identity of each party is only as trustworthy as the service provider.
Now this proposal simply states that the service provider (who is already providing identity verification on their encrypted service) should lie about this identity verification when requested by law enforcement. You then, based on this lie, exchange public keys with the police. The police still cannot decrypt messages sent with your public key or your recipient's public key, but it no longer matters, because you have been tricked into sending the messages to them using their own public key. This is, theoretically, not any kind of new weakness in the system, as this is already perfectly possible to do.
This should be of limited effectiveness, as key exchange should only ever happen once, thus if you communicated with people before the police became interested, all future communications will remain secure (only communications with new people would be vulnerable). However, because most services allow you to add new people to existing conversations (thus requiring a new key exchange with the new person), it might be possible for the service provider to trick your system into adding a new person to the group (exchange public keys with them) without actually notifying you that this has occurred. Since the system is designed to keep all members up to date, your own system would then use your private key to decrypt messages, encrypt them with the police's public key, and send them to the police while the service provider prevents this activity from showing up on your system.
It's actually quite an elegant solution in theory (as identity verification is not a new vulnerability in the system), though it still has far too many problems to be workable in practice.
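A toy model of the key-directory trick described above may help (all names are hypothetical; real identity systems are far more involved):

```python
# Toy model of a provider-run key directory acting as the certificate
# authority. The point is that "lying" during key exchange needs no
# change to the encryption itself.

class KeyDirectory:
    def __init__(self):
        self.keys = {}          # username -> registered public key
        self.intercepts = {}    # username -> substitute key, per order

    def register(self, user, public_key):
        self.keys[user] = public_key

    def lookup(self, user):
        # Under an intercept order, hand out the substitute key. The
        # querying client has no way to tell the difference.
        return self.intercepts.get(user, self.keys[user])

directory = KeyDirectory()
directory.register("bob", "pk_bob")

# A chat that starts before the order uses the genuine key...
assert directory.lookup("bob") == "pk_bob"

# ...but once the order lands, every *new* key exchange is redirected.
# Exchanges that already happened are unaffected -- exactly the
# limitation noted above.
directory.intercepts["bob"] = "pk_police"
assert directory.lookup("bob") == "pk_police"
```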
"Functioning democracies"
I hope this means we can defer the conversation until we've determined (for an adequate period of time, like twenty years) that the democracy is functioning first.
The US and UK certainly don't qualify.
Re: Re: Unicorns
You have overlooked the big hole they are demanding, changing the keys used without notifying the user. That means if they want in on a conversation, their key is added, or the user keys are changed, without the users being notified; which implies that any way to view the keys in use has to be removed from encryption-enabled applications.
Security vs Trust
The mere existence of a capability for a service provider to breach its users' communications, leaving users to trust that it won't, would make the service insecure.
Re: Re: Re: Unicorns
You have overlooked the big hole they are demanding, changing the keys used without notifying the user.
"it might be possible for the service provider to trick your system into adding a new person to the group (exchange public keys with them) without actually notifying you that this has occurred."
Nope, included right there.
Re:
Chat with end-to-end encryption already relies on something like a certificate authority, so that if Alice wants to chat to Bob she can get his public key. Tampering with the certificate authority server would allow for Eve to masquerade as Bob. The proposal is to allow law enforcement to do such tampering, plus something like the following:
If the chat app has a configuration option letting Alice say she trusts Bob so much that he can automatically join any of her existing group chats, the app would need to be changed so that Bob can silently join an existing chat without notifying anyone already in it that someone new has joined. The app would also need to be changed so that Bob can join the chat multiple times without alerting any user that there appear to be multiple simultaneous instances of Bob.
Re: any functioning democracy
Funny thing, hyperbole aside; with the right attitude, humbleness and humor, apathy and hope aren't mutually exclusive either. *Through most of human history apathy has been rightly seen as a sign/symptom of mental illness; however, due to recent events, psychologists have been forced to re-evaluate this simplistic view and acknowledge the clear emerging consensus that the inverse is now undeniably true: at this juncture in history a chronic and acute absence of apathy is now clearly indicative of severe mental illness and probable social retardation. A chronic lack of apathy is increasingly becoming a severely debilitating condition, with the national financial consequences poised to skyrocket into the trillions in the next few decades. It is expected the DSM-VI shall officially declare the death of all meaning.
*facts contained in internet comments may in fact not be facts, but jokes that went over your head...
Re: Re: Unicorns
At which point they then "have the keys to also decrypt the conversation", which you started off denying.
Re: Re:
Exactly. Which means that they have to either insert a back door and/or force the service provider to have access that they have deliberately designed the system not to give them.
The group chat thing seems to be a waste of time to my mind too. I might be wrong, but I'd suspect any competent criminals are avoiding group chats anyway to avoid detection. They would certainly be avoiding such things once it's revealed that providers are being forced to redesign apps to do such things (and it absolutely would be revealed).
I might be giving the potential criminals too much credit, but this seems to be yet another example of something that would be much more useful for abuse than it would be in actually catching the "bad guys".
Re: Re: Re:
Which means that they have to either insert a back door and/or force the service provider to have access that they have deliberately designed the system not to give them.
In most cases, they already have this access. Most chat providers act as the certificate authority within their ecosystem (partly due to convenience, partly because no large certificate authorities are going to risk their core business by attempting to verify the identity of hundreds of millions of individuals). Since they are already the certificate authority, they already control the databases containing information on identification of individuals for key exchange, and can already change that database at will. There's no change in the system required, only in the perceived trustworthiness of the provider.
The group chat thing seems to be a waste of time to my mind too. I might be wrong, but I'd suspect any competent criminals are avoiding group chats anyway to avoid detection.
For most chat providers, there is no functional difference between group chats and individual chats; individual chats are just group chats that currently contain only two members.
As for how this is advantageous: in theory, key exchange would only need to occur once (at the beginning of the chat), which means that a certificate authority exploit (as discussed) would only be effective on chats which began after the police became interested. Any chats already going on would no longer need the certificate authority, and therefore remain secure. However, most chat systems not only allow people to be added to existing chats, but also update them on the chat history after they're added. That is, when you add someone to a chat, the encrypted historical contents are decoded using your private key and then encrypted using their public key and sent to them. Thus, this gives the ability to obtain information on existing chats, as opposed to only getting information on new chats.
This is where a new backdoor would need to be inserted in most systems, as addition of new members "shouldn't" be possible from the provider side.
But the basic idea of undermining the certificate authority would require no changes to the existing system; identity verification is still one of the biggest problems in encryption in general. Certificate authorities can compromise communications by changing the identity attached to a public key because changing the identity attached to a public key is quite literally their job. It's just that their job is generally to change it correctly on behalf of their users, rather than incorrectly on behalf of law enforcement. A sketch of that catch-up step follows, showing why history-sharing makes a silently added member so valuable.
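Here is the sketch, with stand-in crypto and hypothetical names (this is a toy illustration, not any provider's actual protocol):

```python
# Sketch of "catching up" a newly added member, as described above.
# The encrypt/decrypt functions are string stand-ins, not real crypto.

def encrypt(plaintext, public_key):
    return f"enc[{public_key}]:{plaintext}"

def decrypt(ciphertext, private_key):
    prefix = f"enc[{private_key}]:"
    assert ciphertext.startswith(prefix), "wrong key"
    return ciphertext[len(prefix):]

def catch_up_new_member(history, my_key, new_member_key):
    """Runs on an existing member's own device: decrypt the back-log
    and re-encrypt it to the newcomer's public key."""
    return [encrypt(decrypt(msg, my_key), new_member_key)
            for msg in history]

# In this toy, one string serves as both halves of Alice's key pair.
history = [encrypt("old message 1", "key_alice"),
           encrypt("old message 2", "key_alice")]

# If "key_police" was silently injected as a new member's key, Alice's
# own device just re-encrypted the entire history for the police.
print(catch_up_new_member(history, "key_alice", "key_police"))
```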
Re: Re: Re: Re:
Yes, and this forces services that deliberately don't do this to follow suit, reversing recent trends and placing their users at greater risk than with their current design.
You seem to be missing the point - whether or not what you describe is the norm, what GCHQ is proposing is to make it illegal to operate in any other way.
"There's no change in the system required, only in the perceived trustworthiness of the provider."
With SOME providers, yes. With others, it involves a complete redesign to deliberately make their service less secure. Are you following that yet?
"For most chat providers"
Again, you seem to be stating the correct facts, but missing the overall point. You keep qualifying your words with "most", meaning that you're aware that there are providers who do not fit into the description and will be destroyed / irrevocably changed by these rulings if they came to pass.
Even if they're a minority, the fact that you admit there are some providers who do not operate in the way GCHQ assumes confirms the point that what they are saying is a lie. You can correctly say that "most" physical doors require a metal key. But if you demand that police be supplied with a master metal key for every lock, you're going to cause some major problems for people who manufacture and use biometric and combination locks, even if statistically they're in the minority.
"Giving the potential criminals too much credit"
I find myself offended by the basic mistakes that willful criminals (that is, those who plan and intend to commit a crime) make, the sheer lack of track-covering on their part.
It may also be that there is a lot of crime and the police mostly pursue low-hanging fruit, that is, crime that is easy to catch (and committed by less aggressive suspects), when they're not forced to act by the high publicity of an incident.
That said, I've pointed out already they don't use their current tools right (such as their $2 field drug test kits that test positive for glazed sugar and cotton candy) and our law enforcement needs to learn how to do its job professionally and competently before we yield to them even more of our privacy.
Re: "Giving the potential criminals too much credit"
That said, I've pointed out already they don't use their current tools right (such as their $2 field drug test kits that test positive for glazed sugar and cotton candy) and our law enforcement needs to learn how to do its job professionally and competently before we yield to them even more of our privacy.
I'd say that that would be the starting point to giving them more tools, a demonstration that they can act responsibly with what they already have, but even then I'd still want a very strong showing that what they are asking for was not only necessary but the least intrusive option reasonably available, along with an extensive weighing of how it can be abused and what steps would be taken to mitigate/eliminate/punish abuse.
Do all that and then I'd consider allowing them more tools, but given they can't even get past the first step I'm not holding my breath.
Re: Re: Re:
I might be giving the potential criminals too much credit, but this seems to be yet another example of something that would be much more useful for abuse than it would be in actually catching the "bad guys".
I've long been of the opinion that the only criminals stupid enough to be caught by things like this are likely too stupid to be of any real threat, making the gains/cost equation grossly disproportionate.
In exchange for snagging the low-hanging fruit of the exceptionally stupid, everyone is made less safe and secure, and yet agencies like the GCHQ see that as a good trade, because hey, it's not like their security is being compromised.
Re: "Giving the potential criminals too much credit"
That's absolutely my take. Properly organised criminals often take a long time to get caught, specifically because they avoid making basic mistakes. Sometimes they just get caught because they make some basic mistakes in moments of weakness, but the real bad guys often don't get caught for decades.
That's why it's doubly offensive that they wish to weaken or destroy protections for everybody for a chance to catch criminals today. They might catch a few morons who decide to use known compromised tools, but in reality all they'll do is give more access to the guys they never catch, at the price of everybody's rights.
Re: 'We promise we totally won't abuse THESE tools.'
Anyone who objects to this is basically just objecting to law enforcement in general at this point, and that's not a rational position to take.
Re: Re: 'We promise we totally won't abuse THESE tools.'
In principle, yes. In practical terms, not really.
"And it doesn't break the encryption"
You're missing the point - that's exactly what it will do.
It won't necessarily break the kinds of encryption where two end users communicate with keys held by the provider. It will, however, completely break the types of encryption that are more complex or that have been specifically designed so that the provider itself does not have access mid-communication. It will prevent those types of encryption from being developed (well, except by the "bad guys", of course), and will likely hinder the patching of issues found in existing encryption if the best fixes require removing this ability.
They are literally asking that a man-in-the-middle attack - one thing that encryption is specifically designed to prevent - be allowed, and part of the reasoning is that they can do it with technology invented before modern encryption existed. To reuse an analogy I've used elsewhere, that's like demanding biometric locks still be openable with a metal master key because that's how other locks work, and then complaining when people explain why this is not a good idea.
They're still asking for back doors and for encryption to be made insecure in order that they have a slightly easier job of spying on people, they've just asked a little more nicely this time.
"Anyone who objects to this is basically just objecting to law enforcement in general at this point"
You mean "understands encryption and they ways it will be compromised by legal restrictions such as the ones suggested".
Re: Re: 'We promise we totally won't abuse THESE tools.'
Not sure I see what the big deal is here. As the article notes, this is really no different than a traditional wiretap,
Uh, it's very different in that it adds a direct vulnerability into the encryption and assumes that won't get exploited.
And it doesn't break the encryption
Yes, that's literally what it does. It is adding a way to get into an encrypted conversation, which is literally breaking encryption.
Traditional wiretaps
We may be used to traditional wiretaps at this point, but they were a pretty major deal when law enforcement wanted to start listening in on private conversations (or worse, on business conversations).
We may be resigned to the feds wiretapping our phones, but that should not be confused with the idea that we consented to them wiretapping our phones.
Martin Luther King Jr. and Philip K. Dick certainly didn't consent.