The Big Question: When Did The NSA Know About Heartbleed?
from the inquiring-minds... dept
It's not too surprising that one of the first questions many people have been asking about the Heartbleed vulnerability in OpenSSL is whether or not it was a backdoor placed there by intelligence agencies (or other malicious parties). And, even if that wasn't the case, a separate question is whether or not intelligence agencies found the bug earlier and have been exploiting it. So far, the evidence is inconclusive at best -- and part of the problem is that, in many cases, it would be impossible to go back and figure it out. The guy who introduced the flaw, Robin Seggelmann, seems rather embarrassed about the whole thing, but insists it was an honest mistake:

    Mr Seggelmann, of Munster in Germany, said the bug which introduced the flaw was "unfortunately" missed by him and a reviewer when it was introduced into the open source OpenSSL encryption protocol over two years ago.

    "I was working on improving OpenSSL and submitted numerous bug fixes and added new features," he said.

    "In one of the new features, unfortunately, I missed validating a variable containing a length."

    After he submitted the code, a reviewer "apparently also didn't notice the missing validation", Mr Seggelmann said, "so the error made its way from the development branch into the released version." Logs show that reviewer was Dr Stephen Henson.

    Mr Seggelmann said the error he introduced was "quite trivial", but acknowledged that its impact was "severe".

Later in that same interview, he insists he has no association with intelligence agencies, and also notes that it is "entirely possible" that intelligence agencies had discovered the bug and had made use of it.
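For the technically inclined, the class of mistake he describes -- trusting an unvalidated length field -- looks roughly like this. This is a simplified sketch of the vulnerable heartbeat handling, using OpenSSL's internal helpers (n2s() reads a 16-bit big-endian value), not the verbatim source:

    /* Simplified sketch of the vulnerable pattern (not the verbatim
     * OpenSSL code): the handler trusts the length field supplied by
     * the peer and never checks it against what actually arrived. */
    unsigned char *p = &s->s3->rrec.data[0];  /* attacker-supplied record */
    unsigned char *buffer, *bp;
    unsigned int payload;

    p++;              /* skip the heartbeat message type byte             */
    n2s(p, payload);  /* read the peer's CLAIMED payload length (0-65535) */

    buffer = OPENSSL_malloc(1 + 2 + payload + 16);
    bp = buffer;
    /* ... response type and length fields written here ... */
    memcpy(bp, p, payload);  /* BUG: copies 'payload' bytes even if the
                              * record only carried a handful, echoing
                              * back up to ~64KB of adjacent memory    */

The missing step was a single comparison of the claimed payload length against the record's actual length.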
Another oddity in all of this is that, even though the flaw itself was introduced two years ago, two separate researchers appear to have discovered it on the exact same day. Vocativ, which has a great behind-the-scenes story on the discovery by Codenomicon, mentions the following in passing:
    Unbeknownst to Chartier, a little-known security researcher at Google, Neel Mehta, had discovered and reported the OpenSSL bug on the same day. Considering the bug had actually existed since March 2012, the odds of the two research teams, working independently, finding and reporting the bug at the same time was highly surprising.

Highly surprising. But not necessarily indicative of anything. It could be a crazy coincidence. Kim Zetter, over at Wired, explores the "did the NSA know about Heartbleed" angle, and points out accurately that while the bug is catastrophic in many ways, what it's not good for is targeting specific accounts. The whole issue with Heartbleed is that it "bleeds" chunks of memory from the server. It's effectively a giant crapshoot as to what you get when you exploit it. Yes, it bleeds all sorts of things -- including usernames, passwords, private keys, credit card numbers and the like -- but you never quite know what you'll get, which makes it potentially less useful for intelligence agencies. As that Wired article notes, at best, using the Heartbleed exploit would be "very inefficient" for the NSA.
But that doesn't mean there aren't reasons to be fairly concerned. Peter Eckersley, over at EFF, has tracked down at least one potentially scary example that may very well be someone exploiting Heartbleed back in November of last year. It's not definitive, but it is worth exploring further:

    The second log seems much more troubling. We have spoken to Ars Technica's second source, Terrence Koeman, who reports finding some inbound packets, immediately following the setup and termination of a normal handshake, containing another Client Hello message followed by the TCP payload bytes 18 03 02 00 03 01 40 00 in ingress packet logs from November 2013. These bytes are a TLS Heartbeat with contradictory length fields, and are the same as those in the widely circulated proof-of-concept exploit.
    Koeman's logs had been stored on magnetic tape in a vault. The source IP addresses for the attack were 193.104.110.12 and 193.104.110.20. Interestingly, those two IP addresses appear to be part of a larger botnet that has been systematically attempting to record most or all of the conversations on Freenode and a number of other IRC networks. This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers.

EFF is asking people to try to replicate Koeman's findings, while also looking for any other possible evidence of Heartbleed exploits being used in the wild. As it stands now, there doesn't seem to be any conclusive evidence that it was used -- but that doesn't mean it wasn't being used. After all, it's been known for a while that the NSA has a specific program designed to subvert SSL, so there's a decent chance that someone at the NSA discovered this bug earlier and that the agency, rather than doing its job and helping to protect the security of the internet, chose to use it to its own advantage first.
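For what it's worth, those eight bytes EFF flagged decode quite neatly. This annotation is ours, read against the TLS record format and RFC 6520, not something from the EFF post:

    unsigned char probe[] = {
        0x18,        /* TLS record type 24: Heartbeat                */
        0x03, 0x02,  /* TLS version 1.1                              */
        0x00, 0x03,  /* record length: only 3 bytes actually follow  */
        0x01,        /* heartbeat message type: heartbeat_request    */
        0x40, 0x00   /* claimed payload length: 0x4000 = 16384 bytes */
    };

That's the contradiction: the record carries 3 bytes, but the heartbeat inside it claims a 16KB payload, so an unpatched server would answer with roughly 16KB of whatever sat in its memory.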
Filed Under: heartbleed, nsa, privacy, security, surveillance
Companies: codenomicon
Reader Comments
Heartbleed approximates Box of Chocolates
Not as tasty, though.
Poor guy
I bet Robin Seggelmann is not feeling too great the last few days. Hopefully he's not directly subjected to some of the ridiculous vitriol I've seen on some sites regarding this bug.
The fact is, FOSS is a community effort. The concept isn't that every individual developer writes bug-free code - but that enough people are reviewing it, or constantly scrutinizing it that the bugs will be found and eradicated quicker than they do in closed source software.
Therefore, everyone who develops and/or uses OpenSSL is partially to blame here. Security-related software is only as good as the weakest link, and it's the job of all involved to make sure that those links are located and strengthened.
I've read some pretty damning stuff about OpenSSL's development practices in the last couple days - and hopefully they've taken some of this to heart, and will be reflecting on how this occurred, and how it could have been prevented. This is how you turn mistakes into opportunities - opportunities to prevent such things from happening again.
So, I just wanted to let Seggelmann know (I'm sure he'll never read this comment) - I feel ya bud, this is the shits. I've been there, and seen my "handiwork" destroy data, or cause failures. It's a shitty feeling, but it's part of life, and it will pass. Hang in there buddy.
On the other hand, if you did this knowingly, burn in hell ;)
The answer? Over a quarter-century ago. In 1988, the Morris Worm brought the Internet to its knees, taking down about 10% of all existing servers at the time. It got in through a buffer exploit in a piece of system software written in C.
That should have put the programming community on notice. The C language should have been dead by 1990, because this class of security hole (buffer exploits) is inherent in the design of the language and can't be fixed. Some people say "you just have to be careful and get it right," but to err is human, and it's an easy mistake to make. This means that the language is at odds with reality itself. Something has to give, and it's not going to be human nature.
They say those who don't learn from history are doomed to repeat it. Well, here we have it again, a major buffer exploit in a piece of software written in C, affecting between 10% (there's that figure again) and 66% of all servers on the Internet, depending on which estimate you listen to.
We know better than this. We have known better than this since before the Morris Worm ever happened, and indeed for longer than most people reading this post have been alive. I quote from Tony Hoare, one of the great pioneers in computer science, talking in 1980 about work he did in 1960:

    "A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. ... I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
Maybe now that it's happened again we'll finally wise up and give this toxic language its long-overdue funeral?
Re: Re: Re: Re:
It causes the code to read memory areas that it should not. Zeroing the requested area would cause a seg-fault, so I guess that would eradicate Heartbleed, but at the expense of server stability.
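For comparison, the fix that actually shipped in OpenSSL 1.0.1g takes a different route: rather than zeroing anything, it silently discards any heartbeat whose claimed length doesn't fit inside the record that carried it. A simplified sketch, not the verbatim patch:

    /* Simplified sketch of the 1.0.1g bounds check: discard heartbeats
     * whose claimed payload length exceeds what the record carried. */
    unsigned char *p = &s->s3->rrec.data[0];
    unsigned short hbtype;
    unsigned int payload;

    if (1 + 2 + 16 > s->s3->rrec.length)
        return 0;          /* too short to be a valid heartbeat      */
    hbtype = *p++;
    n2s(p, payload);       /* read the 16-bit claimed payload length */
    if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0;          /* silently discard, per RFC 6520         */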
Re:
The problem is not the language. The problem is programming practices. I've seen the same mistake made in Python and Java and C++ and Ruby and Perl and Javascript and Fortran. I've also seen other well-known mistakes made across all those languages, sometimes because I was the one making them.
Switching languages is not a cure-all for programming problems, although advocates of the flavor-of-the-month often claim it is. Careful coding and peer review -- LOTS of peer review -- are the best we can do. This, of course, is why open source is inherently superior to closed source, which cannot be independently peer reviewed. But that only works if people actually do it, which, in this case, not enough people did.
Given the criticality of OpenSSL to so many operations, this would be a good time for a lot of the big players to pony up $50K or a developer's time for six months in a collaborative effort to audit all the code and identify the other bugs that are no doubt lurking. (Note that this is probably less than they've spent this week dealing with the fallout.)
Re: Re:
As you have said, no amount of 'programming language change' can stop human errors.
Re: Re: Re:
Yes, but it can mitigate the damage they do. Tony Hoare knew how to make this sort of thing impossible waaaay back in 1960: design the language so that if someone tries to go outside the bounds of an array, the program crashes instead.
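In C terms, one can at least approximate the discipline Hoare describes by funneling every access through a checker that crashes loudly instead of silently reading out of bounds. A minimal illustrative sketch -- the type and function names here are invented:

    #include <stdio.h>
    #include <stdlib.h>

    /* A Hoare-style mandatory bounds check retrofitted onto C: any
     * out-of-range access aborts the program rather than quietly
     * reading memory it shouldn't. */
    typedef struct { const unsigned char *data; size_t len; } checked_buf;

    static unsigned char buf_get(checked_buf b, size_t i)
    {
        if (i >= b.len) {
            fprintf(stderr, "bounds violation: index %zu >= length %zu\n", i, b.len);
            abort();  /* fail fast: a crash, not an information leak */
        }
        return b.data[i];
    }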
They have no one to blame but themselves.
If you are a C programmer, you learn, about five years into your career, to never, /never/, NEVER forget to check the bounds.
strcmp, burn in hell; strncmp and the like rule. (A tiny sketch of the difference follows this comment.)
I place my bets on malice. He was made an offer he could not refuse.
P.S. The guy is now toast. He will have a really, REALLY hard time finding a new job now.
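To make the habit described above concrete, here is an illustrative sketch (the function name is invented): buffers off the network may not be NUL-terminated, so an unbounded strcmp() could read past their ends, while strncmp() caps the read at an explicit bound.

    #include <string.h>

    /* Bounded comparison: never reads more than 'n' bytes of either
     * buffer, even if neither one is NUL-terminated. */
    int tokens_equal(const char *a, const char *b, size_t n)
    {
        return strncmp(a, b, n) == 0;
    }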
Re:
True. However, it's also a very common stupid mistake. I've seen a LOT of both commercial and open source code over the decades, including mainstream, trusted commercial software from major companies, and I've seen this problem somewhere in almost every codebase. Some are worse than others.
Given that, malice would be the last thing that I suspect. Carelessness would be the first.
The denial means nothing
If it was added by mistake, the author will deny doing it on purpose.
If it was added on purpose, the author will deny doing it on purpose.
The author of the code denying adding it on purpose gives us zero information. The denial would be exactly the same in either case.
Compromised development environments?
Assuming someone had gained access to your development system, would you be able to tell whether a bug was truly your own, or injected by someone closely mimicking your style?
Given the vast effort being made to compromise the foundations of security, this question is more relevant now than ever.
Re: Compromised development environments?
If it was maliciously added by someone else, the author will say "that's not my code!"
So I guess I was wrong, the denial does give us some new information. It confirms the code's authorship.
Re: The denial means nothing
If it was added by mistake, the author will deny doing it on purpose.
If it was added on purpose, the author will either say "no comment" or deny doing it on purpose.
If the author denies doing it on purpose, it makes it slightly more probable that the author did not do it on purpose.
What you may not know is that during an interview today, the spokesperson, with a bit of hesitation, said that they "had to shut down because they cannot be sure if any illegal organization (slight pause) and intelligence agency are able to get to the private information of Canadian citizens..."
I am just unsure whether to laugh or cry when I hear that sentence.
Reading Code
I do not know how much of the above might apply to this situation, but it might.
Re: Re: Reading Code
"The fix is to change the reasoning process of the developer so that secure practices are like muscle memory."
Spot on. In the old days, programmers used to speak of using C "idioms" -- common constructs that were memorized to perform common tasks. Using idioms allowed good programming practices to become so habitual that they felt instinctual.
I've noticed a trend in the newer generations of programmers. The ones who write in C or C++ tend to be more careless about their use of the language. I believe that it's because they cut their teeth on languages that hold the programmer's hand more (Java, etc.) and never developed the basic, good, paranoid practices that are essential when using the more powerful languages that let you shoot yourself in the foot, such as C/C++.
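One example of the kind of idiom being described -- a bounded copy that cannot overflow and always NUL-terminates. This is an illustrative sketch (the function name is invented), not something from the comment above:

    #include <stdio.h>

    /* Classic defensive C idiom: let snprintf() enforce both the bound
     * and the terminator, truncating rather than overflowing. */
    void safe_copy(char *dst, size_t dst_size, const char *src)
    {
        snprintf(dst, dst_size, "%s", src);
    }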
Seggelmann is the author of the heartbeat RFC
http://tools.ietf.org/html/rfc6520
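For reference, the message layout that RFC defines looks like this -- paraphrased from the RFC's TLS presentation language as an annotated C-style sketch, not a compilable definition:

    struct HeartbeatMessage {
        uint8_t  type;            /* heartbeat_request(1) / heartbeat_response(2) */
        uint16_t payload_length;  /* length of payload, as claimed by the sender  */
        /* opaque payload[payload_length];  echoed back verbatim by the peer      */
        /* opaque padding[padding_length];  at least 16 bytes, ignored            */
    };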
Re:
We discuss that possibility in the post. It seems unlikely.
Re:
I'm almost positive that the NSA knew about it, but what's far more interesting is that the coder in this case is saying all the right things: that he screwed up, and that, whilst the fix was trivial, the consequences were not.
Re: Re:
1) OpenSSL is an open source project with a public commit history.
2) The NSA employs people with the skill set to monitor certain important development projects, looking for potential vulnerabilities.
3) The NSA is not interested in disclosing vulnerabilities.
Furthermore
Sure, they say they had nothing to do with it -- we can't prove that they did, but it sure is interesting that the NSA is assumed to be associated with it, and that we expect them to be.
After all, there isn't anything they aren't capable of -- that much we all know.
It's just not provable yet.
On the other hand, how come it took so damned long to find it, anyway? From what I've read, it's been around for 2 freaking years -- you'd think someone would have caught this long before now and corrected it.
Which makes it a cascading mistake with enormous consequences. I don't feel sorry for the programmer. He is supposed to do his job correctly and, along with the others he's working with, check the code before it's released. Nobody caught it, and now we're paying the price for one 'mistake'.
Apologies don't cut it.
Remember, you are paying these people to protect you.
If they knew about a security hole as bad as this one, and decided to make use of it instead of warning you all, it's criminal negligence at the very least, possibly bordering on direct treason.
Re:
At this point, I think it's very clear that we are not, in fact, paying them to protect us. We're paying them to spy on us (and everybody else).
Clearly
I can't wait to see the new Techdirt logo, the one all covered in tin foil. It's getting weird in here!
Re: Clearly
a) traitorous scum; or
b) swivel-eyed loons who know not what they are talking about.
https://www.schneier.com/blog/archives/2014/04/heartbleed.html