Last fall, we noted that the popular disk encryption software TrueCrypt was undergoing a security audit, inspired by the Snowden revelations. At issue: TrueCrypt is open source and widely used and promoted (hell, Snowden himself apparently taught people how to use it), but no one really knew who was behind it -- raising all sorts of questions. A little over a month ago, we noted that the first phase of the audit didn't find any backdoors, but did note a few (mostly) minor vulnerabilities.
However, a little while ago, TrueCrypt's SourceForge page suddenly announced that "WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues" and furthermore: "The development of TrueCrypt was ended in 5/2014 after Microsoft terminated support of Windows XP."
While some initially questioned if this was a hoax, others quickly noted that a new version of the program was signed with the official TrueCrypt private key -- meaning that it's either legit, or TrueCrypt's private key has been compromised (which would obviously present another serious issue). If you happen to use TrueCrypt, you should be very, very careful right now.
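For those trying to work out whether a given download is the real thing, the practical meaning of "signed with the official key" is that you can check the binary against the project's published public key before trusting it. Here's a minimal sketch of that check using the third-party python-gnupg wrapper; the filenames are hypothetical placeholders, and it assumes the project's public key has already been imported into your keyring and that a detached signature ships alongside the download:

```python
# Minimal sketch: verify a downloaded binary against a detached PGP signature.
# Assumes the signer's public key is already in the local GnuPG keyring and
# that "truecrypt-setup.exe.sig" is a hypothetical detached signature file.
import gnupg

gpg = gnupg.GPG()

with open("truecrypt-setup.exe.sig", "rb") as sig_file:
    verified = gpg.verify_file(sig_file, "truecrypt-setup.exe")

if verified and verified.valid:
    print("Good signature from:", verified.username, verified.key_id)
else:
    print("BAD or missing signature -- do not trust this binary.")
```

Of course, a valid signature only tells you the file was signed by whoever holds the private key, which is exactly the question mark hanging over TrueCrypt right now.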
Snowden's revelations that key elements of the Internet have been subverted by the NSA and its allies have led people to realize that in the future we need a more thoroughgoing framework for security that assumes surveillance, and takes steps in advance to counter it. One interesting manifestation of this approach is a new "Request For Comments" document from the Internet Engineering Task Force (IETF), RFC 7258, entitled "Pervasive Monitoring Is an Attack." Here's the basic idea:
Pervasive Monitoring (PM) is widespread (and often covert) surveillance through intrusive gathering of protocol artefacts, including application content, or protocol metadata such as headers. Active or passive wiretaps and traffic analysis, (e.g., correlation, timing or measuring packet sizes), or subverting the cryptographic keys used to secure protocols can also be used as part of pervasive monitoring. PM is distinguished by being indiscriminate and very large scale, rather than by introducing new types of technical compromise.
The IETF community's technical assessment is that PM is an attack on the privacy of Internet users and organisations. The IETF community has expressed strong agreement that PM is an attack that needs to be mitigated where possible, via the design of protocols that make PM significantly more expensive or infeasible. Pervasive monitoring was discussed at the technical plenary of the November 2013 IETF meeting [IETF88Plenary] and then through extensive exchanges on IETF mailing lists. This document records the IETF community's consensus and establishes the technical nature of PM.
What's key is the idea that pervasive monitoring is an attack that needs to be mitigated as a matter of course; here's what that means:
Those developing IETF specifications need to be able to describe how they have considered PM, and, if the attack is relevant to the work to be published, be able to justify related design decisions. This does not mean a new "pervasive monitoring considerations" section is needed in IETF documentation. It means that, if asked, there needs to be a good answer to the question "Is pervasive monitoring relevant to this work and if so, how has it been considered?"
In particular, architectural decisions, including which existing technology is reused, may significantly impact the vulnerability of a protocol to PM. Those developing IETF specifications therefore need to consider mitigating PM when making architectural decisions.
As that shows, this is a high-level technical specification; it's not about how to mitigate pervasive monitoring, but about the fact that Internet engineers should always think about how to mitigate such surveillance when they are drawing up IETF specifications. It's great that the IETF is starting to work along these lines, even if it is a rather melancholy acknowledgement that we now live in a world where the default assumption has to be that someone, somewhere, is trying to monitor on a massive scale what people are doing.
May 6th is the official Day Against DRM. I'm a bit late writing anything about it, but I wanted to highlight this great post by Parker Higgins about an aspect of DRM that is rarely discussed: how DRM makes us less safe. We've talked a lot lately about how the NSA and its surveillance efforts have made us all less safe, but that's also true for DRM.
DRM on its own is bad, but DRM backed by the force of law is even worse. Legitimate, useful, and otherwise lawful speech falls by the wayside in the name of enforcing DRM—and one area hit the hardest is security research.
Section 1201 of the Digital Millennium Copyright Act (DMCA) is the U.S. law that prohibits circumventing "technical measures," even if the purpose of that circumvention is otherwise lawful. The law contains exceptions for encryption research and security testing, but the exceptions are narrow and don’t help researchers and testers in most real-world circumstances. It's risky and expensive to find the limits of those safe harbors.
As a result, we've seen chilling effects on research about media and devices that contain DRM. Over the years, we've collected dozens of examples of the DMCA chilling free expression and scientific research. That makes the community less likely to identify and fix threats to our infrastructure and devices before they can be exploited.
That post also reminds us of Cory Doctorow's powerful speech about how DRM is the first battle in the war on general computing. The point there is that DRM is based on the faulty belief that we can take a key aspect of computing out of computing, and that inherently weakens security as well. Part of this is the nature of DRM as a form of weak security: its intended purpose is to stop you from doing something you might want to do. But that only serves to open up vulnerabilities (sometimes lots of them), by forcing your computer to (1) do things in secret (otherwise it wouldn't be able to stop you) and (2) try to stop a computer from doing basic computing. And that combination makes it quite dangerous -- as we've seen a few times in the past.
DRM serves a business purpose for the companies who insist on it, but it does nothing valuable for the end user and, worse, it makes their computers less safe.
The Computer Fraud and Abuse Act is so severely flawed that people are extremely hesitant to report security holes in websites, especially after witnessing what happened to Weev (Andrew Auernheimer), who went to jail for revealing a flaw in AT&T's site that exposed user info when values in the URL were incremented.
"I remember a person was recently arrested for finding this same flaw in a website and told (at&t/apple??) about it. He was arrested and jailed if I remember right. This is the type of chilling effects that come when people view techies as hackers and are arrested for pointing out flaws.
By changing the number at the end you can harvest personal info.
I won't report the flaw, I could go to jail."
Is that overdramatic? Doubtful. People have reported security flaws to companies only to have these entities press charges, file lawsuits or otherwise tell them to shut up. Weev's only out because the government's case was brought in the wrong venue. The CFAA, which has been used to punish many helpful people, is still intact and as awful as ever.
As the (also anonymous) redditor points out, he or she has tried to contact the company but has found no avenue to address this security hole which exposes names, addresses and email addresses of customers sending in claims for a free year of Netflix streaming that came bundled with their purchase of an LG Smart TV. Incrementing the digits at the end of the URL brings up other claims, some with images of receipts attached. In addition, anyone can upload support documents to these claims.
Here's a screenshot of the hole in question:
As the original poster points out, with a little coding, someone could put together a database of addresses that most likely house a brand new LG Smart TV. And this may not just be limited to LG. ACB Incentives is the company behind this promotion, and it handles the same sort of online rebate forms for a variety of companies. These rebate submission sites all branch off acbincentives.com, which could mean it's just a matter of figuring out how each one handles submitted claims, URL-wise.
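The underlying mistake here is a classic insecure direct object reference: the claim ID in the URL is a sequential number, and the server never checks whether the person asking for a record is the one who submitted it. To make that concrete, here's a minimal sketch of the two usual remedies, an ownership check and unguessable identifiers. It uses Flask purely for illustration; the route, session handling, and data model are hypothetical stand-ins, not anything from ACB's actual system:

```python
# Minimal sketch of guarding a rebate-claim lookup against ID enumeration.
# Flask is used for illustration; the route, session handling, and datastore
# are hypothetical stand-ins.
import secrets
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

# Pretend datastore: claims keyed by a random, unguessable token rather than
# a sequential integer, and tagged with the account that submitted them.
CLAIMS = {}

def create_claim(owner_id, details):
    token = secrets.token_urlsafe(16)   # ~128 bits of randomness, not enumerable
    CLAIMS[token] = {"owner": owner_id, "details": details}
    return token

@app.route("/claims/<token>")
def show_claim(token):
    claim = CLAIMS.get(token)
    if claim is None:
        abort(404)
    # Ownership check: even a valid token is only served to the submitter.
    if claim["owner"] != session.get("user_id"):
        abort(403)
    return claim["details"]
```

Either measure alone would have stopped the casual "change the number at the end of the URL" browsing described above; together they are the standard fix.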
Now, I've contacted the company to let them know. Amanda Phelps at the Memphis branch says she's bringing it to the attention of programming. I also let her know that it may affect other rebate pages but that I can't confirm that. We'll see how quickly this is closed*, but all in all, the people at ACB seemed to be concerned and helpful, rather than suspicious.
*Very quickly, it appears. See note at top of post.
But the underlying point remains. Many people who discover these flaws aren't criminals and aren't looking to expose the data of thousands of unsuspecting users. They're simply concerned that this is happening and often incredulous that major companies would be this careless with customers' data. That the kneejerk reaction has often been to shoot the messenger definitely gives those discovering these holes second thoughts as to reporting them, a hesitation that could allow someone with more nefarious aims to exploit the exposed data. The law needs to change, and so does the attitude that anyone discovering a flaw must be some sort of evil hacker -- or that the entity must do whatever it takes, even if it means throwing the CFAA at someone, just to prevent a little embarrassment.
A recent article in the NY Times talked about how the US State Department is behind a project to build up mesh networks that can be used in countries with authoritarian governments, helping citizens of those places access an internet that is often greatly limited. This isn't actually new. In fact, three years ago we wrote about another NY Times article about the State Department funding these kinds of projects. Nor is the specific project in the latest NYT article new. A few months back, we had covered an important milestone with Commotion, the mesh networking project coming out of New America Foundation's Open Technology Institute (OTI).
But the latest NYT article is especially odd, not because it repeats old news, but because it tries to build a narrative that Commotion and other such projects funded by the State Department are somehow awkward because they could be used to fight back against government surveillance, such as those of the NSA. The problem is that the issues are unrelated, and nothing in mesh networking deals with stopping surveillance. As Ed Felten notes, the Times reporters appear to be confusing things greatly:
There’s only one problem: mesh networks don’t do much to protect you from surveillance. They’re useful, but not for that purpose.
A mesh network is constructed from a bunch of nodes that connect to each other opportunistically and figure out how to forward packets of data among themselves. This is in contrast to the hub-and-spoke model common on most networks.
The big advantage of mesh networks is availability: set up nodes wherever you can, and they’ll find other nearby nodes and self-organize to route data. It’s not always the most efficient way to move data, but it is resilient and can provide working connectivity in difficult places and conditions. This alone makes mesh networks worth pursuing.
But what mesh networks don’t do is protect your privacy. As soon as an adversary connects to your network, or your network links up to the Internet, you’re dealing with the same security and privacy problems you would have had with an ordinary connection.
The whole point of Commotion and other mesh networks is availability, not privacy. The target use is for places where governments are seeking to shut down internet access, not to surveil it. Yes, if you could set up a mesh network that routed around government surveillance points, you could circumvent some level of surveillance, but the networks themselves are not designed to be surveillance-proof. In fact, back in January when we wrote about Commotion, we pointed out directly that the folks behind the project themselves are pretty explicit that Commotion is not about hiding your identity or preventing monitoring of internet traffic.
Could a mesh network also be combined with stronger privacy and security protections? Yes, but that's different than just assuming that mesh networking takes on that problem by itself. It doesn't -- and it's misleading for the NYT to suggest otherwise.
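Felten's availability point is easy to make concrete. Here's a toy sketch of the self-organizing part: each node only knows which neighbors it can currently reach, and a route is discovered by breadth-first search over those links. This is not Commotion's actual code, just an illustration of why a mesh keeps working when any single hub disappears, and of why none of it hides the traffic itself:

```python
# Toy illustration of mesh routing: each node knows only its reachable
# neighbors, and a path is found by breadth-first search over those links.
# Not Commotion code -- just a sketch of why there is no single point of
# failure (and why none of this conceals the traffic being routed).
from collections import deque

# Opportunistic links between nearby nodes (undirected).
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def find_route(source, dest):
    """Return one hop-by-hop path from source to dest, or None."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path
        for neighbor in links.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None

print(find_route("A", "E"))                        # e.g. ['A', 'B', 'D', 'E']
links["B"].discard("D"); links["D"].discard("B")   # a link drops out...
print(find_route("A", "E"))                        # ...and the mesh reroutes: ['A', 'C', 'D', 'E']
```

Notice that nothing in there encrypts or anonymizes anything; an adversary sitting on any of those nodes, or on the gateway to the wider internet, sees the packets just as it would on an ordinary connection.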
The Heartbleed computer security bug is many things: a catastrophic tech failure, an open invitation to criminal hackers and yet another reason to upgrade our passwords on dozens of websites. But more than anything else, Heartbleed reveals our neglect of Internet security.
The United States spends more than $50 billion a year on spying and intelligence, while the folks who build important defense software — in this case a program called OpenSSL that ensures that your connection to a website is encrypted — are four core programmers, only one of whom calls it a full-time job.
In a typical year, the foundation that supports OpenSSL receives just $2,000 in donations. The programmers have to rely on consulting gigs to pay for their work. "There should be at least a half dozen full time OpenSSL team members, not just one, able to concentrate on the care and feeding of OpenSSL without having to hustle commercial work," says Steve Marquess, who raises money for the project.
Is it any wonder that this Heartbleed bug slipped through the cracks?
Dan Kaminsky, a security researcher who saved the Internet from a similarly fundamental flaw back in 2008, says that Heartbleed shows that it's time to get "serious about figuring out what software has become Critical Infrastructure to the global economy, and dedicating genuine resources to supporting that code."
The Obama Administration has said it is doing just that with its national cybersecurity initiative, which establishes guidelines for strengthening the defense of our technological infrastructure — but it does not provide funding for the implementation of those guidelines.
Instead, the National Security Agency, which has responsibility to protect U.S. infrastructure, has worked to weaken encryption standards. And so private websites — such as Facebook and Google, which were affected by Heartbleed — often use open-source tools such as OpenSSL, where the code is publicly available and can be verified to be free of NSA backdoors.
The federal government spent at least $65 billion between 2006 and 2012 to secure its own networks, according to a February report from the Senate Homeland Security and Government Affairs Committee. And many critical parts of the private sector — such as nuclear reactors and banking — follow sector-specific cybersecurity regulations.
But private industry has also failed to fund its critical tools. As cryptographer Matthew Green says, "Maybe in the midst of patching their servers, some of the big companies that use OpenSSL will think of tossing them some real no-strings-attached funding so they can keep doing their job."
In the meantime, the rest of us are left with the unfortunate job of changing all our passwords, which may have been stolen from websites that were using the broken encryption standard. It's unclear whether the bug was exploited by criminals or intelligence agencies. (The NSA says it didn't know about it.)
It's worth noting, however, that the risk of your passwords being stolen via Heartbleed is still lower than the risk of your passwords being taken from a website that failed to protect them properly. Criminals have so many ways to obtain your information these days — by sending you a fake email from your bank or hacking into a retailer's unguarded database — that it's unclear how many would have gone through the trouble of exploiting this encryption flaw.
The problem is that if your passwords were hacked by the Heartbleed bug, the hack would leave no trace. And so, unfortunately, it's still a good idea to assume that your passwords might have been stolen.
So, you need to change them. If you're like me, you have way too many passwords. So I suggest starting with the most important ones — your email passwords. Anyone who gains control of your email can click "forgot password" on your other accounts and get a new password emailed to them. As a result, email passwords are the key to the rest of your accounts. After email, I'd suggest changing banking and social media account passwords.
But before you change your passwords, you need to check whether the website has patched the flaw. You can test whether a site has been patched by typing the URL here. (Look for the green highlighted "Now Safe" result.)
If the site has been patched, then change your password. If the site has not been patched, wait until it has been patched before you change your password.
A reminder about how to make passwords: forget all the password advice you've been given about using symbols and not writing down your passwords. There are only two things that matter: don't reuse passwords across websites, and the longer the password, the better.
I suggest using password management software, such as 1Password or LastPass, to generate the vast majority of your passwords. And for email, banking and the password to your password manager, I suggest a method of picking random words from a long wordlist, called Diceware. If that seems too hard, just make your password super long — at least 30 or 40 characters, if possible.
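If you're curious what the Diceware idea looks like in practice, here's a minimal sketch in Python using the standard library's secrets module for the randomness. The wordlist path is a placeholder; the real method uses the published Diceware list (7,776 words) and physical dice, but the principle -- several common words chosen uniformly at random -- is the same:

```python
# Minimal sketch of a Diceware-style passphrase: a handful of words chosen
# uniformly at random from a large wordlist. Uses the cryptographically
# secure `secrets` module rather than `random`. The wordlist path is a
# placeholder -- substitute the real Diceware list.
import secrets

def diceware_passphrase(wordlist_path="wordlist.txt", num_words=6):
    with open(wordlist_path) as f:
        words = [line.strip() for line in f if line.strip()]
    return " ".join(secrets.choice(words) for _ in range(num_words))

print(diceware_passphrase())
# e.g. "crawl vivid mason recopy banjo stubble" -- long, memorable, and with
# six words drawn from a 7,776-word list, roughly 77 bits of entropy.
```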
Well, this is interesting. I naturally assumed that when the various researchers first discovered Heartbleed, they told the government about it. While I know that some people think this is crazy, it is fairly standard practice, especially for a bug as big and as problematic as Heartbleed. However, the National Journal has an article suggesting that Google deliberately chose not to tell the government about Heartbleed. No official reason is given, but assuming this is true, it wouldn't be difficult to understand why. Google employees (especially on the security side) still seem absolutely furious about the NSA hacking into Google's data centers, and various other privacy violations. When a National Journal reporter contacted Google about the issue, note the response:
Asked whether Google discussed Heartbleed with the government, a company spokeswoman said only that the "security of our users' information is a top priority" and that Google users do not need to change their passwords.
Here's the thing: if the NSA hadn't become so focused on hacking everyone, it wouldn't be in this position. The NSA's dual offense and defense role has poisoned the waters, such that no company can or should trust the government to do the responsible thing and help secure vulnerable systems any more. And for that, the government only has itself to blame.
It's not too surprising that one of the first questions many people have been asking about the Heartbleed vulnerability in OpenSSL is whether or not it was a backdoor placed there by intelligence agencies (or other malicious parties). And, even if that wasn't the case, a separate question is whether or not intelligence agencies found the bug earlier and have been exploiting it. So far, the evidence is inconclusive at best -- and part of the problem is that, in many cases, it would be impossible to go back and figure it out. The guy who introduced the flaw, Robin Seggelmann, seems rather embarrassed about the whole thing but insists it was an honest mistake:
Mr Seggelmann, of Munster in Germany, said the bug which introduced the flaw was "unfortunately" missed by him and a reviewer when it was introduced into the open source OpenSSL encryption protocol over two years ago.
"I was working on improving OpenSSL and submitted numerous bug fixes and added new features," he said.
"In one of the new features, unfortunately, I missed validating a variable containing a length."
After he submitted the code, a reviewer "apparently also didn’t notice the missing validation", Mr Seggelmann said, "so the error made its way from the development branch into the released version." Logs show that reviewer was Dr Stephen Henson.
Mr Seggelmann said the error he introduced was "quite trivial", but acknowledged that its impact was "severe".
Later in that same interview, he insists he has no association with intelligence agencies, and also notes that it is "entirely possible" that intelligence agencies had discovered the bug and had made use of it.
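To see what "missed validating a variable containing a length" means in practice, here's a deliberately simplified sketch. It is not OpenSSL's actual C code, just the shape of the mistake: the heartbeat request carries its own payload-length field, and the buggy handler trusts that number and copies that many bytes out of server memory, even when the real payload is far shorter:

```python
# Deliberately simplified illustration of the Heartbleed class of bug.
# Not OpenSSL's C code -- just the shape of the mistake: trusting a
# length field supplied by the peer.

# Pretend process memory: the 3-byte heartbeat payload ("hat") sits right
# next to unrelated secrets.
MEMORY = bytearray(b"hat" + b"...session cookie=abc123; private key...")
PAYLOAD_OFFSET = 0
ACTUAL_PAYLOAD_LEN = 3        # what the record really contained
CLAIMED_PAYLOAD_LEN = 0x4000  # what the attacker wrote in the length field

def heartbeat_buggy(claimed_len):
    # Echoes back `claimed_len` bytes -- no check against the real payload
    # size, so adjacent memory "bleeds" into the response.
    return bytes(MEMORY[PAYLOAD_OFFSET:PAYLOAD_OFFSET + claimed_len])

def heartbeat_fixed(claimed_len):
    # The validation that was missing: drop requests whose declared length
    # exceeds what was actually received.
    if claimed_len > ACTUAL_PAYLOAD_LEN:
        return b""
    return bytes(MEMORY[PAYLOAD_OFFSET:PAYLOAD_OFFSET + claimed_len])

print(heartbeat_buggy(CLAIMED_PAYLOAD_LEN))   # leaks the secrets after "hat"
print(heartbeat_fixed(CLAIMED_PAYLOAD_LEN))   # b''
```

A "quite trivial" error, exactly as Seggelmann says, with a severe impact once it shipped in a library used across the internet.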
Another oddity in all of this is that, even though the flaw itself was introduced two years ago, two separate individuals appear to have discovered it on the exact same day. Vocativ, which has a great behind-the-scenes story on Codenomicon's discovery, mentions the following in passing:
Unbeknownst to Chartier, a little-known security researcher at Google, Neel Mehta, had discovered and reported the OpenSSL bug on the same day. Considering the bug had actually existed since March 2012, the odds of the two research teams, working independently, finding and reporting the bug at the same time was highly surprising.
Highly surprising. But not necessarily indicative of anything. It could be a crazy coincidence. Kim Zetter, over at Wired, explores the "did the NSA know about Heartbleed" angle, and points out accurately that while the bug is catastrophic in many ways, what it's not good for is targeting specific accounts. The whole issue with Heartbleed is that it "bleeds" chunks of memory that are on the server. It's effectively a giant crapshoot as to what you get when you exploit it. Yes, it bleeds all sorts of things, including usernames, passwords, private keys, credit card numbers and the like -- but you never quite know what you'll get, which makes it potentially less useful for intelligence agencies. As that Wired article notes, at best, using the Heartbleed exploit would be "very inefficient" for the NSA.
But that doesn't mean there aren't reasons to be fairly concerned. Peter Eckersley, over at EFF, has tracked down at least one potentially scary example that may very well be someone exploiting Heartbleed back in November of last year. It's not definitive, but it is worth exploring further.
The second log seems much more troubling. We have spoken to Ars Technica's second source, Terrence Koeman, who reports finding some inbound packets, immediately following the setup and termination of a normal handshake, containing another Client Hello message followed by the TCP payload bytes 18 03 02 00 03 01 40 00 in ingress packet logs from November 2013. These bytes are a TLS Heartbeat with contradictory length fields, and are the same as those in the widely circulated proof-of-concept exploit.
Koeman's logs had been stored on magnetic tape in a vault. The source IP addresses for the attack were 193.104.110.12 and 193.104.110.20. Interestingly, those two IP addresses appear to be part of a larger botnet that has been systematically attempting to record most or all of the conversations on Freenode and a number of other IRC networks. This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers.
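Those eight logged bytes are easy to decode, and doing so shows why EFF calls the length fields contradictory. A quick sketch, walking the standard TLS record and heartbeat message layouts:

```python
# Decode the logged bytes: 18 03 02 00 03 01 40 00
# TLS record header: type (1 byte), version (2), record length (2),
# then the heartbeat message inside it: type (1), declared payload length (2).
import struct

packet = bytes.fromhex("18 03 02 00 03 01 40 00".replace(" ", ""))

record_type, ver_major, ver_minor, record_len = struct.unpack("!BBBH", packet[:5])
hb_type, hb_payload_len = struct.unpack("!BH", packet[5:8])

print(f"record type      : 0x{record_type:02x}  (0x18 = heartbeat)")
print(f"TLS version      : {ver_major}.{ver_minor}  (3.2 = TLS 1.1)")
print(f"record length    : {record_len} bytes")          # 3
print(f"heartbeat type   : {hb_type}  (1 = request)")
print(f"declared payload : {hb_payload_len} bytes")      # 16384

# The contradiction: the record says it carries 3 bytes, but the heartbeat
# inside it asks the server to echo back a 16,384-byte payload -- the same
# malformed request the widely circulated proof-of-concept exploit sends.
```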
EFF is asking people to try to replicate Koeman's findings, while also looking for any other possible evidence of Heartbleed exploits being used in the wild. As it stands now, there doesn't seem to be any conclusive evidence that it was used -- but that doesn't mean it wasn't being used. After all, it's been known that the NSA has a specific program designed to subvert SSL, so there's a decent chance that someone in the NSA could have discovered this bug earlier, and rather than doing its job and helping to protect the security of the internet, chose to use it to its own advantage first.
The USTR seems to have a worrying need to blame other countries. Alongside the infamous Special 301 Report, which puts a selection of nations on the naughty step because of their failure to bend to the will of the US copyright industries, there's the less well-known Section 1377 Review, which considers "Compliance with Telecommunications Trade Agreements." Here's some information about the latest one (pdf):
The Section 1377 Review ("Review") is based on public comments filed by interested parties and information developed from ongoing contact with industry, private sector, and foreign government representatives in various countries. This year USTR received four comments and two reply comments from the private sector, and one comment from a foreign government.
The ability to send, access and manage data remotely across borders is integral to global services, including converged and hybrid services such as cloud services. However, the tremendous increase in cross-border data flows has raised concerns on the part of many governments. Given that cross-border services trade is, at its essence, the exchange of data, unnecessary restrictions on data flows have the effect of creating barriers to trade in services.
That seems to be reflected in the following section of the USTR's review:
Recent proposals from countries within the European Union to create a Europe-only electronic network (dubbed a "Schengen cloud" by advocates) or to create national-only electronic networks could potentially lead to effective exclusion or discrimination against foreign service suppliers that are directly offering network services, or dependent on them.
In particular:
Deutsche Telekom AG (DTAG), Germany's biggest phone company, is publicly advocating for EU-wide statutory requirements that electronic transmissions between EU residents stay within the territory of the EU, in the name of stronger privacy protection. Specifically, DTAG has called for statutory requirements that all data generated within the EU not be unnecessarily routed outside of the EU; and has called for revocation of the U.S.-EU "Safe Harbor" Framework, which has provided a practical mechanism for both U.S companies and their business partners in Europe to export data to the United States, while adhering to EU privacy requirements.
Of course, Deutsche Telekom is not the only one calling for Safe Harbor to be revoked: the European Parliament's inquiry into the mass surveillance of EU citizens has also proposed that, along with a complete rejection of TAFTA/TTIP unless it respects the rights of Europeans. Strangely, the USTR doesn't mention that fact in its complaint, but goes on to say:
The United States and the EU share common interests in protecting their citizens' privacy, but the draconian approach proposed by DTAG and others appears to be a means of providing protectionist advantage to EU-based ICT suppliers.
You've got to love the idea that too much privacy protection is "draconian". The USTR continues to tiptoe around the real reason that not just Deutsche Telekom but even Germany's Chancellor, Angela Merkel, are both keen on the idea of an EU-only cloud:
Given the breadth of legitimate services that rely on geographically-dispersed data processing and storage, a requirement to route all traffic involving EU consumers within Europe would decrease efficiency and stifle innovation. For example, a supplier may transmit, store, and process its data outside the EU more efficiently, depending on the location of its data centers. An innovative supplier from outside of Europe may refrain from offering its services in the EU because it may find EU-based storage and processing requirements infeasible for nascent services launched from outside of Europe.
The USTR saves what it obviously sees as its killer punch for last:
Furthermore, any mandatory intra-EU routing may raise questions with respect to compliance with the EU's trade obligations with respect to Internet-enabled services. Accordingly, USTR will be carefully monitoring the development of any such proposals.
Got that, Europeans? If you dare to try to protect yourselves by creating a slightly more secure EU-only cloud in response to the NSA breaking into everything and anything, you may find yourself referred to the World Trade Organization or something....
It's interesting that the USTR brings up this issue -- doubtless a reflection of the huge direct losses that revelations about massive surveillance on Europeans and others are likely to cause the US computing industry. But trying to paint itself as the wronged party here is not going to endear the USTR to European politicians. At a time when Safe Harbor and even the TAFTA/TTIP negotiations are being called into question in the EU, such an aggressive and insulting stance seems a very stupid move.
Almost exactly a decade ago (man, time flies...), we first discussed the question of whether or not it should be against the law to get hacked. The FTC had gone after Tower Records (remember them?) for its weak data security practices. That resulted in a series of questions about where the liability should fall. Many people, quite reasonably, say that there should be incentives for companies to better manage data security and (especially) to protect their users. But, it's also true that sooner or later, if you're a target, you're going to get hacked. Ten years later and this is still an issue. The FTC went after Wyndham hotels for its egregiously bad data security (which made it easy for hackers to get hotel guests' information, including credit cards), but Wyndham fought back, saying the FTC had no authority over such matters, especially without having first issued specific rules.
However, a court has shot down that argument and will allow the FTC's case against Wyndham to move forward.
Again, Wyndham's security here was egregiously bad. It didn't encrypt payment data, and it used default logins and passwords for its systems. So there's an argument here that some kind of line can be drawn between purely negligent behavior, such as Wyndham's (lack of) data security, and companies that actually do follow some rather basic security practices and yet still fall prey to hacks. What makes things tricky is the pretty large gray area between those two extremes.