As pretty much anyone in computer security recognizes, any bit of "secure" computing is only secure for a limited period of time. Eventually, the security will be cracked. Yet, we still keep hearing about expectations for some new technology to solve all our security problems. For example, we've been hearing for years about the wonders of "trusted computing," which gets mocked pretty much every time some company tries to roll it out (which is why it's gone through five or six name changes over the years). The latest news is that Intel's implementation of a trusted computing offering, called Trusted Execution Technology, has security vulnerabilities that allow it to be circumvented. In other words, it's neither trustworthy nor secure. Of course, it's not widely used, either, so it's not a big deal. But, once again, it's a reminder that there is no magic bullet that solves all security problems.
You may recall earlier this month that a judge in New Jersey barred some researchers from releasing their report on the security vulnerabilities found in e-voting machines from Sequoia that were being used in the state. Sequoia had fought hard to stop the research from even being done in the first place, let alone released, even threatening the researchers with lawsuits. Now, one of those researchers, Andrew Appel, has released a long report detailing a ridiculous number of security problems with Sequoia's machines. To be honest, it's not clear from the blog post whether this is the same report that's being suppressed, but it's pretty damning. Because this is an important issue that doesn't necessarily get enough attention, I'm reposting Appel's executive summary of just how screwed up these machines are:
Executive Summary
I. The AVC Advantage 9.00 is easily "hacked" by the installation of fraudulent firmware. This is done by prying just one ROM chip from its socket and pushing a new one in, or by replacement of the Z80 processor chip. We have demonstrated that this "hack" takes just 7 minutes to perform.
The fraudulent firmware can steal votes during an election, just as its criminal designer programs it to do. The fraud cannot practically be detected. There is no paper audit trail on this machine; all electronic records of the votes are under control of the firmware, which can manipulate them all simultaneously.
II. Without even touching a single AVC Advantage, an attacker can install fraudulent firmware into many AVC Advantage machines by viral propagation through audio-ballot cartridges. The virus can steal the votes of blind voters, can cause AVC Advantages in targeted precincts to fail to operate; or can cause WinEDS software to tally votes inaccurately. (WinEDS is the program, sold by Sequoia, that each County's Board of Elections uses to add up votes from all the different precincts.)
III. Design flaws in the user interface of the AVC Advantage disenfranchise voters, or violate voter privacy, by causing votes not to be counted, and by allowing pollworkers to commit fraud.
IV. AVC Advantage Results Cartridges can be easily manipulated to change votes, after the polls are closed but before results from different precincts are cumulated together.
V. Sequoia's sloppy software practices can lead to error and insecurity. Wyle's Independent Testing Authority (ITA) reports are not rigorous, and are inadequate to detect security vulnerabilities. Programming errors that slip through these processes can miscount votes and permit fraud.
VI. Anomalies noticed by County Clerks in the New Jersey 2008 Presidential Primary were caused by two different programming errors on the part of Sequoia, and had the effect of disenfranchising voters.
VII. The AVC Advantage has been produced in many versions. The fact that one version may have been examined for certification does not give grounds for confidence in the security and accuracy of a different version. New Jersey should not use any version of the AVC Advantage that it has not actually examined with the assistance of skilled computer-security experts.
VIII. The AVC Advantage is too insecure to use in New Jersey. New Jersey should immediately implement the 2005 law passed by the Legislature, requiring an individual voter-verified record of each vote cast, by adopting precinct-count optical-scan voting equipment.
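To make point I a bit more concrete, here's a minimal, purely illustrative sketch in Python -- made-up names, not Appel's actual demonstration and not Sequoia's code -- of why firmware that controls every electronic record of a vote can cheat without leaving anything to cross-check:

# Hypothetical illustration: one compromised code path writes the running totals,
# the per-ballot records, and the "audit" log, so all three lie consistently.
TARGET, BENEFICIARY = "Candidate A", "Candidate B"

counters = {"Candidate A": 0, "Candidate B": 0}   # running totals, firmware-controlled
ballot_images = []                                # per-ballot electronic records
audit_log = []                                    # "audit" entries, also firmware-written

def record_vote(selection, steal_every_n=10):
    # Honest-looking entry point; the fraud lives below it.
    stored = selection
    # Shift every Nth vote for the target to the beneficiary -- and write the
    # same false value into the counters, the ballot image, and the audit log.
    if selection == TARGET and (len(ballot_images) + 1) % steal_every_n == 0:
        stored = BENEFICIARY
    counters[stored] += 1
    ballot_images.append(stored)
    audit_log.append("ballot %d: %s" % (len(ballot_images), stored))

for _ in range(100):                  # 100 voters, every one choosing Candidate A
    record_vote("Candidate A")

print(counters)                       # {'Candidate A': 90, 'Candidate B': 10}
print(ballot_images.count("Candidate B"))   # 10 -- the ballot records "confirm" the false totals

Because the counters, the ballot records, and the audit log are all written by the same compromised code, they agree with each other perfectly -- which is exactly why an independent paper record the firmware can't touch matters.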
You know, the one thing that computers are supposed to be good at is counting things accurately. So why is it so hard to do so when it comes to counting votes? We recently wrote about the case in Washington DC's primaries where election officials were struggling to figure out the source of an awful lot of votes for a non-existent write-in candidate. Sequoia, the makers of the e-voting machines in question, were quick to deny any and all responsibility with a hilarious "thou dost protest too much" statement: "There's absolutely no problem with the machines in the polling places. No. No."
Either way, it appears that officials in DC still can't properly add up the votes, and are noting that 13 separate races all show the exact same number of overvotes: 1,542, though no one can explain why. Sequoia continues to stand by its original statement that the problem must be one of human error -- though it fails to explain how simple human error would create 1,542 extra votes in 13 entirely separate races -- and why it didn't design a system that would prevent "human error" from creating such votes in the first place.
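For what it's worth, preventing overvotes at the point of capture isn't rocket science. Here's a hedged little sketch -- hypothetical names, not Sequoia's actual software -- of the kind of validation that would make it impossible for "human error" to record more selections in a race than the race allows:

# Hypothetical overvote check: reject an impossible ballot before it is ever counted.
from dataclasses import dataclass

@dataclass
class Race:
    name: str
    max_selections: int

class OvervoteError(ValueError):
    pass

def validate_ballot(ballot, races):
    # ballot maps race name -> list of selected candidates
    for race_name, selections in ballot.items():
        race = races[race_name]
        if len(selections) > race.max_selections:
            raise OvervoteError("%s: %d selections, max is %d"
                                % (race_name, len(selections), race.max_selections))

races = {"Mayor": Race("Mayor", 1)}
try:
    validate_ballot({"Mayor": ["Alice", "Bob"]}, races)   # an overvote
except OvervoteError as err:
    print("rejected:", err)

A system that refuses to record an overvote in the first place can't mysteriously end up with 1,542 of them in 13 different races.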
Last week, we wrote about yet another problem with Sequoia e-voting equipment where the company was vehemently denying the problem was with the machines, even saying: "There's absolutely no problem with the machines in the polling places. No. No." Of course, this came right after a report revealing how easy it was to hack their machines, as well as numerous other problems with Sequoia machines. Yet the company consistently employs the same exact strategy: it couldn't possibly be the fault of the machines.
You may recall the story earlier this month about the Sequoia optical scanning machines in Palm Beach County that supposedly couldn't reach the same vote tally if different counting machines were used. At least that was the original claim -- but it was later changed when election officials admitted they had simply misplaced some ballots. Well, the latest report claims that the recount is now not showing lost ballots -- it's showing too many ballots. Fantastic. Election officials think they've traced the problem to the fact that some votes on Sequoia's e-voting machine cartridges weren't properly transferred, which kicks off Sequoia's standard PR response:
The company's representative, Phil Foster, says "the cartridge is fine. Why it didn't read I do not know," suggesting another human error made on election night.
You know, when you keep saying that, and the problems keep occurring, at some point, people are going to stop believing you. Even if the problem really is human error every one of these times, people might begin to wonder why you don't design your systems to avoid such human errors.
It seems like every few months, well-respected security researchers come out with yet another report about just how insecure various e-voting machines are. The amazing thing is how hard the various e-voting companies have fought against allowing these researchers to look at their machines, always insisting that the federal certification process (the one that was later shown to have done a poor job of testing the machines) was fine. Of course, even the Government Accountability Office has admitted that the federal certification process sucks.
One of the complaints that the e-voting firms have had about having independent security researchers testing the machines is that those tests are not in real world conditions. In fact, we had a commenter from one of the e-voting companies who insisted that these independent tests were useless because:
The point people often miss, which is left off of the conspiracy blogs, is that all of these 'hacking' attempts that are requested are made to do so in some sort of vacuum. In some obscure room where a gang of hackers get together and try to penetrate the system with unlimited resources. In any election, paper or fully electronic, there are procedural and security measures taken that complement and supplement the security features of the system itself. This is in addition to internal and system-independent, pre- and post-election audit features.
That's really rather meaningless, because if it were true, then that info would also come out in those independent research reports. However, even that comment turns out to be untrue. As a few folks have submitted, some security researchers at UCSB have demonstrated not just how insecure Sequoia's e-voting systems are, but how easy it is to hack an entire election, in a pair of videos that you can watch right here (if you're in the RSS feed, click through to see them):
The hack the researchers demonstrate demolishes that comment from the insider. All it required was for those wishing to change the results of the election to drop a USB key into the pile of USB keys used to set the system up. All of the security measures that the insider talks about are then bypassed with ease. The video shows the attack getting by the procedural security measures, as well as the pre- and post-election audit features.
The second video also shows why paper ballots are hardly a solution: the malware included in the software can be set to void legitimate votes and replace them with fake ones, in a variety of different scenarios, almost all of which are likely to go undetected. This is a hugely damning report -- and it targets a company that has fought so hard against having its machines tested by independent security experts. Some may say that this shows exactly why the company didn't want them tested, but it should concern anyone who believes in free and fair democratic elections that we're using such insecure voting machines.
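For contrast, here's a minimal sketch -- under my own assumptions, with hypothetical file contents and a shared election-authority key standing in for proper code signing -- of the kind of check whose absence the dropped-in USB key exploits: refuse to load any setup image that the election authority didn't authenticate.

# Hypothetical check: only boot from setup media whose authentication tag verifies.
# A real system would want public-key signatures and secure key storage; hmac is
# used here only because it's in the Python standard library.
import hmac, hashlib

AUTHORITY_KEY = b"held-only-by-the-election-authority"    # hypothetical shared key

def sign_image(image):
    return hmac.new(AUTHORITY_KEY, image, hashlib.sha256).digest()

def load_setup_image(image, tag):
    # Refuse to load anything that wasn't tagged by the election authority.
    if not hmac.compare_digest(sign_image(image), tag):
        raise RuntimeError("setup cartridge failed verification -- not loading")
    return image

legit = b"ballot definitions + firmware"
tag = sign_image(legit)
load_setup_image(legit, tag)                    # accepted

tampered = legit + b" + vote-stealing payload"
try:
    load_setup_image(tampered, tag)             # the dropped-in USB key scenario
except RuntimeError as err:
    print(err)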
It's amazing to watch just how sensitive some companies are concerning the rather well-known security vulnerabilities associated with RFID tags and smart cards. Time and time again, we've seen companies try to suppress such research from being published -- and every single time, those efforts to suppress the publication of the vulnerabilities backfire, often badly.
But that never seems to stop companies from flexing their legal muscles. Case in point: MythBusters host Adam Savage recently described what happened when the show wanted to do a segment on just how hackable RFID is:
Texas Instruments comes on along with chief legal counsel for American Express, Visa, Discover, and everybody else... They were way, way outgunned and they absolutely made it really clear to Discovery that they were not going to air this episode talking about how hackable this stuff was, and Discovery backed way down being a large corporation that depends upon the revenue of the advertisers. Now it's on Discovery's radar and they won't let us go near it.
Check out the video of him saying this (while admitting he's probably not supposed to talk about it) here:
Perhaps it's an exaggeration by Savage, but do the credit card companies really think that security through obscurity (with a healthy dose of legal threats) is the best way to protect their customers?
Consider me to be in a state of shock. For nearly half a decade, Diebold has responded in exactly the same way to every single report of a problem or security vulnerability in its e-voting machines: attacking those who pointed out the problem and claiming it wasn't really a problem at all. This has happened so many times that I'm not even sure how to react now that the company (renamed Premier to get away from the stigma of the Diebold name) has finally admitted that its machines have a flaw that drops votes. Oops. It's warning the 34 states that use the machines about the problem, which was highlighted in the lawsuit Ohio filed against Premier/Diebold. Not only that, but it's admitting the flaw has been in the software for the past decade.
It should also make us question Premier/Diebold's longstanding claim that independent outsiders should not be allowed to inspect its machines for problems. Of course, Diebold execs are already downplaying all of this, claiming that they were "confident" that this hadn't actually impacted any elections, though they offer no proof of that. The company's president admits he's "distressed" that they were wrong in their previous analysis, but he fails to explain why the company is so against letting outsiders inspect the machines to catch such flaws. In the meantime, the company insists that the problem will be patched in time for the November election, and I'm sure we're all confident that there won't be any other problems with their machines, right?
While it took about a week and a half, a judge has now lifted the gag order that had prevented some MIT students from sharing a presentation about vulnerabilities in the Boston subway system. The judge refused the MBTA's request to bar the students from talking about it for five months (the time the MBTA insisted it needed to fix the system). This is definitely a win for free speech, though I'm sure the debate over how and when to disclose security vulnerabilities will continue for a long, long time.
We recently wrote about how NXP Semiconductor (formerly Philips Semiconductor) was suing to try to stop the publication of some research that showed some vulnerabilities in its chips used in smart cards around the world. The vulnerability itself was already widely known (though NXP denied it for a while). The good news is that a judge has denied the request, and the research will be published as originally planned. The bad news is that NXP wasted quite a lot of time denying there was a problem instead of fixing the problem -- and with this latest misguided legal stunt, made sure a lot more people knew about it.
Rich Kulawiec writes in to point out that security expert Dan Geer is suggesting that merchants violate the security of customers they deem to be security risks. His argument is, basically, that there are two types of users out there: those who respond "yes" to any request -- and therefore are likely to be infected by multiple types of malware doing all sorts of bad things -- and those who respond "no" to any request, who are more likely to be safe. Thus, Geer says, merchants should ask users if they want to connect over an "extra special secure connection," and if they respond "yes," you assume that they respond yes to everything and therefore are probably unsafe. To deal with those people, Geer says, you should effectively hack their computer. It won't be hard, since they're clearly ignorant and open to vulnerabilities -- so you just install a rootkit and "0wn" their machine for the duration of the transaction.
As Kulawiec notes in submitting this: "Maybe he's just kidding, and the sarcasm went right over my (caffeine-starved) brain. I certainly hope so, because otherwise there are so many things wrong with this that I'm struggling to decide which to list first." Indeed. I'm not sure he's kidding either, but the unintended consequences of violating the security of someone's computer, just because you assume it's already been compromised, are likely to make things a lot worse. This seems like a suggestion that could have the same sort of negative unintended consequences as the suggestion others have made about creating "good trojans" that go around automatically closing security holes and stopping malware by using the same techniques the malware employs. Both are based on the idea that people are too stupid to protect themselves, and that somehow "white hat" hackers can fix things for them. Now, obviously, plenty of people do get infected -- but using that as an excuse to infect them back, even for noble purposes, is only going to create more problems in the long run. New vulnerabilities will be created, and you're trusting these "good" hackers to do no harm on top of what's already been done, which won't always be the case. No, security will never be perfect, and some people will always be more vulnerable -- but that shouldn't give anyone the right to violate their security, even for a good reason.