After all the concerns raised about CISPA and other cybersecurity legislation, Senators Lieberman and Collins introduced a heavily revised version of their cybersecurity bill. The entire thing is an insane 211 pages, but as a first pass, the ACLU (which has been highly critical of nearly all previous proposals) sounds cautiously optimistic that the new bill contains important privacy protections. From the ACLU's initial analysis, this version of the bill will:
Ensure that companies who share cybersecurity information with the government give it directly to civilian agencies, and not to military agencies like the National Security Agency. The single most important limitation on domestic cybersecurity programs is that they are civilian-run and do not turn the military loose on Americans and the internet.
Ensure that information shared under the program be “reasonably necessary” to describe a cybersecurity threat.
Restrict the government’s use of information it receives under the cyber info sharing authority so that it can be used only for actual cybersecurity purposes and to prosecute cyber crimes, protect people from imminent threat of death or physical harm, or protect children from serious threats.
Require annual reports from the Justice Department, Homeland Security, Defense and Intelligence Community Inspectors General that describe what information is received, who gets it, and what is done with it.
Allow individuals to sue the government if it intentionally or willfully violates the law.
The ACLU specifically calls out Senators Durbin and Franken for helping to get these changes included in the bill. I agree that all of these are important and useful changes compared to what had been in previous proposals. Focusing on civilian agencies rather than the NSA is a big one, since much of the fight over competing visions of the bill was really a turf war over who got to control the information (and the budget): Homeland Security or the NSA.
The bill also removes some of the regulatory requirements for organizations that run "critical infrastructure," in favor of a more voluntary approach to setting up best practices, which may make the bill more palatable for some.
That said, we're still waiting for an actual justification of cybersecurity bills that doesn't include exaggerations of the threats that are out there, or Hollywood-scripted stories about planes falling from the skies that have little basis in reality. Moreover, though the claim has always been that these bills are important because the government is being legally prevented from sharing and receiving vital information, nobody has actually pointed to the specific legal obstacles that exist -- and the government already has information sharing programs that don't seem to require any new legislation. Also, any bill that's 211 pages long is something to be concerned about, as the "hidden" easter eggs tucked inside could be numerous and serious. But, compared to previous cybersecurity bills, this certainly sounds like a big step in the right direction.
We keep hearing US government officials tell us fanciful stories about why we need cybersecurity legislation that paves the way for the government to get access to private information, but the arguments never make much sense. There are vague claims of threats that really amount to garden variety hacking, and then there are the completely made up threats that are pulled right from Hollywood scripts -- like the claims that an online attack will lead to planes colliding.
A new survey suggests that the public just isn't buying it. 63% of those polled are worried about the impact on privacy and civil liberties of greater information sharing with the government. So for all the talk about how there's "bipartisan" support for doing something here, it's not clear that there's real public support for this kind of thing.
The American Enterprise Institute (AEI) recently held an event about cybersecurity and cybersecurity legislation. The keynote speech was from NSA boss General Keith Alexander. He of course talked about why he supports cybersecurity legislation, such as CISPA and other proposals that would make it easier for the NSA to access private content from service providers -- much of which, reports claim, they're already capturing and storing. Alexander has claimed that the NSA doesn't have "the ability" to spy on Americans' emails and such, and he reiterates that claim during the Q&A in this session, insisting that the Utah data center doesn't hold data on Americans' emails (and making a joke about just how many emails that would be to read). That's nice for him to say, but plenty of people with knowledge of the situation claim the opposite.
In a motion filed today, the three former intelligence analysts confirm that the NSA has, or is in the process of obtaining, the capability to seize and store most electronic communications passing through its U.S. intercept centers, such as the "secret room" at the AT&T facility in San Francisco first disclosed by retired AT&T technician Mark Klein in early 2006.
So it's interesting to pay attention to what Alexander has to say in pushing for cybersecurity legislation. You can watch the full video below, if you'd like:
Much of what he talks about involves basic online malware and hack attacks. These are definitely issues -- but are they issues that we need the military (which the NSA is a part of) to step in on? His most quotable line is that these attacks represent the "greatest transfer of wealth in history." That is a pretty broad statement, and there's almost no evidence to support it. He points to studies from Symantec and McAfee on the "costs" of dealing with security issues -- but remember, those are two of the biggest sellers of security software, and they have every incentive in the world to inflate the so-called "costs." Also, seriously? The "greatest transfer of wealth in history"? Has he paid absolutely no attention to what's happened on Wall Street and in the financial world over the past decade? Does anyone honestly believe that the amount of money "transferred" due to hack attacks is greater than the amount of money transferred due to dodgy financial deals and the mortgage/CDO mess? That doesn't pass the laugh test.
He does insist that worse attacks are coming, but provides no basis for that (or, again, why the NSA needs your info). In fact, according to a much more believable study, the real risks are not outside threats and hackers, but internal security screwups and disgruntled insiders. None of that requires NSA help. At all.
But it sure makes for a convenient bogeyman to get new laws that take away privacy rights.
Alexander, recognizing the civil liberties audience he was talking to, admits that the NSA neither needs nor wants most personal info, such as emails, and repeatedly states that they need to protect civil liberties (though, in the section quoted below, you can also interpret his words to actually mean they don't care about civil liberties -- but that's almost certainly a misstatement on his part):
One of the things that we have to have then [in cybersecurity legislation], is if the critical infrastructure community is being attacked by something, we need them to tell us... at network speed. It doesn't require the government to read their mail -- or your mail -- to do that. It requires them -- the internet service provider or that company -- to tell us that that type of event is going on at this time. And it has to be at network speed if you're going to stop it.
It's like a missile, coming in to the United States.... there are two things you can do. We can take the "snail mail" approach and say "I saw a missile going overhead, looks like it's headed your way" and put a letter in the mail and say, "how'd that turn out?" Now, cyber is at the speed of light. I'm just saying that perhaps we ought to go a little faster. We probably don't want to use snail mail. Maybe we could do this in real time. And come up with a construct that you and the American people know that we're not looking at civil liberties and privacy, but we're actually trying to figure out when the nation is under attack and what we need to do about it.
Nice thing about cyber is that everything you do in cyber, you can audit. With 100% reliability. Seems to me there's a great approach there.
Now all that's interesting, because if that's true, then why is he supporting legislation that would override any privacy rules that protect such info? If he really only needs limited information sharing, then why isn't he in favor of more limited legislation that includes specific privacy protections for that kind of information? He goes back to insisting they don't care about this info later on in the talk, but never explains why he doesn't support legislation that continues to protect the privacy of such things:
The key thing in information sharing that gets, I think, misunderstood, is that when we talk about information sharing, we're not talking about taking our personal emails and giving those to the government.
So make that explicit. Rather than supporting cybersecurity legislation that wipes out all privacy protections, why not highlight what kind of information sharing is blocked right now and why it's blocked? Is it because of ECPA regulations? Something else? What's the specific problem? Talking about bogeymen hackers and malicious actors makes for a good Hollywood script, but there's little evidence to support the idea that it's a real threat here -- and in response, Alexander is asking us all to basically wipe out such privacy protections... because he insists that the NSA doesn't want that kind of info. And, oh yeah, this comes at the same time that three separate whistleblowers -- former NSA employees -- claim that the NSA is getting exactly that info already.
So, this speech is difficult to square with that reality. If he really believes what he's saying, then why not (1) clearly identify the current regulatory hurdles to information sharing, (2) support legislation that merely amends those regulations and is limited to just those regulations, and (3) support much broader privacy protections for the personal info that he insists isn't needed? It seems like a pretty straightforward question... though one I doubt we'll get an answer to. Ever. At least not before cybersecurity legislation gets passed.
So-called "cybersecurity" and "intellectual property" are two very different issues, but it seems that politicians are realizing that they get further by screaming about "cybersecurity threats" than about "intellectual property infringement." The latest proposed appropriations bill for the State Department includes a role for a "coordinator for cyber issues" -- which is an awful title. However, snuck into the job description is the fact that this person will have to create a "naughty" list of countries that are "cybersecurity concerns." Okay, fair enough. Except the bill goes on to define what constitutes a cybersecurity concern, noting that if this person determines that there has been a
"... pattern of incidents of cybercrime against the United States Government or United States persons, or that disrupt United States electronic commerce or otherwise negatively impact the trade or intellectual property interests of the United States....
This seems to suggest that the State Department can now shame entire countries by labeling them a "cybersecurity concern" when the reality is simply that their copyright enforcement efforts are more lax. With such a broad definition, it seems like just about any country could be blamed if it doesn't magically somehow stop the "negative impact" of file sharing.
We've certainly written an awful lot about the ridiculousness of the concept of "cyber war." Even with things like Stuxnet and Flame, it seems silly to describe what amounts to either electronic espionage or a little hacking as "war." But perhaps we were looking at it the wrong way. In a Foreign Policy article, John Arquilla argues that perhaps we should be embracing this kind of "cool war," as it can be effective at stopping threats (even distributed ones like terrorist operations, rather than just centralized ones like governments) while causing minimal bloodshed:
On balance, it seems that cyberwar capabilities have real potential to deal with some of the world's more pernicious problems, from crime and terrorism to nuclear proliferation. In stark contrast to pitched battles that would regularly claim thousands of young soldiers' lives during Robert E. Lee's time, the very nature of conflict may come to be reshaped along more humane lines of operations. War, in this sense, might be "made better" -- think disruption rather than destruction. More decisive, but at the same time less lethal.
And, indeed, if we believe that reports of "cyber attacks" being used to make planes fall from the sky are greatly exaggerated, perhaps we should welcome a "war" that mainly involves hackers vs. hackers trying to disrupt each other's "real" warfare capabilities. But, of course, there are plenty of other issues that come up here as well -- such as how secret hacking programs can be abused. If it gets governments to stop physical battles that lead to real lives lost, that does seem like an improvement, though I'm not sure anyone should think that countries continuing to attack each other through computers is ever a "good" situation overall.
With all the talk lately about cybersecurity legislation, we still have yet to see anyone lay out an actual scenario for a real "cyber security" threat (or, at least, one that goes beyond your everyday malware or corporate espionage, which are covered by existing laws just fine). However, we have heard lots of fear mongering about planes falling from skies and electric grids being shut down -- despite no evidence that there is any such threat (and, if there is, the concern should be focused on why those things are hooked up to the internet in the first place). And, of course, in all this fear mongering, there's one phrase that stands out: "Digital Pearl Harbor," as in, "we must protect ourselves before there's a digital Pearl Harbor."
David Perera, over at FierceGovernmentIT, has done the dirty work of tracing the history of the phrase, showing that these Chicken Littles have been warning about the "imminent" digital Pearl Harbor for many years now.
The earliest public reference appears to be in a June 26, 1996 Daily News article in which CIA Director John Deutch warned that hackers "could launch 'electronic Pearl Harbor' cyber attacks on vital U.S. information systems."
The next month, then-Deputy Attorney General Jamie Gorelick told the Senate Governmental Affairs permanent subcommittee on investigations that "we will have a cyber-equivalent of Pearl Harbor at some point, and we do not want to wait for that wake-up call," according to the Armed Forces Newswire Service.
Thereafter the term appears to have gone into a hiatus, apart from some offhand or derivative references to the original sources cited above. But, not to worry, Sen. Sam Nunn (D-Ga.) used it again in the spring of 1998, being quoted in a March 19 South Bend Tribune article warning that "We have an opportunity to act now before there is a cyber-Pearl Harbor...We must not wait for either the crisis or for the perfect solution to get started."
There's a lot more where that came from, so go hit the link, read it, and be amazed.
Of course, as Perera notes, just because every single one of those fearmongering predictions has turned out to be false so far doesn't mean that the "Digital Pearl Harbor" isn't right around the corner. But, still, it at least raises significant questions about how urgent it really is to rush through a bill without an explicit explanation of the true threat. Of course, that won't really matter, as everyone's basically playing a giant game of musical chairs, trying to be ready to claim they "called it" should these horrible things ever actually happen.
First off, there's the fact that, for all the vague talk of "threats," the only real evidence of "cyberattacks" to date seems to point back to the US. So, if we're worried about attacks directed back at us, perhaps we shouldn't have kicked off the effort by showing the rest of the world how it's done. And, no, Senator Feinstein, the problem isn't the leak, but the action. As Harper points out:
The likelihood of attacks having extraordinary consequences is low. This talk of “cyberwar” and “cyberterror” is the ugly poetry of budget-building in Washington, D.C. But watch out for U.S. cyberbellicosity coming home to roost. The threat environment is developing in response to U.S. aggression.
This parallels the United States’ use of nuclear weapons, which made “the bomb” (Dmitri) an essential tool of world power. Rightly or wrongly, the United States’ use of the bomb spurred the nuclear arms race and triggered nuclear proliferation challenges that continue today. (To repeat: Cyberattacks can have nothing like the consequence of nuclear weapons.)
Of course, the "urgency" that we keep hearing about is almost certainly political. Should some attack actually happen, no politician wants to give his or her opponents the opportunity to point to their failure to pass "do something!" cybersecurity legislation during a campaign. As Harper points out, the real fear from politicians isn't a cyberattack, it's a political attack:
Senator Reid has gone hook, line, and sinker for the “cyber-9/11” idea, of course. Like all politicians, his primary job is not to set appropriate cybersecurity policies but to re-elect himself and members of his party. The tiniest risk of a cyberattack making headlines to use against his party justifies expending taxpayer dollars, privacy, and digital liberties. This is not to prevent “cyber” attack. It is to prevent political attack.
He then goes on to highlight a bunch of former government officials who sent a letter to Senate leaders urging them to pass cybersecurity legislation "as soon as possible" since it's "critically necessary to protect our national and economic security." Of course, what the signatories of that letter really mean is that they want to protect their own "economic security." Every one of them has moved to the private sector and is in a position to profit greatly from a freakout over cybersecurity...
And yes, in answer to the URL I mentioned at the beginning, using cyber does, in fact, make you look like an idiot in most cases. But for the amount of profit and spying power at stake? It doesn't seem like many in DC care that much.
With a lot of new attention being paid to the Flame malware that was datamining computers around the Middle East, there have been plenty of comparisons to Stuxnet, the famous bit of malware that was targeted at mucking up Iran's nuclear program. So it's very interesting timing to see the NY Times reveal many of the details behind Stuxnet, including confirming that it was a program driven by the US, with a lot of help from the Israelis. Many, many, many people suspected that already, but it certainly appears that the NY Times has numerous detailed sources that support this claim.
Perhaps even more interesting, however, is the fact that Stuxnet (which apparently first got into Iran's nuclear facilities via workers using USB keys when they shouldn't have) was never supposed to get out into the wild. It was supposed to just sit in the computers at the plant, confusing the hell out of the Iranians. But, obviously, that didn't happen. Having the malware get out into the wild probably killed off the effort much earlier than expected, since it basically explained to the Iranians what was happening.
It's also noteworthy that a source in the article claims that Stuxnet was the first example of a computer attack being used to destroy physical equipment (it made centrifuges spin irregularly in ways that could cause them to break). Some have therefore used Stuxnet as "proof" of the cybersecurity threats out there and the misnamed "cyberwar." I'm not sure that's true. Stuxnet still appears to be a unique case, with a very, very specific target that had some significant vulnerabilities. We hear lots of worries about cybersecurity impacting physical infrastructure -- and I'm sure that those who wish to do harm would love to bring down power grids and airplanes through some form of a cyber attack. But I'm not convinced that the success of Stuxnet is so easily replicable in other such areas. And I don't see how that automatically justifies effectively tossing out all privacy protections.
While there have been some claims that Google has supported CISPA (though the company does not appear to have taken an official position), at least one top person at Google is not at all pleased with the bill. Vint Cerf apparently blasted Congress over CISPA, noting that the bill went way too far.
Cerf, who is an "Internet evangelist" (Vice President) at Google, had some particularly harsh words for the Cyber Intelligence Sharing and Protection Act (CISPA), saying it wasn't specific enough about how information shared between government and corporations would be used.
It's good to see more and more internet experts speaking out about why the bill is so bad. It's too bad that Google hasn't taken an official position on CISPA yet.
While we know that at least Senator Ron Wyden understands why CISPA and related cybersecurity bills are bad, there are still 99 other Senators who don't seem quite so clear on the matter. And they're about to vote on such bills very, very soon. A bunch of groups have set up a site called Privacy is Awesome to help you contact your Senator today to let them know that you do think privacy is awesome, and that you won't accept them voting to take away your privacy via overly expansive cybersecurity bills like CISPA or the other bills the Senate is considering.