The BBC has a story about how the operators of one of the larger botnets that was recently shut down showed up at the offices of a security researcher who helped bring them down... asking for a job. The article highlights how the researcher, Luis Corrons, had basically figured out who was running the botnet after one of the operators made a mistake and revealed his home computer... which actually was not far from where Corrons worked. The botnet was shut down at the end of last year, but a few months later, Corrons had an interesting experience:
In late March Mr Corrons was preparing for a meeting at Panda's Bilbao lab with a journalist and took a moment to dodge downstairs to get a drink. On the way down he passed two young men coming up.
One asked if he was Luis Corrons. He said yes while wondering who they were.
They introduced themselves, which left him no wiser. Then one of them said: "I'm Ostiator and this is Netkairo."
"It was then I realised these guys were the ones that were arrested in the Mariposa case," he told the BBC. "I thought they wanted to teach me a lesson."
Instead, they asked him for a job, saying that the shutdown of the botnet had "robbed them of their livelihood." Apparently, the two guys started following Corrons on Twitter, sending messages his way and commenting on his blog, before asking for work again. Panda finally brought one of them in for an interview, noting that it wouldn't hire anyone involved in criminal activity. The guy responded that he hadn't been charged with anything. However, Corrons also quickly realized that the guy barely had any technical skills -- pointing out that he didn't write the bot, he just ran it:
"He got really annoyed at that moment, when we told him he was not good enough," said Mr Corrons. Subsequent discussion revealed just how poor their skills were.
"They were given the botnet with all the stuff they needed," said Mr Corrons. "Using it was like using any other program."
So, for the script kiddies out there: perhaps, before asking the security researchers who brought your botnet down for a job, you should do a bit of work to make sure you actually have the skills.
John Gruber recently highlighted one of the more annoying things I've seen on multiple news websites lately: attempts to muck with basic copy & paste features. I've noticed it on Wired.com and SFGate.com, among others. Gruber points out that it's also happening on TechCrunch and The New Yorker's website. From a user's standpoint, what happens is that when you copy some text and then paste it somewhere else, some javascript shenanigans append a bit of extra text that you did not copy, usually something like "Read more:" with a URL linking back to the original story.
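For the curious, here's a minimal sketch of how this sort of clipboard tampering generally works in a browser. To be clear, this is illustrative, not Tynt's actual code -- the tracker URL and the tracking fragment are made up:

```typescript
// A rough illustration of clipboard tampering -- not Tynt's actual code.
// The tracker URL and tracking fragment below are made up for the example.
document.addEventListener("copy", (event: ClipboardEvent) => {
  const selection = document.getSelection();
  if (!selection || !event.clipboardData) return;

  const copied = selection.toString();
  const attribution = `\n\nRead more: ${location.href}#abc123`;

  // Overwrite the clipboard with the selection plus the appended text,
  // then cancel the browser's default copy so the modified version sticks.
  event.clipboardData.setData("text/plain", copied + attribution);
  event.preventDefault();

  // This is also the natural point for a tracker to phone the copied
  // text home to its own server.
  navigator.sendBeacon("https://tracker.example/copied", copied);
});
```

Because the whole thing hinges on a script intercepting the copy event, blocking that script -- as noted below -- restores normal copy behavior.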
As someone who does a fair bit of copying and pasting in writing this blog, I agree with Gruber that this is a nuisance. It's not a huge problem, but it is annoying. If I'm copying and pasting from your website, I know what your website is, and I'm already planning to link back to it. Adding that superfluous text just forces my computer to do something I did not ask it to do.
Gruber tracked down the source of this annoyance: a company called Tynt, which not only enables this functionality for a bunch of sites that probably don't realize how annoying it is, but also tracks what you copy by sending that info back to its servers. That's a bit creepy, frankly. Of course, since it's javascript, it's easy enough to block for those who know how to do that sort of thing. Still, Gruber's analysis of this makes sense:
It's a bunch of user-hostile SEO bullshit.
Everyone knows how copy and paste works. You select text. You copy. When you paste, what you get is exactly what you selected. The core product of the "copy/paste company" is a service that breaks copy and paste.
The pitch from Tynt to publishers is that their clipboard jiggery-pokery allows publishers to track where text copied from their website is being used, on the assumption that whoever is pasting the text is leaving the Tynt-inserted attribution URL, with its gibberish-looking tracking ID. This is, I believe, a dubious assumption. Who, when they paste such text and find this "Read more:" attribution line appended, doesn't just delete it (and wonder how it got there)?
A reminder for folks that tomorrow, Wednesday May 26th, at 9am PT/noon ET, we'll be holding the next webinar in our IT Innovation series, What IT Needs To Know About The Law. I've been working on the content for this webinar with Dave Navetta and Larry Downes, and it's shaping up great, covering many of the issues we talk about here on a regular basis. In fact, there's so much good stuff that we're down to figuring out what we'll have to leave out -- perhaps to revisit at a future date. Either way, it should be chock full of info that will be useful for any IT person, so don't miss it. Sign up now, and stop by tomorrow with questions ready. As with our past webinars, this one will be interactive, and we'll be taking questions from attendees throughout. Please join us.
You may recall that, a couple weeks ago, we had a webinar on cloud security with Jake Kaldenbaugh of CloudStrategies and Sam Quigley of Emerose, which was well attended and well reviewed (thank you!). The feedback on it was tremendous. If you happened to miss it, we've now made it available to watch, and we've also put up the slide deck for download. And... bonus time: the deck contains a series of extra slides detailing some of the state of the cloud security market today, as well as some details about Amazon's cloud security initiatives. So even if you caught the original presentation, there are probably some useful additional nuggets in there.
And, in the meantime, don't forget to sign up for our next Webinar, coming up this Wednesday at 9am PT/noon ET on What IT Needs To Know About The Law, with Dave Navetta and Larry Downes. The signups on this one have been through the roof and we've been working hard putting it together. The conversation should be very, very interesting, so definitely come ready with questions as well.
It always happens: a technology used for spying on people opens up security vulnerabilities. Sony's "rootkit" DRM had huge security vulnerabilities that let people do bad things to your computer. And now comes the news that the LANrev system used by the Lower Merion School District to secretly photograph students at home also just happened to have a big security vulnerability that, in theory, made it possible for others to spy on those students without their knowledge as well:
The LANrev program contains a vulnerability that would allow someone using the same network as one of the students to install malware on the laptop that could remotely control the computer. An intruder would be able to steal data from the computer or control the laptop webcam to snap surreptitious pictures....
The vulnerability in the LANrev system lies in the symmetric-key encryption it uses for authentication between the client and the server, and isn’t related to the optional Theft Track feature. Therefore, even computers that are not using the theft feature are potentially vulnerable.
The authentication key is stored in the client-side and server software and is fairly easy to decipher, says Frank Heidt, president and CEO of Leviathan. It took Leviathan just a few hours to determine that it’s a stanza from a German poem. The key is the same for every computer using LANrev.
The LANrev client software on a computer is configured to contact a server every minute or so to check in and see if the server has any commands for it. Knowing what the key is would let an attacker who has installed a sniffer on the network intercept that ping and masquerade as the server in communication back to the laptop. It requires the attacker to be on the same network as the target machine -- for example, on a wireless network at the school or anywhere else that offers free Wi-Fi the student might use.
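To make the problem concrete, here's a toy model of why a single hardcoded symmetric key is so dangerous. This is a deliberately simplified illustration, not LANrev's actual protocol, and the key and command strings are made up:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// A toy model of the flaw -- not LANrev's actual protocol. The point:
// every client ships with the same baked-in key, so anyone who pulls
// that key out of the software can "authenticate" as the server.
const SHARED_KEY = "the same key in every installed copy"; // illustrative

// Roughly how a client might check that a command came from the server.
function commandIsFromServer(command: string, tag: Buffer): boolean {
  const expected = createHmac("sha256", SHARED_KEY).update(command).digest();
  return tag.length === expected.length && timingSafeEqual(tag, expected);
}

// An attacker on the same network who has recovered the key can forge
// any command, and it verifies exactly like a legitimate one.
const forged = "run-this-program: anything-at-all";
const forgedTag = createHmac("sha256", SHARED_KEY).update(forged).digest();
console.log(commandIsFromServer(forged, forgedTag)); // prints: true
```

The core issue is that a secret shared by every installed copy of the software isn't really a secret: once one person extracts it, "authenticated" messages from anyone holding the key are indistinguishable from the real server's.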
To be fair, there's no evidence that anyone exploited this hole outside of the researchers who discovered it, but it still raises more questions about the wisdom of using such software, especially on laptops used by kids.
Late last week, of course, Google 'fessed up to the fact that it was accidentally collecting some data being transmitted over open WiFi connections with its Google Street View mapping cars. As we noted at the time, it was bad that Google was doing this, and worse that the company didn't realize it. However, it wasn't nearly as bad as some have made it out to be. First of all, anyone on those networks could have done the exact same thing. As a user on a network, it's your responsibility to secure your connection. Second, Google was getting, at most, a tiny fraction of anyone's data, since it only grabbed a quick snippet as its cars drove by. Third, it seemed clear that Google had not done anything with that collected data. So, yes, it was not a good thing that this was done, but the actual harm was somewhat minimal -- and, again, anyone else could have easily done the same thing (or much worse).
That said, given some governments' irrational fear of Google collecting any sort of information, this particular bit of news has quickly snowballed into investigations across Europe and calls for the FTC to get involved in the US. While one hopes that any investigation will quickly conclude that this is not as big a deal as it's being made out to be, my guess is that, at least in Europe, regulators will come down hard on Google.
However, taking things to an even more ridiculous level, the class action lawyers are jumping into the game. Eric Goldman points us to a hastily assembled class action lawsuit filed against Google over this issue. Basically, it looks like the lawyers found two people who kept open WiFi networks, and they're now suing Google, claiming that its Street View operations "harmed" them. For the life of me, I can't see how that argument makes any sense at all.
Basically, you have two people who could have easily secured their WiFi connection or, barring that, secured their own traffic over their open WiFi network, and chose to do neither. Then, you have a vague claim, with no evidence, that Google somehow got their traffic when its Street View cars photographed the streets where they live. As for what kind of harm it did? Well, there's nothing there either.
My favorite part, frankly, is that one of the two people involved in bringing the lawsuit, Vicki Van Valin, effectively admits that she failed to secure confidential information as per her own employment requirements. Yes, this is in her own lawsuit filing:
Van Valin works in the high technology field, and works from her home over her internet-connected computer a substantial amount of time. In connection with her work and home life, Van Valin transmits and receives a substantial amount of data from and to her computer over her wireless connection ("wireless data"). A significant amount of the wireless data is also subject to her employer's non-disclosure and security regulations.
Ok. So your company has non-disclosure and security regulations... and you access that data unencrypted over an unencrypted WiFi connection... and then want to blame someone else for it? How does that work, exactly? Basically, this woman appears to be admitting, in a lawsuit she filed on her own behalf, that she violated her own company's rules. Wow.
While there's nothing illegal about setting up an open WiFi network -- and, in fact, it's often a very sensible thing to do -- if you're using one, it is your responsibility to recognize that it is open, and that any unencrypted data you send over it can be seen by anyone else on the same access point.
This is clearly nothing more than a money grab by some people, and hopefully the courts toss it out quickly, though I imagine there will be more lawsuits like this one.
A few weeks ago, we had a post about why IT people need to be knowledgeable about the law, rather than just about technology. It was based on an excellent article by Dave Navetta on The Legal Defensibility Era (pdf). For years, IT folks have recognized that they often wear two hats, switching between a technology one and a business one, as they often have to explain or justify the business tradeoffs of the IT decisions they make. But these days, they also really need to add a legal hat.
Given the immense interest this particular topic received, we've decided that it will be the subject of our next webinar in our IT Innovation series: What IT Needs To Know About The Law, to be held next Wednesday, May 26th, at 9am PT/noon ET. We're thrilled that Dave Navetta, who wrote the article that sparked the original discussion, will be participating and discussing this "era of legal defensibility" that IT people need to understand. Dave has built a career around bridging the gap between IT folks and legal folks, and is obviously a perfect fit for this discussion. Joining him will be Larry Downes, most recently the author of The Laws of Disruption, which is all about how the legal realm is hugely important to understanding business and technology today, and how anyone looking to succeed in the internet age needs to understand some key legal principles. Larry's a well-known writer, speaker, pundit and consultant on this intersection of law and technology, and between Dave and Larry, the discussion should be quite a lot of fun. Once again, I'll be moderating.
I'm really excited about this particular topic and the two speakers. We've been preparing for the webinar over the past few days, and there are a ton of interesting topics to discuss concerning how the law is impacting security, privacy and the wider IT world. Depending on timing, we may dip into some other areas, including intellectual property law, Section 230 and the like. Given the discussions we regularly have on this site, and how important legal issues have become in the IT world over the past few years, this is going to be a can't-miss discussion, so sign up now. As with previous webinars, the discussion is designed to be interactive, and we'll be taking questions from the audience via the web interface during the event, so please come ready with questions.
Remember a few months ago, when a disgruntled ex-employee of a car dealer was able to log in to the dealer's computer system and remotely disable over 100 cars? There have also been concerns about the ability of systems like OnStar to remotely disable cars, and about what would happen if malicious hackers got their hands on those controls. Now, adding to those worries, some researchers are reporting that modern in-car computing is vulnerable to malicious hacks that could put drivers in danger.
The scientists say that they were able to remotely control braking and other functions, and that the car industry was running the risk of repeating the security mistakes of the PC industry....
The researchers, financed by the National Science Foundation, tested two versions of a late-model car in both laboratory and field settings. They did not identify the maker or the brand of the car, but said they believed they were representative of the computer network control systems that have proliferated in most cars today.
The researchers asked what could happen if a hacker could gain access to the network of a car, said Tadayoshi Kohno, a University of Washington computer scientist. He said the research teams were able to demonstrate their ability to circumvent a wide variety of systems critical to the safety of drivers and passengers.
They also demonstrated what they described as "composite attacks" that showed their ability to insert malicious software and then erase any evidence of tampering after a crash.
The researchers were able to activate dozens of functions, almost all of them while the car was in motion.
Happy driving, everyone...
To be fair, the researchers admit that they did not look at what kinds of "defenses" a car might have to block such attacks, but they do point out that those developing car computing systems probably don't have as much experience with, or concern for, security. For the most part, this doesn't sound like a problem anyone's going to face in the short term. If anything, I'm guessing we'll see a lot more moral panic stories about what could happen before any reports of something bad actually happening. However, at some point, these sorts of stories seem likely to pass over from the hypothetical into the real world, and at that point, I'll be looking for a car that runs on open source software.
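The underlying design issue appears to be that components on a car's internal network tend to trust every message they receive. Here's a deliberately abstract sketch of that general failure mode -- a toy model, not any real car's software, with made-up message IDs and handlers:

```typescript
// A toy model of an internal vehicle network -- not any real car's
// software. Message IDs and handlers are made up. The point: every
// node trusts every frame, and nothing identifies the sender.
type Frame = { id: number; data: number[] };

class VehicleBus {
  private handlers = new Map<number, (data: number[]) => void>();

  // Components register for message IDs; there's no sender identity.
  attach(id: number, handler: (data: number[]) => void): void {
    this.handlers.set(id, handler);
  }

  // Any node -- including a compromised one -- can send any frame.
  send(frame: Frame): void {
    this.handlers.get(frame.id)?.(frame.data);
  }
}

const bus = new VehicleBus();
bus.attach(0x220, (data) => console.log(`brake controller received: ${data}`));

// This injected frame is indistinguishable from a legitimate one.
bus.send({ id: 0x220, data: [0xff] });
```

In a design like this, nothing about a frame identifies or authenticates its sender, so once an attacker can inject messages at all, every listener on the bus is fair game.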
Miranda Neubauer was the first of a few of you to send in the news of a bizarre German court ruling that makes it effectively illegal to offer open WiFi. Seriously:
Germany's top criminal court ruled Wednesday that Internet users need to secure their private wireless connections by password to prevent unauthorized people from using their Web access to illegally download data.
Internet users can be fined up to €100 ($126) if a third party takes advantage of their unprotected WLAN connection to illegally download music or other files, the Karlsruhe-based court said in its verdict.
"Private users are obligated to check whether their wireless connection is adequately secured to the danger of unauthorized third parties abusing it to commit copyright violation," the court said.
This is backwards in so many ways. First, open WiFi is quite useful, and requiring a password can be a huge pain, limiting all sorts of individuals and organizations who have perfectly good reasons for offering free and open WiFi. Second, fining a WiFi hotspot owner for the actions of the service's users is highly problematic from a third-party liability standpoint. The operator of a hotspot should not be responsible for what its users do, and it's troubling that the German court found otherwise. This is an unfortunate ruling no matter how you look at it.
Every so often, we get complaints from people who point out that this site is called "Techdirt," and yet it quite frequently talks about legal issues. There are a few different responses to this, but one of the key points is that, if you're in the tech field these days, you really do need to be pretty familiar with the law in a lot of ways. This is a point I've been thinking about a lot lately, so it seemed like great timing when Michael Scott directed our attention to an article about how IT and security folks now need to recognize that legal risks are a big part of the security realm:
The era of legal defensibility is upon us. The legal risk associated with information security is significant and will only increase over time. Security professionals will have to defend their security decisions in a foreign realm: the legal world. This article discusses implementing security that is both secure and legally defensible, which is key for managing information security legal risk.
It certainly takes things pretty far outside the world where information security folks are used to living. And while they may feel able to defend their technological decisions should there be a security breach, reaching the level of "legal defensibility" involves a whole different set of issues.
The article linked above notes that we're still early in recognizing this overlap between security and law, and that it's important to have folks from all of these disciplines working together:
Now is the time for legal, privacy and security professionals to break down arbitrary and antiquated walls that separate their professions. The distinctions between security, privacy and compliance are becoming so blurred as to ultimately be meaningless. Like it or not, it all must be dealt with holistically, at the same time, and with expertise from multiple fronts. In this regard we must all develop thick skins and be not afraid to stop zealously guarding turf. The reality is, the legal and security worlds have collided, and most lawyers don't know enough about security, and most security professionals don't know enough about the law. Let's change that.
Indeed. In fact, this is part of the reason I made sure there was at least some legal discussion in our upcoming webinar on security in the cloud -- because it's an important aspect of security these days, and the cloud raises some serious legal questions (if you haven't registered yet, please do!). Making sure that legal and security/IT people talk about this regularly is important. Otherwise, you can bet that the legal folks will make decisions that come back to haunt those in the IT and security worlds...