EternalSuffering: NSA Exploits Still Being Successfully Used To Hijack Computers More Than A Year After Patching

from the can't-stop-won't-stop dept

Everything old and awful is new again. And still awful. Zack Whittaker reports for TechCrunch that the NSA's purloined kit of computer nasties is still causing problems more than a year after security patches were issued by affected vendors.

[A]kamai says that attackers are using more powerful exploits to burrow through the router and infect individual computers on the network. That gives the attackers a far greater scope of devices they can target, and makes the malicious network far stronger.

“While it is unfortunate to see UPnProxy being actively leveraged to attack systems previously shielded behind the NAT, it was bound to happen eventually,” said Akamai’s Chad Seaman, who wrote the report.

There are more technical details in Akamai's report, which notes the deployment of EternalRed and EternalBlue, which target Linux and Windows machines respectively. It appears to be a massive crime of opportunity, with attackers possibly scanning the entire internet for vulnerable ports/paths and injecting code to gain control of computers and devices. The "shotgun approach" isn't efficient but it is getting the job done. Akamai refers to this new packaging of NSA exploits as EternalSilence, after the phrase "galleta silenciosa" ("silent cookie/cracker") found in the injected rulesets.
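
For readers curious about the plumbing: UPnProxy abuses the router's standard UPnP port-mapping interface (the same SOAP actions legitimate software uses to open ports) to inject forwarding rules that expose machines behind the NAT. As a rough illustration, here is a minimal, defensive sketch in Python of how an administrator might walk a router's UPnP mapping table and flag entries that resemble the EternalSilence injections. This is not Akamai's tooling and not the attack code; the control URL is a placeholder (it would normally be discovered via SSDP and the device description XML), and the "galleta"/SMB-port heuristics are assumptions drawn from the report's description.

```python
import re
import urllib.error
import urllib.request

# Placeholder control URL; the real one comes from SSDP discovery + the device XML.
CONTROL_URL = "http://192.168.1.1:5000/ctl/IPConn"
SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

SOAP_BODY = """<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:GetGenericPortMappingEntry xmlns:u="{service}">
      <NewPortMappingIndex>{index}</NewPortMappingIndex>
    </u:GetGenericPortMappingEntry>
  </s:Body>
</s:Envelope>"""


def get_mapping(index):
    """Fetch one NAT port-mapping entry by index; return None when the table ends."""
    req = urllib.request.Request(
        CONTROL_URL,
        data=SOAP_BODY.format(service=SERVICE, index=index).encode(),
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPAction": '"%s#GetGenericPortMappingEntry"' % SERVICE,
        },
    )
    try:
        xml = urllib.request.urlopen(req, timeout=5).read().decode()
    except urllib.error.HTTPError:
        return None  # most routers answer with a SOAP fault (HTTP 500) past the end of the table
    # Pull out <NewExternalPort>, <NewInternalClient>, <NewPortMappingDescription>, etc.
    return dict(re.findall(r"<(New\w+)>([^<]*)</\1>", xml))


index = 0
suspicious = []
while (entry := get_mapping(index)) is not None:
    desc = entry.get("NewPortMappingDescription", "").lower()
    internal_port = entry.get("NewInternalPort", "")
    # The report mentions "galleta silenciosa" in the injected rulesets; forwarding to
    # SMB/Samba ports is what EternalBlue/EternalRed would need (an assumption here).
    if "galleta" in desc or internal_port in ("139", "445"):
        suspicious.append(entry)
    index += 1

for entry in suspicious:
    print(entry)
```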

The damage caused by this latest wave of repurposed surveillance code could still be rather severe, even with several rounds of patches immunizing a large number of devices against this attack.

Currently, the 45,113 routers with confirmed injections expose a total of 1.7 million unique machines to the attackers. We've reached this conclusion by logging the number of unique IPs exposed per router, and then adding them up. It is difficult to tell if these attempts led to a successful exposure as we don't know if a machine was assigned that IP at the time of the injection. Additionally, there is no way to tell if EternalBlue or EternalRed was used to successfully compromise the exposed machine. However, if only a fraction of the potentially exposed systems were successfully compromised and fell into the hands of the attackers, the situation would quickly turn from bad to worse.
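
To make the arithmetic concrete, here's a tiny sketch of the counting method described in that passage: unique internal IPs logged per injected router, then summed across routers. The sample data and field layout below are invented for illustration; only the method comes from Akamai's description.

```python
from collections import defaultdict

# Hypothetical log entries: (router public IP, internal IP exposed by an injected mapping).
observed = [
    ("203.0.113.7", "192.168.1.10"),
    ("203.0.113.7", "192.168.1.11"),
    ("203.0.113.7", "192.168.1.10"),  # duplicate injection, same machine
    ("198.51.100.4", "10.0.0.5"),
]

# Count unique exposed IPs per router...
exposed_per_router = defaultdict(set)
for router_ip, internal_ip in observed:
    exposed_per_router[router_ip].add(internal_ip)

# ...then add them up, which is how the 1.7 million figure was reached across 45,113 routers.
total_exposed = sum(len(ips) for ips in exposed_per_router.values())
print(f"{len(exposed_per_router)} routers expose {total_exposed} unique machines")
# prints: 2 routers expose 3 unique machines
```

As the quote notes, this counts potentially exposed IPs, not confirmed compromises.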

More of the same, then. Perhaps not at the scale seen in the past, but more attacks using the NSA's hoarded exploits. Hoarding exploits is a pretty solid plan, so long as they don't fall into the hands of… well, anyone else really. Failing to plan for this inevitability is just one of the many problems with the NSA's half-assed participation in the Vulnerability Equities Process.

Since the tools began taking their toll on the world's computer systems last year, there's been no sign the NSA is reconsidering its stance on hunting and hoarding exploits. The intelligence gains are potentially too large to be sacrificed for the security of millions of non-target computer users. It may claim these tools are essential to national security, but for which nation? The exploits wreaked havoc all over the world, but it would appear the stash of exploits primarily benefited one nation before they were inadvertently dumped into the public domain. Do the net gains in national security outweigh the losses sustained worldwide? I'd like to see the NSA run the numbers on that.



Filed Under: exploits, malware, nsa, online security, vulnerabilities


Reader Comments



    That One Guy (profile), 5 Dec 2018 @ 3:44am

    'Not our computers, not our problem'

    Since the tools began taking their toll on the world's computer systems last year, there's been no sign the NSA is reconsidering its stance on hunting and hoarding exploits.

    Why would it? Millions of systems infected doesn't matter if their systems aren't in the lot, and even if they are they've certainly got the money to simply scrap a system and replace it.

    Do the net gains in national security outweigh the losses sustained worldwide? I'd like to see the NSA run the numbers on that.

    I suspect any 'running' the numbers would run something like this:

    'How many systems were we able to crack using those exploits, versus how many systems that actually matter (read: ours) stand to be compromised if these exploits are leaked?' If the former outweighs the latter it's worth it, and if a little 'collateral damage' occurs, well, that's someone else's problem.

      Anonymous Coward, 5 Dec 2018 @ 6:04am

      Re: 'Not our computers, not our problem'

      I have no love for the NSA, but exploits aren't magic—they take advantage of security holes left by the original programmers. Let's save a bit of blame for the people writing and releasing insecure software, and a bit more for the subset keeping their source code a secret such that the "good guys" can't easily check its security.

        Thad (profile), 5 Dec 2018 @ 7:38am

        Re: Re: 'Not our computers, not our problem'

        Let's save a bit of blame for the people writing and releasing insecure software

        Let's be fair, here: short of formal verification, there's no way to build software that's perfectly secure. You know what software has vulnerabilities that will be exposed if the full weight of the US intelligence apparatus is turned toward finding them? All of it.

        and a bit more for the subset keeping their source code a secret such that the "good guys" can't easily check its security.

        Yeah, that's fair. If you release your source code, then it's a lot easier for white hats to find the vulnerabilities before state actors do.

          Anonymous Coward, 5 Dec 2018 @ 8:37am

          Re: Re: Re: 'Not our computers, not our problem'

          You know what software has vulnerabilities that will be exposed if the full weight of the US intelligence apparatus is turned toward finding them? All of it.

          The USA isn't so special. Every country's doing the same. If formal verification is the only way to get security, we'll need to pay that cost. The status quo—write shitty software for cheap, and maybe fix it after it's been compromised and people find out it's been compromised—isn't working.

          If there's one benefit we've seen from cryptocurrency, it's the idea that there are real and major costs to those running insecure software. That's the only way people are going to spend the money necessary.

          One could argue that we all benefit from those NSA exploits being released. It might have been nice to get software patched first, even to get descriptions of the vulnerabilities without having to reverse-engineer exploits. But of all the countries who developed exploits, at least America's led to better software. You can bet Russia and China have similar things that nobody else knows about.

            Thad (profile), 5 Dec 2018 @ 9:01am

            Re: Re: Re: Re: 'Not our computers, not our problem'

            The USA isn't so special. Every country's doing the same.

            Yes, but some countries have better intelligence apparati than others. I think that Microsoft can do a pretty good job of protecting itself against, say, the Liechtenstein intelligence apparatus.

            I could have said "Russian" or "Chinese", but...we're not talking about exploits discovered by those countries, we're talking about exploits discovered by the NSA.

            If formal verification is the only way to get security, we'll need to pay that cost.

            You know, you could have just admitted you don't know what formal verification is.

              Anonymous Coward, 5 Dec 2018 @ 9:47am

              Re: Re: Re: Re: Re: 'Not our computers, not our problem'

              You know, you could have just admitted you don't know what formal verification is.

              I've done formal verification. What do you mean?

              we're not talking about exploits discovered by those countries, we're talking about exploits discovered by the NSA.

              Right. Because these exploits got out, Microsoft can protect us against them. They can't do the same for the other ones.

                Thad (profile), 5 Dec 2018 @ 9:54am

                Re: tl;dr

                I've done formal verification. What do you mean?

                I mean that simply stating "If formal verification is the only way to get security, we'll need to pay that cost" blithely ignores what that cost is.

                Formal verification has already proven its utility for small, special-purpose programs. But implementing it system-wide on a general-purpose operating system is not a practical possibility at this time. If you're talking about transitioning over to that model over a period of decades, then you may have a point. But right here, right now, the costs of formal verification are out of reach for the vast majority of projects -- in money, in computational complexity, and in man-hours.

                  Nom de Clavier, 5 Dec 2018 @ 10:22am

                  Re: Re: tl;dr

                  Thad wrote: Formal verification has already proven its utility for small, special-purpose programs. But implementing it system-wide on a general-purpose operating system is not a practical possibility at this time.

                  Boy, you're really becoming an overweening weenie.

                  Within your extremes (by which you thereby rule out any point the other commenter makes and therefore you're "right") is a broad range of reasonable compromise.

                  For instance, Microsoft's "Autoplay" turned on by default in all installs. That was just stupid, annoying at best, inviting compromise at worst, even facilitating it because one had only to slip a USB stick in to automatically install malware.

                  Programmers NOT adding "features" would be the greatest security practice ever, and it wouldn't take any "verification" at all.

                  I've just proven you wrong. Since it's ME and I'm not going to bother with your "whatabout" and "you don't understand" replies, you shouldn't reply either. But I bet you can't resist trying to "win"...

                  Anonymous Coward, 5 Dec 2018 @ 11:59am

                  Re: Re: tl;dr

                  I mean that simply stating "If formal verification is the only way to get security, we'll need to pay that cost" blithely ignores what that cost is.

                  Not "blithely", and it's a response to your statement: "short of formal verification, there's no way to build software that's perfectly secure."

                  By that logic, we have two choices: insecurity or huge costs. I'll choose "huge costs", if I have to, and hope that economies of scale reduce them over time. These efforts will increase reliability as much as security, and given the increasingly safety-critical role of computers, that's something we need too.

                  I'm not suggesting timeframes here. If it will take decades, so be it. But that means from 2019-2039 we should be really careful what data we put on networked computers, and I don't see that happening. For example, why did Marriott have years' worth of hotel data online? They could've dumped that stuff onto tapes and into a warehouse a month after anyone checked out. For that matter, why was passport data ever in the computer? It could've been kept on paper, at the hotel, and shredded after a short wait. (And honestly, how many of these data breaches do we think were done by nation-states?)

                    Thad (profile), 5 Dec 2018 @ 1:47pm

                    Re: tl;dr

                    I'm not suggesting timeframes here. If it will take decades, so be it. But that means from 2019-2039 we should be really careful what data we put on networked computers, and I don't see that happening.

                    Which is really the point I'm making here: in the time it takes to develop a formally-verified platform that's feature-equivalent to the platforms we use today...what do you expect people to do? They're not just going to stop using computers in the meantime.

                    For example, why did Marriott have years' worth of hotel data online? They could've dumped that stuff onto tapes and into a warehouse a month after anyone checked out. For that matter, why was passport data ever in the computer? It could've been kept on paper, at the hotel, and shredded after a short wait.

                    And that's a good answer: absent perfect security, we should at least strive for the best security we can realistically achieve, and stop making stupid, sloppy mistakes.

                    (Formal verification will not, of course, protect against human error. A perfectly secure computer would still be vulnerable to social engineering.)

                    (And honestly, how many of these data breaches do we think were done by nation-states?)

                    Which is why I specifically referred to nation-state attacks as ones that vendors can't realistically defend against (depending, of course, on the nation-state). Improving security to make it very difficult for a random script kiddie to access your backend is an entirely different goal from improving security to the point that the NSA won't be able to find a way in.

                    Anonymous Coward, 6 Dec 2018 @ 6:57am

                    Re: Re: Re: tl;dr

                    The security points that you bring up have nothing to do with formal verification. Indeed, a formally verified system could be wide open: if that is what is specified, that is what formal verification will prove.

        ShadowNinja (profile), 5 Dec 2018 @ 8:47am

        Re: Re: 'Not our computers, not our problem'

        > Let's save a bit of blame for the people writing and releasing insecure software

        While they can do more some of the time, the fact is that if a Nation State is trying to break into your systems, you're 100% fucked no matter how seriously you take security.

        Nation states can afford to spend an infinite amount of money on finding vulnerabilities and breaking into your software/hardware/etc. Businesses that develop the software/hardware/etc. have limited amounts of money, even the giants like Microsoft/Google/Apple/Amazon. If you have an infinite amount of money and you want to find a way to hack users using their products, you're going to find it eventually.

        I'm not just making this up out of thin air. This is literally what the Cyber Security division of the company I work for says about Cyber Security.

          Anonymous Coward, 5 Dec 2018 @ 9:56am

          Re: Re: Re: 'Not our computers, not our problem'

          you're 100% fucked no matter how seriously you take security.

          It's theoretically possible to write secure software, but I'll settle for getting people to stop writing basic embarrassing bugs. Far from requiring "infinite" resources, some of the exploits use nothing more than simple buffer overflows, double-frees, etc., and we 100% know how to prevent those. The "but they're a nation state" excuse only goes so far, and shouldn't cover anything easily found by valgrind, compiler warnings, fuzzers, etc., or fixed by using better languages.

            Thad (profile), 5 Dec 2018 @ 10:13am

            Re: Re: Re: Re: 'Not our computers, not our problem'

            That's a fair point. No system can be perfectly secure (again, aside from formal verification, which is too expensive to be practical for most purposes at this point in time), but many systems can be more secure, and the ways to improve their security are well-known at this point.

              Anonymous Coward, 5 Dec 2018 @ 12:25pm

              Re: Re: Re: Re: Re: 'Not our computers, not our problem'

              If "real" engineering were as bad as software engineering, we'd have a plane fall out of the sky every week. And although it would still be safer (much safer) than driving, we manage to pour a lot of money into preventing that. (And into the DHS, which has never been shown to solve any real problem.)

                Thad (profile), 5 Dec 2018 @ 1:39pm

                Re: tl;dr

                And if software engineering were as secure as civil and mechanical engineering, we wouldn't be having this conversation, because perfectly secure software means simple, single-purpose devices.

        That One Guy (profile), 5 Dec 2018 @ 1:10pm

        Re: Re: 'Not our computers, not our problem'

        No program is going to be perfect, and there will always be holes and exploitable parts, that's a given. So long as the company isn't half-assing development and just shoving it out the door, I don't really see any reason to blame developers for what amounts to being human. Now if they're informed of flaws after the fact and ignore them and/or attack the one trying to help them, that I would certainly blame them for, but 'not making perfectly secure programs' is a bar unreasonably high, and one I'm not going to ding them for not reaching.

        However, when you've got an agency that is (theoretically) serving and protecting the public finding those flaws and, rather than informing the relevant company so they can be fixed and make people safer, using said flaws itself, that I most certainly can blame them for.

        They're leaving everyone else less secure just so they can go poking around various systems easier, and as this whole debacle has demonstrated their greed can be very costly to the public.

    Anonymous Coward, 5 Dec 2018 @ 12:46pm

    What are the odds that the CIA and NSA didn't have moles inside hardware and software companies? So rather than trying to find existing exploits, they'd be actually creating them. Or does anyone actually think that the RSA fiasco was just a fluke?

      Thads all, folks, 6 Dec 2018 @ 12:12am

      Re:

      BINGO.

      Full Spectrum Dominance.

      But don't forget that INFRAGARD is all these “retired” agents from the agencies, in an “information sharing” environment.

      That, along with the Reality Winners of the world, who only leak/mole/hack when it suits them personally, and politically.

      "America has fallen without a shot"

        Wendy Cockcroft, 6 Dec 2018 @ 5:58am

        Re: Re:

        Reality Winner exposed Russian hacker interference in elections. Define "America." It's not the white supremacy paradise you think it is — or want it to be. Winner is twice the human you'll ever be in your wildest dreams.

        On June 3, 2017, while employed by the military contractor Pluribus International Corporation, Winner was arrested on suspicion of leaking an intelligence report about Russian interference in the 2016 United States elections to the news website The Intercept. The report suggested that Russian hackers had accessed at least one U.S. voting-software supplier. - Wikipedia


