White House Going With 'Security By Obscurity' As Excuse For Refusing To Release Healthcare.gov Security Details

from the hackers! dept

These days, in the computer security world, it's pretty well known that if you're relying on security by obscurity, you're not being secure. Somehow that message has not reached the techies working on Healthcare.gov. I guess it shouldn't be much of a surprise, given what a disaster the rollout of that site was, but lately everyone has been claiming that the whole thing is under control, since real techies were brought in to fix things. In fact, everyone was so happy with Mikey Dickerson's miraculous saving of the program that the White House set up a special US Digital Service for him to lead, allowing him to save other US government projects from near-certain disaster.

But when the Associated Press filed a Freedom of Information Act request to find out how Healthcare.gov was handling its security, it got rejected because, according to the White House, the details might teach hackers how to break into the system:
In denying access to the documents, including what's known as a site security plan, Medicare told the AP that disclosing them could violate health-privacy laws because it might give hackers enough information to break into the service.

"We concluded that releasing this information would potentially cause an unwarranted risk to consumers' private information," CMS spokesman Aaron Albright said in a statement.
Of course, that suggests that merely revealing the security steps the site has taken will reveal massive vulnerabilities -- and, as most people with even the slightest bit of technological knowledge know, if that's the case, then it's likely the site has already been compromised. If revealing the security setup for the site will leave it open to being hacked, we should probably assume the site was hacked a long, long time ago. If they're deploying security right, merely telling the world what they're doing wouldn't increase the risk. The fact that they're afraid it will suggests that the security plan is dangerously weak.


Filed Under: foia, healthcare.gov, security, security by obscurity
Companies: associated press


Reader Comments



  • AricTheRed (profile), 21 Aug 2014 @ 10:47am

    Really?

    "In denying access to the documents, including what's known as a site security plan, Medicare told the AP that disclosing them could violate health-privacy laws because it might give hackers enough information to break into the service."

    Like What? The web address?

  • Anonymous Coward, 21 Aug 2014 @ 10:50am

    Already Insecure

    So, Medicare is basically admitting that the site is already insecure, and thus already violates health-privacy laws. About right.

    • Mason Wheeler (profile), 21 Aug 2014 @ 11:28am

      Re: Already Insecure

      Yes, to anyone with any understanding of computer security, they literally did admit exactly that.

      One of the fundamental rules of computer security is Kerckhoffs's Principle. It can be informally stated as the following truism: if your system is not secure when an attacker knows every detail of the security system except the key, it is not secure.
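
      (A minimal sketch of that idea in Python, using only the standard library: the algorithm here, HMAC-SHA256, is completely public and assumed known to the attacker, and the security rests entirely on the secret key. The key and message are made up for illustration.)

      import hmac, hashlib, secrets

      # Kerckhoffs's Principle in miniature: the algorithm (HMAC-SHA256) is
      # published and assumed known to every attacker; only the key is secret.
      key = secrets.token_bytes(32)           # the one thing that must stay secret
      message = b"GET /enroll?plan=silver"    # hypothetical request to protect

      # The sender computes an authentication tag over the message.
      tag = hmac.new(key, message, hashlib.sha256).digest()

      # The receiver, knowing the same key, verifies it in constant time.
      # Without the key, knowing every detail of the scheme doesn't help a forger.
      ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())
      print("verified:", ok)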

      • rapnel, 21 Aug 2014 @ 11:54am

        Re: Re: Already Insecure

        That's a pretty big logic gap you leaped there. A secure system can have its methods known or not. I hardly condone the choice to make it easier to know, secure or not. It's not secure if it's not secure.

        The gist of the principle is to "assume" that they know - not to fucking "tell them".

        • Mason Wheeler (profile), 21 Aug 2014 @ 12:01pm

          Re: Re: Re: Already Insecure

          No, it's not a logic gap at all. It's the very heart of the reason why only open-source cryptography solutions are considered secure. Because you must assume that the adversary knows the system, there can be no harm in telling them, and only by allowing it to be analyzed independently can the system be trusted.

          • rapnel, 21 Aug 2014 @ 1:20pm

            Re: Re: Re: Re: Already Insecure

            Oh, nono, I understand but it seems that applying a singular cryptographic system's composition (along with its subsequent principles and theories) as a complete site security solution thus subjecting it all to that particular piece's guiding principles is incorrect, at best. Encryption is just a single component of any given "secure" system.

            Door manufacturers, lock types and brands, chain link and barbed wire mfgs, camera spreads, makes and models, tripwires, EWS, vault type, make and capacity...

            Whom do I monitor for zero day exploits - the router mfg, the firewall mfg, the web server kit? I watch all three then since your components are published, thanks.

            Anyone that thinks it's OK to make public an entire security protocol seems a bit daft to me.

            You want it? Fine. Dig it up via discovery, trial and error or just dumb luck and then fire away. I'm not going to paint the target for you and if you think I should then I might impolitely disagree.

            It is correct that obscurity is not security. It is also correct that encryption is not security. Encryption is a single piece of a given secure system.

            Encryption obfuscates data therefore encryption is the epitome of security by obscurity. How hard do you have to work to make the data clear? That's the secure part and you make that more difficult, not less. We now live in a world of keyloggers and subverted hardware components so, no, you can't have the plan or the map.

            • John Fenderson (profile), 21 Aug 2014 @ 1:59pm

              Re: Re: Re: Re: Re: Already Insecure

              "Anyone that thinks it's OK to make public an entire security protocol seems a bit daft to me"

              Anyone who thinks that not publishing it in any way enhances their security seems a bit daft to me.

              The reason is simple: everything gets reverse engineered and discovered anyway. There are no actual secrets. You need to design your security so it is effective even if the attacker knows everything there is to know about your security.

              The temptation is HUGE when dealing with security to include little things that must be kept secret -- after all, it's very easy to make security that relies on secrecy. A great way of removing this temptation is to establish that the entire system's operations will be revealed from the get-go.

              That keeps you from making stupid security decisions (such as anything that requires secrecy to work) and as a result enhances the security of the system you develop.

              • Anonymous Coward, 21 Aug 2014 @ 2:31pm

                Re: Re: Re: Re: Re: Re: Already Insecure

                Make public vs publishing it in any way

                Well done - arg moot.

                We encourage security (hopefully) through partnerships, plans, testing ad infinitum. Assuming the world is your partner and you'd like the world to have a go at your kit *can* be a valid proposal (often is, consider OSS) but to assume so is neither helpful nor correct.

                You have the entry point, correct? If you can't garner what you need from that then what exactly makes you think (assuming that you think this) the world has the "right" to your plan? Because your data might be there? Yeah, that's working well in the real world, ain't it?

                Your keys are secret, by the way. And your plan to secure your keys is what, again?

                • rapnel, 21 Aug 2014 @ 2:48pm

                  Re: Re: Re: Re: Re: Re: Re: Already Insecure

                  Sorry John, ^AC c'est moi

                  • rapnel, 21 Aug 2014 @ 3:16pm

                    Re: Re: Re: Re: Re: Re: Re: Re: Already Insecure

                    Just to clarify my position before I bug out - I basically concur with the first word.

                    I get the point. I understand the ideal. I understand security is an ever evolving posture (or should be) and I understand the value in disseminating a security plan for assistance in both validation and for further hardening benefits.

                    What I don't understand is the seemingly singular approach to this particular topic as uni-dimensional. A piece of software is not a plan. To assume that you can treat the plan as one piece of software with regards to "the plan" is, to me, outlandish and wholly subordinate to idealistic ideas that, at present, simply do not fit (in my mind) with the full spectrum of security.

                    Having pooped that out - I know there would be people frothing at the mouth to get this whitehouse plan, for fun and profit. Energy drinks aren't just for basketball. A part of "secure" security plan planning is still carefully placing the information about your kit into enough hands, not all the hands, enough hands. The right hands. It may very well be (very) insecure and this plan, and any like it, should be evolving. *Maybe* full disclosure is really not a good idea right now. There may very well be exactly three known "high" risk, un-patched vulnerabilities on what they have right now (or on any given day for that matter). Giving everyone an equal shot at them is an astoundingly juvenile stance (imho).

                    If you assume the worst then you prepare for it. Balking at releasing a full security plan should be a consistent and ever-present element of your security arsenal. They're choosing to use it and I can respect that.

                    • Anonymous Coward, 21 Aug 2014 @ 5:57pm

                      Re: Re: Re: Re: Re: Re: Re: Re: Re: Already Insecure

                      Damn man, I just got done reading your explanation, and I feel like you just left DefCon. My philosophy has always been that nothing is secure, because, well, nothing is. If you rely on any code, be it open-source or closed, it's not. There is always a new attack vector, even down to the ethernet adapter, that could hose your whole objective, which in this case is hosting a website for health care. Sometimes I wonder whether people really think about how much simply requesting a webpage actually involves on the back end, and how much goes into programming and securing every piece of it.

                      • Anonymous Coward, 21 Aug 2014 @ 6:18pm

                        Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Already Insecure

                        --There is always a new attack vector even down to the ethernet adapter that could hose your whole objective

                        Which is precisely why it is advantageous to keep details secret.

                        Make the attackers work hard to find out you are using brand X Ethernet switches that they can exploit, don't just tell them. Maybe their probing is detected and upon investigation you realize you overlooked this flaw and can address it before it's exploited.

                    • nasch (profile), 22 Aug 2014 @ 5:36am

                      Re: Re: Re: Re: Re: Re: Re: Re: Re: Already Insecure

                      A part of "secure" security plan planning is still carefully placing the information about your kit into enough hands, not all the hands, enough hands. The right hands.

                      It will get out. You might be able to keep nuclear launch codes secret forever (though the US military hasn't even succeeded at that) but something like a security plan will leak. For one thing, as we've so clearly seen, the government is unable to determine ahead of time who will not leak information.

                • John Fenderson (profile), 22 Aug 2014 @ 8:07am

                  Re: Re: Re: Re: Re: Re: Re: Already Insecure

                  "Assuming the world is your partner and you'd like the world to have a go at your kit *can* be a valid proposal (often is, consider OSS) but to assume so is neither helpful nor correct."

                  The point isn't that you assume the world is your partner. The point is that you assume the opposite. You're going to expose your plan to your "enemies" specifically because it forces you to think much more all-inclusively about your plan.

                  "what exactly makes you think (assuming that you think this) the world has the "right" to your plan?"

                  I don't think that. The entire concept of who has "rights" to what is orthogonal to the point.

                  "Your keys are secret, by the way. And your plan to secure your keys are what, again?"

                  Your keys are secret, yes, and keys of any sort (crypto, passwords, even keys to the front door of your house) represent a weakness. That we haven't figured out a decent way to eliminate them doesn't really affect the point.

                  The way you address that is to arrange things so that, in the larger scheme of things, theft of your keys isn't catastrophic. A security system that can be defanged just because someone got a key when they shouldn't have is a weak security system.

        • David, 21 Aug 2014 @ 12:43pm

          Re: Re: Re: Already Insecure

          Not a logic gap at all. Remember that when launched, there was a fuss that it never went through the normal security audits required for government web sites. It got a waiver. Given the patchwork last-minute rush to 'release' features on time, security very likely took a back seat - hence the need to get a security test waiver.

          The next question is, has it passed any rigorous, third-party security audit at all?

  • Anonymous Coward, 21 Aug 2014 @ 10:59am

    I'm willing to bet there is no site security plan, and they just don't want to admit they don't have one.

    • AricTheRed (profile), 21 Aug 2014 @ 11:11am

      Re:

      "I'm willing to bet there is no site security plan, and they just don't want to admit they don't have one."

      The initial draft was ye olde boilerplate response...

      No responsive documents to your request.... And someone decided that, while honest, would be embarrassing.

  • Anonymous Coward, 21 Aug 2014 @ 11:01am

    This'll make your day

    I work with PHI all day every day. Have for years. A LOT of PHI, much of it sourced from various government or quasi-government agencies.

    And none of those sources has ever bothered to send data on encrypted media. That's right. It all comes on CD, DVD or external drive, unencrypted, because, gosh, PGP/GPG and Truecrypt are just way too hard.

    So. Would anyone like to hazard a guess about the state of healthcare.gov's backup media? I've got a twenty that says "not encrypted" or "so badly encrypted that they might as well not have bothered".
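
    (For what it's worth, a minimal sketch of how little code media encryption takes - using the third-party pyca/cryptography package rather than the PGP/GPG or TrueCrypt tools mentioned above, with made-up file names:)

    from cryptography.fernet import Fernet   # assumes: pip install cryptography

    # One-time: generate a key and store it somewhere safe (not next to the data).
    key = Fernet.generate_key()

    # Encrypt a hypothetical PHI extract before it ever touches removable media.
    with open("phi_extract.csv", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open("phi_extract.csv.enc", "wb") as f:
        f.write(ciphertext)

    # The recipient, given the key out of band, reverses it:
    # plaintext = Fernet(key).decrypt(ciphertext)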

  • Anonymous Coward, 21 Aug 2014 @ 11:24am

    Layers, like an onion

    We all know the real reason they won't release the documents, because they are doing a piss poor job.

    That being said.....
    Anything that makes it more difficult for hackers to breach a system is a useful security tool. Doubly so if they are doing a poor job at securing things which is very likely.

    Sure, withholding information might not prevent a breach but it can help delay a breach and even help potential attacks get detected.

    The whole "security through obscurity is no security at all" line is a fallacy.

    My password and username are just obscure combinations of characters; once you know them, you can use them. If a system relies only on un/pw, it is not secure because it ONLY relies upon obscurity. When combined with other security methods the obscure un/pw becomes a useful tool to thwart hackers.

    The more a hacker knows about a target, the easier it is for them to craft an undetectable attack. The hacker might know that a Barracuda firewall will detect one method and a Cisco firewall will not. Publishing data about your security serves no useful purpose and is very helpful to potential attackers.

    Giving a burglar a map of all the video cameras and motion sensors in your building makes it easier for him to steal your stuff undetected. He could look through your windows and make his own map, but in doing so he may also be detected. Keeping the obscure information about your layout secret makes his job harder and makes it more likely that he is detected. No different in the digital world.

    Never rely on obscurity alone but using it as part of your defence is useful.
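
    (As a rough illustration of layering in code - a sketch using Python's standard library, with made-up names: the password stays "obscure", but even if the stored record leaks, the salted slow hash and an attempt counter mean it isn't the only thing standing between an attacker and the account.)

    import hashlib, hmac, os

    def hash_password(password, salt=None):
        # Layer 1: never store the password itself, only a salted, slow hash.
        salt = salt or os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def check_login(attempt, salt, digest, failures):
        # Layer 2: lock out after repeated failures, so guessing stays slow and
        # noisy even if the attacker knows exactly how this code works.
        if failures >= 5:
            return False
        candidate = hashlib.scrypt(attempt.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)

    salt, digest = hash_password("correct horse battery staple")
    print(check_login("hunter2", salt, digest, failures=0))                      # False
    print(check_login("correct horse battery staple", salt, digest, failures=0)) # True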

    • Mason Wheeler (profile), 21 Aug 2014 @ 12:06pm

      Re: Layers, like an onion

      Wrong, wrong, wrong, wrong, wrong. I'm sorry, but this idea stems from a fundamental misunderstanding of Kerckhoffs's Principle. The only thing that can be allowed to be obscure (and indeed, must be kept secret) in a secure information system is the key--the password, in this case. For everything else, it must be assumed that "the adversary knows the system," that all these details about the workings of the setup are already known to the hackers, and publishing it can cause no further harm.

      In light of this, publishing data about your security serves an immensely useful purpose: it allows your system to be independently audited and verified. Since you have to begin with the base assumption that it won't help the bad guys, the only thing left is for it to help the good guys.

      Remember, it's easy for anyone, no matter how smart or how dumb they are, to design a system so good that they can see no flaws in it.

      • Anonymous Coward, 21 Aug 2014 @ 5:32pm

        Re: Re: Layers, like an onion

        The key itself can be useless - why the hell do you think they now suggest using PFS? Yes, keys can be broken. Social engineering, like the Comodo breach (link), is still the weakest link in actual security. Well, that and engineers who still leave default passwords on SCADA systems - whether it's traffic lights just recently, or road signs displaying zombie-invasion warnings. Just remember that anyone can simply rent a huge number of AWS CUDA instances to run through any number of password attempts, which will crush Cray computers in the long run on simple multi-threaded attack vectors.
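
        (To put a rough number on that, a back-of-the-envelope sketch in Python; the guess rate is an assumption for illustration, not a benchmark:)

        # Rough time to exhaust a password keyspace at an assumed rented-GPU guess rate.
        GUESSES_PER_SECOND = 10_000_000_000   # assumed: ~10 billion guesses/s against a fast hash

        def years_to_exhaust(charset_size, length):
            keyspace = charset_size ** length
            return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

        print(years_to_exhaust(26, 8))    # lowercase-only, 8 chars: gone in seconds
        print(years_to_exhaust(95, 8))    # all printable ASCII, 8 chars: about a week
        print(years_to_exhaust(95, 14))   # 14 chars: roughly the age of the universe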

      • Anonymous Coward, 21 Aug 2014 @ 5:59pm

        Re: Re: Layers, like an onion

        --In light of this, publishing data about your security serves an immensely useful purpose: it allows your system to be independently audited and verified. Since you have to begin with the base assumption that it won't help the bad guys, the only thing left is for it to help the good guys.

        This ASSUMES the good guys did not overlook something that the bad guys found.

        It's a double-edged sword.

        • Anonymous Coward, 22 Aug 2014 @ 5:11am

          Re: Re: Re: Layers, like an onion

          The bad guys will be doing everything they can to learn all about your system anyway. No reason to make it harder for the good guys to help out.

    • Austin (profile), 21 Aug 2014 @ 1:18pm

      Re: Layers, like an onion

      The flaw in this logic is actually pretty simple: perfect security doesn't exist, never has, and never will.

      Software has bugs, and those bugs only sometimes prevent the software from operating as intended. That isn't a secret, but it points to another simple truth: the remaining bugs are almost always 1) undetected and 2) potential vectors for attack.

      This is why we have zero day exploits in the first place. Software vendors COULD spend 100+ years banging away at their software from outside, finding and patching every conceivable hole until it was 99.999% secure. If they did, Notepad would've cost 50 billion dollars to develop and wouldn't hit the market for another 70+ years. Corporations exist to make money, and they do that by shipping product ON A DEADLINE. Part of that is setting a standard for their software that is "secure enough" because common sense dictates they could never attain perfect security. Instead, they try to release software that is 99% secure and bug free, and still 100% on budget and on time.

      Sadly that other 1% is all it takes to end up leaking billions of usernames and passwords to anyone with enough time to find them. And that's the key here. Yanno that 100 years the corporations aren't going to sink into uber-securing their own product? Well, 100 hackers are each more than willing to sink a single year of their own time to find bugs. All it takes is one of those hackers getting lucky and we get an Adobe or PSN class leak of millions and millions of usernames and passwords, all sold to the highest bidder.

      Do you think Adobe or Sony managed to forestall the inevitable by keeping their security infrastructure secret? If so, they didn't forestall it very long.

      And herein lies why Security by Obscurity isn't secure. If they HAD published the plans for their security infrastructure (preferably before implementing it) and opened it for public comment, two things would've happened.

      1) Good, honest, white hat security researchers would've told them of the holes and bugs in their system beforehand, allowing Adobe to patch most (though again, not all - no security is EVER perfect) of the bugs before they were implemented, and thus before they were exploitable.

      2) The evil black hats would've had FAR fewer holes to exploit, and thus had a MUCH harder time doing so.

      Security by Obscurity prevents this, and runs under the incorrect theory that it's better if nobody sees your bugs than if everybody can point them out to you. This theory is incorrect because - newsflash - the internet is public. As a result, no matter how much you try to close off your vulnerable code, unless you physically rip out your network adapter, it CAN be exploited by someone, somewhere, all the time.

      So no. Nevermind the MASSIVE difference between secrecy, linguistic randomness, and bit length - that's a much more in depth discussion for another day - security by obscurity is simply flawed, even in theory. And it's even more flawed in practice.

      • rapnel, 21 Aug 2014 @ 1:33pm

        Re: Re: Layers, like an onion

        Again - folks seem to be holding up the examples of single components and conflating each and any of those individual components as representative of "The Security Solution".

        Containment, for example, is a part of a security solution. Should we publish containment protocols as well? Fuck, ship's got a hole in it - I guess we'll let it sink? Do we give the boat's plan to an enemy so that they know where best to target it? No, we don't - we make it difficult to obtain, at best.

      • LduN (profile), 22 Aug 2014 @ 12:30pm

        Re: Re: Layers, like an onion

        Agreed that even without telling potential attackers your layout you can and will eventually be attacked. However, what is the point of giving them a head start? If it takes them an extra week or two to figure out what holes you may or may not have, that's an extra week for the security team to detect an intrusion. It's the same thing as a chain-link fence around a secret complex: it's a delaying tactic that hopefully lasts just long enough for a routine check (patrolling etc.) to detect the attack.

        • nasch (profile), 22 Aug 2014 @ 3:15pm

          Re: Re: Re: Layers, like an onion

          If it takes them an extra week or two to figure out what holes you may or may not have, that's an extra week for the security team to detect an intrusion.

          The problem is, it's not a persistent advantage. Once your security setup gets out, it will spread quickly. You won't have an extra week of deterrence for every attacker in perpetuity. So is six months of slightly delaying attackers worth the cost of not having any outside parties review your security procedures?

          As for the fence analogy, note that a fence doesn't lose its effectiveness by an attacker knowing it's there.

          • LduN (profile), 25 Aug 2014 @ 11:34am

            Re: Re: Re: Re: Layers, like an onion

            But not knowing a fence is there can create some delays. After the first attack "everyone" knows the fence is there.

            • nasch (profile), 25 Aug 2014 @ 12:14pm

              Re: Re: Re: Re: Re: Layers, like an onion

              After the first attack "everyone" knows the fence is there.

              Exactly. And after the first attack, "everyone" will know about the computer system's front line security measures too. So what is the point of trying to keep them secret? And remember that there are benefits to publishing, so "why not?" or something equivalent isn't a good enough reason.

    • Anonymous Coward, 21 Aug 2014 @ 3:03pm

      Re: Layers, like an onion

      I rely on URL obscurity and double ROT-13 encryption. I've got layers, like an onion! I'm secure!
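
      (For anyone who missed the joke, a two-line Python demo: ROT-13 applied twice is the identity function, so "double ROT-13" is no encryption at all.)

      import codecs

      secret = "my super secure healthcare data"
      once = codecs.encode(secret, "rot_13")    # "zl fhcre frpher urnygupner qngn"
      twice = codecs.encode(once, "rot_13")     # right back to the plaintext
      print(twice == secret)                    # True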

    • Ninja (profile), 22 Aug 2014 @ 5:24am

      Re: Layers, like an onion

      I'd say that a better approach would be to properly encrypt and secure your keys (pw) and add multi-factor authentication. As in, you need the pw and other random things (think Google Authenticator) or even external things (a YubiKey comes to mind). The obscurity referred to in the article is about refusing to make the system transparent and open so more people can audit it.
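
      (A minimal sketch of that kind of second factor - the TOTP scheme that Google Authenticator implements (RFC 6238) - in plain Python; the shared secret here is a made-up example value:)

      import base64, hashlib, hmac, struct, time

      def totp(secret_b32, interval=30, digits=6):
          # The phone and the server share this secret once; after that the code
          # changes every 30 seconds, so a stolen password alone isn't enough.
          key = base64.b32decode(secret_b32)
          counter = int(time.time()) // interval
          mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
          offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
          code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
          return str(code % 10 ** digits).zfill(digits)

      print(totp("JBSWY3DPEHPK3PXP"))   # matches what an authenticator app shows right now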

    • nasch (profile), 22 Aug 2014 @ 5:38am

      Re: Layers, like an onion

      The more a hacker knows about a target, the easier it is for them to craft an undetectable attack.

      So open source software must be much easier to attack right?

  • rapnel, 21 Aug 2014 @ 11:37am

    Uh

    I don't know, man. If you asked me for a site security plan I'd say "fuck you" too.

    Data protection and recovery plans? Sure. Security? Piss off.

  • Anonymous Coward, 21 Aug 2014 @ 12:14pm

    Publishing data about your security serves no useful purpose and is very helpful to potential attackers.

    Good, so security experts can pick it over and make sure that the piss-poor job that has been done with Healthcare.gov can resist hacker intrusion into personal medical records.

  • John Fenderson (profile), 21 Aug 2014 @ 12:19pm

    An admission of weak security

    "We concluded that releasing this information would potentially cause an unwarranted risk to consumers' private information," CMS spokesman Aaron Albright said in a statement.


    In other words, he's admitting that the security they've put in place is inadequate.

  • Anonymous Coward, 21 Aug 2014 @ 12:51pm

    If the US Gov doesn't feel confident enough to release its security practices, then I don't feel confident enough to store my personal information on their site.

  • hackity hack hack, 21 Aug 2014 @ 2:25pm

    well isn't them thar peoples stupid

    perhaps someone ought to just post all the changes in security they did and then laugh utterly at the ensuing chaos... it's way past time being nice... oh wait, we don't deface and tell anymore...

    ....grins.....

  • Anonymous Coward, 21 Aug 2014 @ 3:01pm

    Security by obscurity never works.

  • Anonymous Coward, 21 Aug 2014 @ 3:06pm

    Obscurity of Security

    Obscurity of security is used to avoid accountability for poor security.

  • Mark A. Meggs, 21 Aug 2014 @ 3:13pm

    "Techies"

    Mike - Do NOT blame the "techies" (whatever you mean by that). They had no involvement in the decision not to release.

    That said, as a former federal IT specialist in IT security I can tell you that a system security plan (the preferred government term) would at the very least be "sensitive, but unclassified" material. It describes in detail what protections are in place for the system. The stupid decision would have been to release it.

  • Anonymous Coward, 21 Aug 2014 @ 3:22pm

    Re: Already Insecure

    Sadly, publishing information about a thing does not guarantee that flaws will be found by whitehats in a timely fashion. (*cough* heartbleed)

    And the same coughing fit covers the case of not having enough proactive testing.

    I've not heard of any proactive security testing in relation to these health care websites.

    • nasch (profile), 22 Aug 2014 @ 5:39am

      Re: Re: Already Insecure

      Sadly, publishing information about a thing does not guarantee that flaws will be found by whitehats in a timely fashion.

      And not publishing it does not guarantee black hats will not get it.

  • Anonymous Coward, 21 Aug 2014 @ 6:00pm

    Back when Obamacare failed from bad implementation, there was a group that went through the junk that represented the official government programming code, looking for the problems that caused all the issues. One of the findings was a total lack of security. Security was never planned as part of the programming from the start. The problem with that is that you can't just go back in and reprogram security into it. To be done properly it has to be put in place from the ground floor. Doing it after the fact just creates more places for hackers to find the doors to get in.

    None of this was secret, other than that with all the hubbub over Obamacare not working as a site, all attention was put on the fixing so it would work. Even then, the time frame had everyone screaming about how they weren't going to get it done in time to meet the deadline. So how much time do you think was put into addressing the security part?

    Somewhere, and I believe it was here at Techdirt, there was mention of this lack of security. Evidently, the Associated Press followed up on this aspect of the report that no one paid any attention to while everyone was running around with their heads up their butts looking for the magic bullet that would make everything work within the time frame.

    I remember reading about it. I thought then I would not apply for the ACA because I didn't want my data to be part of the government free-for-all warehouse of data for the first hacker to make it past the winning ribbon at the finish line.

    While I have no idea if it will be better or not, this security part is the main, foremost reason I did not apply for the ACA. I will, for better or worse, apply for Medicare due to age instead. No matter how I turn I can't keep all the data from the government, and I don't like providing that data for first-come-first-serve hackers.

  • Anonymous Coward, 21 Aug 2014 @ 6:39pm

    Bypass the whole bloodly thing.

    So let's assume that healthcare.gov is perfectly secure from hackers, but I'm interested in getting as much US citizens' information as I can find. Currently, from my perspective, healthcare.gov is routed through AS20940, Akamai. They currently are not using RPKI, so I get most of this. I purchase a couple of servers at some major cross-connect point anywhere in the world, through the sleaziest providers that I can find. From my current location healthcare.gov points to 23.76.211.50, which is routed through 23.76.208.0/20 to them. All I have to do is advertise a smaller route, say 23.76.211.0/24, and I can hijack every connection to healthcare.gov for at least a couple of hours, considering how long it took for other victims to notice and take action previously. I really wonder how many US citizens would realize that they are not putting information into a secure website - or, if the attackers are really tricky and hijack the CA to get a certificate, would even bother to notice that all their personal information is compromised.
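
    (The mechanics of why that works, sketched with Python's standard ipaddress module; the prefixes are the ones mentioned above, and the behavior shown is ordinary longest-prefix-match forwarding:)

    import ipaddress

    target = ipaddress.ip_address("23.76.211.50")       # the healthcare.gov edge IP cited above
    legit = ipaddress.ip_network("23.76.208.0/20")       # the real (Akamai) announcement
    hijack = ipaddress.ip_network("23.76.211.0/24")      # the attacker's more-specific advertisement

    # Both prefixes cover the target address...
    print(target in legit, target in hijack)              # True True

    # ...but routers forward on the longest (most specific) matching prefix,
    # so the /24 wins over the /20 and the traffic flows to the hijacker.
    best = max((n for n in (legit, hijack) if target in n), key=lambda n: n.prefixlen)
    print(best)                                            # 23.76.211.0/24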

  • anonymous, 26 Aug 2014 @ 8:01am

    Obscurity only works to improve or preserve security (or at least slows down the bad guy) if the bad guys are not as smart as those that implement the security. Since it's a given that they are smarter, obscurity does not work by definition.
