Given Spy Agencies' Love For Exploits And Malware, It's Never Been More Dangerous To Be A Security Researcher

from the only-acceptable-form-of-security-is-'national,'-apparently dept

There's probably never been a great time to be a security researcher, what with the attacks they suffer in response to their work, not to mention far too many corporations greeting the discovery of vulnerabilities with legal threats and criminal charges.

The worldwide adoption of the Wassenaar Arrangement threatens to treat the products of security research like weapons-grade plutonium, making every outbound flight with a laptop or USB drive akin to smuggling weapons out of the country.

But that's not the end of it. Security researchers work in the same shadowy areas as national security agencies, as well as other government entities focused on espionage. These agencies not only create their own malware and exploits, but also purchase them from companies in the vulnerabilities business.

Security researchers generally want to expose holes and dangerous software. Almost every other entity involved would prefer to keep these hidden. Any malicious software exposed and patched into irrelevance could carry national security implications. This puts researchers in the crosshairs of both governments and their exploit suppliers. And the system -- as it is -- has very little in the way of built-in protections for legitimate security research.

A paper published by Juan Andrés Guerrero-Saade of Kaspersky Lab points out the inherent dangers of security research in an age when one form of security (national) routinely takes precedence over another form (general computing). [h/t Slashdot]

Though private research teams and intelligence agencies will follow similar intelligence production cycles, we must not conflate their attributes.

(1) Intelligence agencies benefit from cover for action, meaning that other governmental institutions do not find the agencies’ intelligence production activities suspect.
(2) Agency employees enjoy legal protections, even those involved in network exploitation activities. And finally,
(3) their work is shielded from political blowback or geopolitical incongruousness.

Each point is inversely applicable to security researchers and thus sets the tone for the power asymmetry:

1. Security researchers enjoy no cover for action for their production of intelligence reports into what may or may not constitute legitimate intelligence operations…
2. Security researchers are afforded no explicit legal protections for the grey areas regularly visited throughout the course of an investigation…
3. The companies too lack a cover for action and are in no way insulated from the political blowback that arises from the public disclosure of sensitive operations. They suffer from a further dimension of ‘guilt by association’ as research into sensitive operations and subsequent reporting is misconstrued as an act of geopolitical aggression when the victim and perpetrator are involved in any form of international tension...

On top of that, the "work cycles" are also inverted. Security researchers may not know they're treading on the cyber-toes of state operatives until long after the research has begun. Contrast that to the operations of intelligence agencies/exploit marketers, who are only seeking holes rather than fixes and know with certainty what they're deploying or who they're targeting. When security researchers stumble across vulnerabilities, they're not always aware of their origin and may find themselves facing prosecution or, at the very least, government interference and/or additional surveillance.

But those outcomes are the least worrying of the possibilities. Security researchers may find themselves involved with governments willing to deploy far more severe tactics.

The researcher as a private individual faces unique challenges when in the cross-hairs of a nation-state actor determined to enact some form of retribution. The operator of an espionage campaign is not a common criminal nor a simple citizen and his resources are truly manifold. As a special class of government insider responsible for a sensitive operation, the attacker can go so far as to legitimize special recourse in order to neutralize the threat posed by the meddling security researcher. The options available slide relative to the nature of the attacker, ranging from civilized to unscrupulous, and include: subtle pressure, patriotic enlistment, bribery, compromise and blackmail, legal repercussions, threat to livelihood, threat to viability of life in the actor’s area of influence, threat of force, or elimination.

With no built-in protections for researchers, the potential negative outcomes of their work may outweigh the intangible rewards of the work itself -- more secure computing.

With this in mind, a rather interesting contract award was posted to the Federal Business Opportunities (FBO) website. Kudu Dynamics -- which has secured previous DARPA/Dept. of Defense contracts related to cybersecurity -- apparently landed a $500,000 contract for a project that appears to give the company permission to spy on security researchers.

ACLU technologist Chris Soghoian phrased it this way:

DARPA awards 500k grant to spy on security vuln researchers. Seriously.

Here's the synopsis of the project (emphasis added):

The goal of Kudu's proposed effort, named "Internet Cyber Early Warning of Adversary Research and Development (ICEWARD)", is to determine whether it is possible to gain actionable insight into the intent of a cyber adversary by observing specific behaviors. In particular, the proposers hypothesize that vulnerability researchers make use of public information and resources (such as search engines and websites) that are relevant to their missions, targets, and techniques in such a way that it is possible to glean part of their intent if only we could observe such use and differentiate it from noise (e.g., search engine crawlers). The basis for this hypothesis is both the proposers' own experience as vulnerability researchers and a little-noticed incident in 2010-2011. The proposed approach will investigate the feasibility of creating highly tailored information resources whose access via the public network will be inherently highly correlated with vulnerability research. A second aspect of the proposed work entails the creation of an evaluation methodology for proving or disproving this hypothesis.
What it seems to suggest is that Kudu would be allowed to read over the shoulders of security researchers as they used public resources, separate that use from the "noise" of bot activity, and extrapolate the researchers' intended goals.
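
To make the hypothesis concrete, here's a minimal sketch of how such a "tailored information resource" might work: a web page whose content would only plausibly interest someone researching a particular flaw, with every access logged and obvious search-engine crawlers filtered out as "noise." Everything in it -- the path, the crawler list, the heuristics -- is an illustrative assumption on my part, not anything described in the contract synopsis.

# Hypothetical "canary" resource of the kind the ICEWARD synopsis describes.
# Accesses to a tailored page are logged; well-known crawlers are discarded
# as noise so what remains is more likely to be a human analyst.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(filename="canary_hits.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Substrings identifying common search-engine crawlers. A real system would
# need far richer signals (robots.txt fetch patterns, timing, IP reputation).
CRAWLER_MARKERS = ("Googlebot", "bingbot", "Baiduspider", "YandexBot", "DuckDuckBot")

# Fictitious path chosen so only someone hunting this specific (made-up)
# vulnerability would plausibly request it.
CANARY_PATH = "/advisories/CVE-2015-99999-poc.html"

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        is_crawler = any(marker in ua for marker in CRAWLER_MARKERS)

        if self.path == CANARY_PATH and not is_crawler:
            # A non-crawler client asked for the tailored resource: record it
            # as a possible indicator of targeted vulnerability research.
            logging.info("possible researcher hit: %s ua=%r",
                         self.client_address[0], ua)

        # Serve an innocuous page either way so the canary is not obvious.
        body = b"<html><body>Advisory archive</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # suppress the default stderr access log; we keep our own

if __name__ == "__main__":
    HTTPServer(("", 8080), CanaryHandler).serve_forever()

Even a toy version like this makes the "noise" problem obvious: separating a curious human from a well-behaved crawler takes far more than a user-agent string, which is presumably why the synopsis frames the whole thing as a feasibility study.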

I'm not as convinced as Soghoian that this means Kudu will be allowed to spy on/tap into security researchers' web browsing. It could mean that Kudu plans to use its own experience in the security research field to help refine processes and programs put in place to counter malicious activity -- a lot of which can seem almost indistinguishable from legitimate security research. This determination of intent would allow government agencies to focus on actual threats, rather than wasting resources chasing down legitimate security research.

Then again, certain government agencies would greatly benefit from this advance knowledge. It would give them a heads-up if researchers were probing exploits, vulnerabilities or malware they'd rather keep under wraps and in working condition.

What makes this murkier than it perhaps should be is the fact that the FBO scrubbed the synopsis from the listing a few hours after people began talking about it. (The EFF preserved the original post as a PDF.) There may be any number of non-nefarious reasons for the removal, but considering most of the discussion centered around the theory that a government contractor was getting paid to spy on security researchers, the sudden burial of this contract info seems slightly suspicious.



Filed Under: computer security, darpa, fbi, malware, nsa, security research, surveillance, vulnerabilities, zero days


Reader Comments



  1. Anonymous Coward, 28 Oct 2015 @ 10:30am

    The very purpose of free speech is so that we can express free speech that the government doesn't want us to express. Otherwise what's the point? If security weaknesses are speech the government doesn't want us to express then that's exactly the type of speech that free speech laws are designed to protect in order to make the public aware of any security vulnerabilities that could negatively impact them and to encourage companies to fix those vulnerabilities.


  2. Anonymous Coward, 28 Oct 2015 @ 11:11am

    It's not just spy agencies

    Oracle's CSO has blasted security researchers -- of course she has, Oracle's products are insecure junk and she's well-paid to help cover that up. The automakers are desperately trying to halt independent research into the crappy hardware, firmware, and software that they're installing in vehicles. The MPAA and RIAA and their counterparts elsewhere are busy equating security research with piracy. The voting machine makers are covering up their massive failures and systemic corruption by attacking security researchers.

    Pretty much everyone who's making money off security failures is pushing hard to have their critics turned into criminals so that they can use the power of the state to silence them. And thanks to the combination of Congressional ignorance and stupidity with the power of lobbying and the reach of campaign contributions (i.e., bribes) it's working.


  3. Ninja (profile), 28 Oct 2015 @ 12:49pm

    Re: It's not just spy agencies

    It will work this way till such security vulnerabilities start costing tons of money. Money speaks louder than anything, including human rights and ethics. So the path to solve this issue (and others) is to help make the problems cause as much economic damage as possible. See environmental issues. When extreme events started threatening the money flow companies started to worry a bit more about them.

    Today the most effective way to change something is to cause a disruption to the money flow. Preferably one that cannot be stopped and does not depend on human interference to keep growing if the problem is not tackled.


  4. any moose cow word, 28 Oct 2015 @ 3:32pm

    Re: Re: It's not just spy agencies

    It's already costing tons of money, just usually not those who are actually responsible for the slipshod security that causes damages and cost to others. Until the financial burden is shifted to companies that fail to sensibly secure their systems, not a single damn thing will change.


  5. any moose cow word, 28 Oct 2015 @ 3:47pm

    Stealing data from systems in foreign entities isn't "security", it's "intelligence". Deliberately degrading integrity of systems in domestic entities to make it easier to steal data from similar systems in foreign entities isn't "security", it's stupidity.


  6. Anonymous Coward, 28 Oct 2015 @ 4:05pm

    YES ITS BRILLIANT. If we just silence the security researchers the bugs and vulns completely vanish from existence!

    BAM fixed.

    /gov logic.


  7. Anonymous Coward, 28 Oct 2015 @ 5:53pm

    I thought the US at the very least already scrubbed itself of the bad side of the Wassenaar Agreement?


  8. Anonymous Coward, 29 Oct 2015 @ 7:19am

    Boo Hoo.

    Don't want to go to jail? Good. Write secure daemon code instead of exploit packages. Nobody will pay you to do that? Well then, maybe that's the problem.

    OS vendors create the disease. Their subsidiary security companies create the cure. The state takes sloppy seconds.


  9. tqk (profile), 29 Oct 2015 @ 1:50pm

    Welcome to the 21st Century.

    The game's changed. When the authorities are reliant upon black arts, it's illegal to dabble in black arts if you wish to live freely. Might makes right, and all. New rules, which you ignore at your peril. The Emperor spoke. Thus is your new reality. In the word (!) of The Borg, "Comply!"


  10. tqk (profile), 29 Oct 2015 @ 1:59pm

    Re: Welcome to the 21st Century.

    As a long time reader of RISKS Digest, it's sad it was so easy to write that. We could collectively have focused on bulletproofing, but the authorities decided on prosecution as the shorter course. Revolution and civil war, anyone? I'm game. Beats !@#$ like this.


