Air Force, Lockheed Martin Combine Forces To 'Lose' 100,000 Inspector General Investigations
from the Up-in-the-air!-Into-the-wi34dz.eea.3rdek))we$#21....-[A]BORT,-[R]ETRY,-[F]AIL dept
In an era where storage is insanely cheap and the warning to schedule regular backups has been ringing in the ears of computer users for more than four decades, there's seemingly no explanation for the following:
The U.S. Air Force has lost records concerning 100,000 investigations into everything from workplace disputes to fraud.
A database that hosts files from the Air Force’s inspector general and legislative liaison divisions became corrupted last month, destroying data created between 2004 and now, service officials said. Neither the Air Force nor Lockheed Martin, the defense firm that runs the database, could say why it became corrupted or whether they’ll be able to recover the information.
The Air Force didn't lose investigations dating back to the mid-'60s and stored on archaic, oddball-sized "floppies." It lost more than a decade's worth of investigatory work -- from 2004 forward, right up to the point that Lockheed discovered the "corruption" and spent two weeks trying to fix it before informing its employer. At that point, the USAF kicked the news up the ladder to its bosses, leaving them less than impressed.
In a letter to Secretary James on Monday, Sen. Mark Warner, D-Va., said the lost database “was intended to help the Air Force efficiently process and make decisions about serious issues like violations of law and policy, allegations of reprisal against whistleblowers, Freedom of Information Act requests, and Congressional inquiries.”
“My personal interest in the [Inspector General’s] ability to make good decisions about the outcomes of cases, and to do so in a timely manner, stems from a case involving a Virginia constituent that took more than two years to be completed, flagrantly violating the 180-day statutory requirement for case disposition,” Warner wrote.
Some notification is better than no notification, even if the "some" notification is extremely minimal and arrives well after the fact. Senator Warner remains underwhelmed.
“The five-sentence notification to Congress did not contain information that appeared to have the benefit of five days of working the issue,” Warner wrote.
The Air Force says that, as far as it can tell, there's no evidence of malicious intent. But there's also no evidence of competence. Why do files related to oversight of a government agency have no apparent redundancy? It's small details like these that show the government generally isn't much interested in policing itself.
If anything's going to be recovered, it's going to be Lockheed's job, and it's already spent a few weeks trying with little success. There may be some files stored locally at bases where investigations originated, but they're likely to be incomplete.
While I understand that the inherent nature of bureaucracy makes it difficult to build fully functioning systems that can handle digital migration with any sort of grace, it's completely incomprehensible that a system containing files collected over the past decade would funnel into a single storage space with no backup. It would be one thing if this were just the Air Force's fault.
But this is more Lockheed's fault. Beyond its position as a favored government contractor, the company is also known for its innovation and technical prowess. Neither quality is on display in this public embarrassment. And if it can't recover the data, it has pretty much erased more than a decade's worth of government mistakes, abuse, and misconduct. And while no one's going to say anything remotely close to this out loud, there have to be more than a few people relieved to see black marks on their permanent records suddenly converted to a useless tangle of 1s and 0s.
Filed Under: air force, corrupted hard drive, dod, inspector general, investigations
Companies: lockheed martin
Reader Comments
Unfortunately, it's not quite that simple. Making a backup is easy; restoring it afterwards is not always so easy, for a variety of technical reasons. Horror stories about losing everything, thinking you had it backed up, and then not being able to restore from backups abound.
Re: Re:
It doesn't work as well when you're talking about multiple terabytes of database spread across a RAID array.
It works even less well if you're not permitted to take the system down for maintenance -- i.e., you can't simply snapshot the system.
I agree that if you can't restore it, it is worthless.
I submit, though, that if you try to restore it and find *some* data is seriously damaged ... then go back to the original and find that it was, er, faithfully copied from the original ... THEN you have a problem.
Re: Re: Re:
I'm sure that many mere ignorant newbies such as yourself actually believe that. You're wrong. It's really quite easy for anyone equipped with sufficient intelligence and experience.
For example, I've been backing up an operation that has about 4/10 of a petabyte of operational disk. Monthly (full) and daily (partial) backups are done. They all get compressed and encrypted and copied to external USB drives (currently: 4T drives, soon: 6T drives). Yes, they're tested. Yes, I've had to restore from them -- many times. Yes, it works. Everything is logged, everything is cataloged, everything is cycled through a retention process that ensures both backup/restore capability and disaster recovery.
It wasn't hard. I used standard Unix tools and a little scripting. The largest cost was buying all the drives.
I expect anyone of modest ability to be able to do the same (a rough sketch of the approach follows below). Anyone who can't is incompetent and stupid.
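For readers wondering what that kind of "standard Unix tools and a little scripting" setup might look like, here's a minimal sketch of a monthly-full/daily-incremental rotation built on GNU tar. The paths, schedule, and verification step are illustrative assumptions rather than the commenter's actual scripts, and encryption (e.g. piping through gpg) is left out for brevity.

```python
#!/usr/bin/env python3
"""Illustrative full/incremental backup rotation using GNU tar.

All paths and the retention policy here are assumptions for the sake of the
example; a real setup would also encrypt the archives and log every run.
"""
import datetime
import pathlib
import subprocess

SOURCE = pathlib.Path("/srv/data")        # data to protect (hypothetical)
DEST = pathlib.Path("/mnt/usb_backup")    # external USB drive mount (hypothetical)
SNAPSHOT = DEST / "level0.snar"           # GNU tar's incremental state file


def run_backup(full: bool) -> pathlib.Path:
    """Write a compressed archive; a full run resets the incremental chain."""
    stamp = datetime.date.today().isoformat()
    kind = "full" if full else "incr"
    archive = DEST / f"backup-{kind}-{stamp}.tar.gz"
    if full and SNAPSHOT.exists():
        SNAPSHOT.unlink()                 # start a fresh level-0 chain
    subprocess.run(
        ["tar", "--create", "--gzip",
         f"--listed-incremental={SNAPSHOT}",
         f"--file={archive}", str(SOURCE)],
        check=True,
    )
    # Cheap verification: make sure the archive can be read end to end.
    subprocess.run(["tar", "--list", f"--file={archive}"],
                   check=True, stdout=subprocess.DEVNULL)
    return archive


if __name__ == "__main__":
    # Full backup on the 1st of the month, incremental on every other day.
    print(run_backup(full=(datetime.date.today().day == 1)))
```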
Re: Re: Re: Re: Re:
I would also point out that backing up in and of itself is not a full solution. Large organisations (like the US Air Force) also need an offsite backup system: a place away from your main backup server where you store copies of your data. That way, if the worst happens and disaster does strike (e.g. a fire or a nuclear blast), wiping out not only your main copy of the data but your main backup as well, you aren't left with absolutely nothing. You would still have at least one offsite copy. It might be a little dated, but that would be better than nothing at all.
In the case of this particular fiasco, I suspect the underlying problem is that the US Air Force decided to outsource the storage of that database to Lockheed without fully investigating what it was they would get in exchange.
One of the consequences for the rest of us is that we can now see one of the downsides of storing data out in the cloud: in all likelihood there are no backups of that data. If your cloud service loses it (and these things do happen), it will be a case of Tough Luck Kiddo.
Re: Re: Re: Re: Re:
And no, a lot of operations would not need such heavy backup. Plus, you can roll over those older backups as newer ones are tested and reuse the storage.
Re:
So you know they work as expected
Offline is best
So you can rest
When lightning strikes unexpected
Not to ignore the fact that it's a major company that should know better, but the Government should have double-checked that there was redundancy too. If I were to hire a company to back up my stuff, I would not only want to see the separate server/farm that's doing the work but also select random content in my server/data center to retrieve from the backups and compare hashes (sketched below). Of course, I would probably be interested in preserving such files, whereas we can't be so sure when it comes to the Government.
Also, conspiracy theories. Maybe this was intended? I wouldn't be surprised.
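That hash-comparison spot check is easy to automate. Below is a minimal sketch, assuming the live data and a restored-from-backup copy of the same tree are both mounted locally; the paths and sample size are placeholders.

```python
#!/usr/bin/env python3
"""Spot-check a backup by hashing a random sample of files and comparing
the live copies against the restored copies. Paths are placeholders."""
import hashlib
import pathlib
import random

LIVE = pathlib.Path("/srv/data")          # primary storage (hypothetical)
RESTORED = pathlib.Path("/mnt/restore")   # same tree, restored from backup (hypothetical)
SAMPLE_SIZE = 20


def sha256(path: pathlib.Path) -> str:
    """Hash a file in chunks so large files don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def spot_check() -> bool:
    """Compare a random sample of live files against their restored copies."""
    files = [p for p in LIVE.rglob("*") if p.is_file()]
    ok = True
    for src in random.sample(files, min(SAMPLE_SIZE, len(files))):
        mirror = RESTORED / src.relative_to(LIVE)
        if not mirror.is_file() or sha256(src) != sha256(mirror):
            print(f"MISMATCH: {src}")
            ok = False
    return ok


if __name__ == "__main__":
    print("spot check passed" if spot_check() else "spot check FAILED")
```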
Re:
...and then you end up with a fun balancing act. The more copies of your data that exist, and the more places they exist in, the more likely it is that one of these copies will be the subject of a data breach at some point. When dealing with sensitive information, this is something that has to be taken into account.
Re:
One of the biggest problems with backup ages ago was that it was either:
1. expensive and somewhat convenient
2. inexpensive and highly inconvenient
Today it can be inexpensive and fairly convenient.
Today a 2 TB pocket hard drive, which can be disconnected, labelled and then locked in a fire safe, costs less than what once was an expensive, slow, and inconvenient sequential access backup tape that required a very expensive tape drive. And usually required overnight backup. And probably various differential or partial backups in order to not use up too much backup capacity.
Today, you can back up, well, probably everything, to one or two pocket drives in a fairly short time. The more clever can rsync to a backup drive.
For what you once invested in 14 days worth of backup tapes, you can now spend on 14 days worth of pocket hard drives that are easy to use.
With databases, things are more complex. But you could have automated backups to a specified folder. And that folder could get backed up to other storage (like pocket drives) which go in a fire safe. Databases could also be replicated across multiple machines. And with backups.
Databases could be dumped to text SQL scripts that can reconstruct the database, and those are *extremely* compressible.
These schemes are easy to verify. And at least once in a while you should set up a VM with a database server and try doing a restore of the database. Maybe yearly. And you could just keep a snapshot of that VM (before the restore) to practice doing the restore with again next year. In fact, that VM is worth backing up because it is what you use to do your annual testing of your restore procedure. What handier way to know that you can restore, but even if software changes, you've got a VM that less than a year ago was able to restore the backed up media.
These days with clusters, if you can automate builds and deployments of systems, you could automate backups, and restores to separate systems just to prove every night's backup actually can restore and simply get daily reports on the success.
I could go on, but I agree with your basic premise. This is either extreme incompetence (not surprising for government work) or a conspiracy to cover up something (also not surprising).
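To make the dump-and-verify idea in the comment above concrete, here is a rough sketch of a nightly job, assuming a PostgreSQL database. The database name, folders, and "pocket drive" mount point are invented for the example, and, as the commenter notes, the real proof would still be periodically loading the dump into a scratch server or VM.

```python
#!/usr/bin/env python3
"""Nightly database dump sketch: plain-SQL dump, compress, copy to removable
media, and confirm the compressed file reads back. Names are hypothetical."""
import datetime
import gzip
import pathlib
import shutil
import subprocess

DB_NAME = "example_db"                      # hypothetical PostgreSQL database
STAGING = pathlib.Path("/var/backups/db")   # local backup folder (assumed)
POCKET_DRIVE = pathlib.Path("/mnt/pocket")  # removable drive for the fire safe (assumed)


def nightly_dump() -> pathlib.Path:
    stamp = datetime.date.today().isoformat()
    dump_gz = STAGING / f"{DB_NAME}-{stamp}.sql.gz"
    # Plain-text SQL dumps are, as noted above, extremely compressible.
    dump = subprocess.run(["pg_dump", "--format=plain", DB_NAME],
                          check=True, capture_output=True)
    with gzip.open(dump_gz, "wb") as fh:
        fh.write(dump.stdout)
    # Confirm the compressed file can be read back in full.
    with gzip.open(dump_gz, "rb") as fh:
        while fh.read(1 << 20):
            pass
    # Second copy on removable media destined for the fire safe.
    shutil.copy2(dump_gz, POCKET_DRIVE / dump_gz.name)
    return dump_gz


if __name__ == "__main__":
    print(f"wrote {nightly_dump()}")
    # The real test is a periodic restore into a scratch server or VM,
    # e.g. gunzip -c <dump> | psql scratch_db, followed by sanity queries.
```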
Not quite enough information to determine what happened.
Re: Re: Re: Not quite enough information to determine what happened.
If that's a law or a rule, then responsibility for the incident goes to whoever was responsible for such rules rather than the poor admin who was ordered to follow it. It's still incompetence in that case, just not on the part of LM's tech crew.
If there was someone not allowing the tech team to do their job by not allowing them to shut the DB down when required, then this needs to be brought up in the investigation to ensure that lower level lackeys are not blamed for having to follow the chain of command (I know how likely that is but still...).
Either way, this was a predictable risk and should have been mitigated. Presuming no deliberate sabotage, someone somewhere was incompetent.
Re: Re: Re: Not quite enough information to determine what happened.
Do you have a source for that being the case or are you just making crap up?
Re: Re: Re: Re: Not quite enough information to determine what happened.
What part of "Not quite enough information to determine what happened" did you not understand?
Some years back, I did work in the military and was assigned to WHCA. Believe me, when a high-ranking, technically ignorant individual wants something, they get it, even if it's a rather stupid thing. And if the affected database is being used over a wide geographic area (e.g. being accessed worldwide), there would be more than enough idiots who think the database can NEVER go down because it would impact users.
Now, with more modern file systems (think ZFS in Solaris), it's trivial to perform backups of entire file systems snapshotted at a moment in time, even if there are other processes actively updating the file system. That capability is fantastic for databases that have to be up 24/7, because a well-designed database engine is capable of recovering a coherent database even if there is a power failure and the system goes down hard. But such a database usually isn't capable of performing the recovery if the various files are not internally consistent, which would be the case if simple file copies were made while the database was being actively updated. Hence the requirement for the database to be quiescent when being backed up.
But the question is: were they using such a system? Somehow I doubt it, since ZFS was introduced in late 2005 and the article mentions records from 2004.
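Here is a sketch of that snapshot-while-live approach, assuming the database files sit on a ZFS dataset. The dataset name and output path are made up for illustration, and, as described above, the resulting copy is only crash-consistent; it relies on the database engine's own recovery machinery when restored.

```python
#!/usr/bin/env python3
"""Point-in-time backup of a live filesystem via ZFS snapshots.
Dataset and output names are hypothetical."""
import datetime
import subprocess

DATASET = "tank/dbfiles"   # ZFS dataset holding the database files (assumed)
OUT_DIR = "/backup"        # where the serialized snapshot stream goes (assumed)


def snapshot_and_send() -> str:
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    snap = f"{DATASET}@backup-{stamp}"
    # Atomic moment-in-time snapshot; the database keeps running throughout.
    subprocess.run(["zfs", "snapshot", snap], check=True)
    # Serialize the snapshot to a file (could just as well be piped over ssh
    # to an offsite box running 'zfs receive').
    out_path = f"{OUT_DIR}/{DATASET.replace('/', '_')}-{stamp}.zfs"
    with open(out_path, "wb") as out:
        subprocess.run(["zfs", "send", snap], check=True, stdout=out)
    return out_path


if __name__ == "__main__":
    print(f"snapshot stream written to {snapshot_and_send()}")
```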
Re: Not quite enough information to determine what happened.
I realized we could have the same problem, so I commandeered a machine with enough storage to hold the production database and then set it up so it was always 15-30 minutes behind the production database. The idea being we would have time to stop the replication if the production database went down, or we could switch to it with minimal loss if necessary.
Within a week of me leaving, the machine had been repurposed, because obviously we didn't need a warm spare database. After all we had backups.
About a week after that, a planned database test resulted in a destroyed database and the discovery that the backups didn't work. They were down for weeks.
A few years before that, I suspected that the IT department wasn't backing up one of the databases, even though the head of IT assured me that all the databases were being properly backed up.
I quietly started dumping the database nightly to a development machine. A couple of weeks after I left, I was forwarded a message from the head of IT that said, "Oh, that database. We don't back up that database. We only back up the MS SQL databases." With a note attached saying, "Where did you say you put those backups again?"
Headline error?
Heads need to roll
1 - Lockheed, a major defense contractor with decades of experience with computers and IT systems, failed for 12 years running to back up, or check its backups of, a critical USAF system needed to verify USAF compliance with the law.
or,
2 - The system was deliberately corrupted to cover up criminal activity.
It's hard to tell which is more likely.
Either way, heads need to roll. Every person in the management chain responsible for this debacle needs to be fired, from the CEO on down.
How many millions did the Pentagon pay Lockheed to screw this up?
smh
LockMart would benefit the most
Of all the contractors who could benefit from this "all the backups don't work" issue, Lockheed Martin is one of the biggest (F-35, F-22, Atlas V, and on and on) and probably had a lot of entries in that database.
Scandalous is what this is. There is no way one of the Air Force's biggest prime contractors should be in charge of the Air Force Inspector General's database and storage backups like this; the temptation for self-interested manipulation is too great.
Let me get this straight
Re: Let me get this straight
You work for a for-profit firm. Which, being for-profit and owned by investors who hope to make money, is inherently evil.
While the government, on the other hand, works for the peeple. So they're inherently good, and don't need any watching.
Because democracy, you see. Each peeple gets a 1-in-300-million say in what the government does. So the government would never do anything to hurt a peeple.
While an investor would of course kill anyone for a penny, if they could get away with it.
operation diversion of public interest ongoing and working beautifully by all reports.
Update - it's fixed
Re:
Oh, you mean "presidential material".
wtf
Your query returned 0 rows.
Encryption
do not believe this story
Re: do not believe this story
The story is reporting on what the Government/Air Force is saying. Of course, the Government/Air Force may not be telling the truth.