I’ve been thinking a lot about spaceflight accidents this week and how to avoid them. One technique is to study lessons from past accidents and near misses to learn what can go wrong and how to avoid being the subject of the next Presidential commission investigating a disaster.
Several years ago, when I was running the Space Shuttle program, we contracted with a small firm to help write up about two dozen close call incidents that had occurred during various shuttle flights. A fairly broad spectrum of folks associated with the shuttle were asked to describe a close call that they remembered, and the contract team would do the hard work of pulling the information out of the archives and putting it into readable form. These two dozen close call incidents then became didactic tools to teach new members of the space flight team – and remind old ones – of just how close to the edge of the cliff we skate in human space flight – and how to avoid the big fall.
I wish you could read these; I think they would be very valuable to those folks who are contemplating building a spacecraft, for example. But in the IT Security world at NASA, the decree was made that these lessons are protected by firewall from the outside world. Something that you all paid for as taxpayers is simply not available for your perusal. But my point today is not the lack of transparency at NASA.
As the word spread, many more than two dozen close calls were identified, so we had to pick the most illustrative.
I, of course, contributed to the list. In particular there is one flight that stands crystal clear in my memory.
The Spacelab module has an emergency vent valve in case of fire or toxic leak. The crew would (theoretically) throw the valve open and evacuate the lab, shutting the pressure hatch from the crew compartment behind them. This is a manual valve and cannot be operated remotely. Of course, it is normally in the closed position. On one launch attempt of a Spacelab flight, the launch was scrubbed for a few days. During the interim before the next launch attempt, the IFM (In Flight Maintenance) guys spent their time poring over the closeout photos that were taken of the interior of the shuttle and the Spacelab. Hundreds of photos are taken to document the condition of every possible part prelaunch. The IFM guys are tasked to be ready to fix anything that might break inside the habitable volume, so studying the closeout photos has a lot of value in preparing for any eventuality. To their horror they discovered that the Spacelab emergency vent valve was in the full open/depress position.
So if we had launched, the hatch between the Spacelab and the crew module was closed, so the crew would have been in no danger; but during the climb to orbit, while the crew was strapped in their seats, the Spacelab would have totally depressurized and there would have been no way to repressurize it on orbit. Loss of mission, probably early return, and certainly equipment damage to the Spacelab and its experiments would have resulted.
But scrubbing the launch and the vigilance of one guy going beyond his usual duty saved the day. The Spacelab module was opened up, the valve positioned correctly, and the mission launched and was fully successful.
Great story, right? It is crystal clear in my memory; I’m just a little hazy about exactly which mission number it was. But no problem: there were only about a dozen Spacelab module flights, so finding the records would not be hard.
Except they couldn’t find any record of any incident like the one I remembered.
There was nothing in the records, and worse, nothing in the memory of the IFM guys, the Spacelab guys, or the KSC riggers. Nobody remembered this except me. But I knew it to be true – so I sent them back to search again. Finding this incident became a priority for me. But to no avail.
It never happened.
So what am I to say? Obviously my memory is at fault. That is a terrible thing to contemplate. Did I dream it? Or maybe there was a simulated mission where this was one of the failure conditions that Mission Control was supposed to handle, and that training has gotten mixed up in my mind with the real flights. Or it could be something worse. I think at the very least it says that I’m fallible, at least as fallible as the next guy, maybe more so.
That is a good lesson to learn. That I (and you) are not as smart as we think we are. That my (and your) memory is not as good as we think it is. That there is value in having good documentation and a team to look over the data.
But nonetheless: check your closeout photos before you launch.