Most accidents originate in actions committed by reasonable, rational individuals who were acting to achieve an assigned task in what they perceived to be a responsible and professional manner.
— Peter Harle, Director of Accident Prevention, Transportation Safety Board of Canada and former RCAF pilot, ‘Investigation of human factors: The link to accident prevention.’ In Johnston, N., McDonald, N., & Fuller, R. (Eds.), Aviation Psychology in Practice, 1994
Recent news stories have made me think about the STS-28 landing. That flight is special to me because it was my first shuttle flight to sit in the big chair in the center of mission control. I was the Flight Director on the planning shift. New flight directors always start on the night shift when the crew is asleep. You get in less trouble that way. But your first time is always special and I won’t forget that flight.
STS-28 was a ‘classified’ flight that carried a national security payload. Someday, perhaps a long time from now, they will declassify it and let me know what exactly it was we were carrying. But for now, all I know is that they told me it was ‘important’. Important enough, in that post-Challenger era, to put a flight crew at risk. Because every shuttle flight is risky.
Brewster Shaw was the commander of STS-28, his first time in that role. Brewster is a remarkable pilot, one of the best, and went on to demonstrate significant skills as Program Manager for the Space Shuttle and later a leader in the Boeing Space and Defense organization. Not all astronauts make good managers but Brewster certainly did. But in those days Brewster was best known for his piloting ability.
Immediately prior to the flight of STS-28, a problem was uncovered with the way the flight software worked in connection with the small sensors on the landing gear. These so-called ‘squat switches’ made contact as the landing gear was compressed, and the software moded the flight controls from flying to rolling on the wheels. I’ve forgotten the particulars, but there was a failure mode in which, if the switches made contact in a certain way, the computers would put the flight control system into the wrong mode – steering with the nose wheels when steering should have been controlled by the rudder and elevons, or something like that. It could lead to catastrophic loss of control.
It was too late to modify the software, and the switches were inaccessible with the shuttle attached to the external tank. A manual workaround by the pilot was required to ensure safety. So the Commander and Pilot got briefed – multiple times – in the last few days before flight about the need to land very softly – with a low ‘sink rate’ at touchdown – so the switches and software would work properly.
On the last night of the flight, I supervised the team as we prepared the entry messages for the crew. One of those was a reminder to land ‘softly’. The Entry flight control team came on and I went home hoping for a good landing. One of the first calls that the Capcom made – the crew was waking up as I was leaving the MCC – was a reminder to land softly.
So we set Brewster up.
Nominal deorbit burn, nominal entry, TAEM (Terminal Area Energy Management) and HAC (Heading Alignment Cone) acquisition all normal. The Commander took over flying manually as planned just as the orbiter decelerated to subsonic speeds. A perfect final glideslope. And now for the moment of truth: would the landing be soft enough to prevent the software glitch?
Normally an orbiter lands with a heavyweight payload in the bay at 205 kts – that is really fast for airplanes, but those stubby delta wings on the shuttle don’t create a lot of lift. With the payload bay empty – as it was for STS-28 – the lightweight landing speed is targeted at 195 kts. Under special circumstances, the pilots were allowed to land as slow as 185 kts. Brewster kept working and working to get the landing sink rate low and the speed kept dropping and dropping. At some point, as any fixed wing aircraft slows down, the wings will ‘stall’ and the aircraft will drop like a rock. Also, as the speed goes down, the pilot has to adjust the nose higher and higher – increasing the ‘angle of attack’ – to maintain lift. At some point with a high angle of attack at low altitude, the tail will scrape on the runway – always considered to be a catastrophic event for the shuttle.
The shuttle touched down at 154 kts. It is still the record for the slowest shuttle touchdown speed by a wide margin. It was less than 5 kts above stall speed. The tail avoided scraping by inches.
Oh, and by the way, the squat switches and software worked perfectly. No issues.
The post flight debriefings were all very positive and constructive – except for the entry and landing analysis. You can look back in my posts for the one called ‘Hockstein’s Law’ for a flavor.
I’ve never seen Brewster so embarrassed. In trying to avoid one hazard he nearly created another. In colorful pilot language (which I won’t repeat) he told us all that ‘on any given day the pilot can foul things up’. And it’s true. But I never blamed Brewster. We had set him up.
By concentrating on one issue to the exclusion of all others, and by not reminding him of the training – probably years earlier – about the hazards of very slow landings, we – the flight control team, the program office, NASA management – set him up.
When doing an accident (or close call) investigation, I’ve been told to ask ‘why’ seven times before getting to root cause. The root cause, for example, can never be “the bolt broke”; a good accident investigator would ask “why did the bolt break?” Otherwise, the corrective action would not prevent the next problem. Simply putting another bolt in might lead to the same failure again. Finding out that the bolt was not strong enough for the application and putting in a stronger bolt – that is the better solution. And so on.
The Russians had a spectacular failure of a Proton rocket a while back – check out the video on YouTube of a huge rocket lifting off and immediately flipping upside down to rush straight into the ground. The announced ‘root cause’ was that some poor technician had installed the guidance gyro upside down. Reportedly the tech was fired. I wonder if they still send people to the gulag over things like that. But that is not the root cause: better to ask why the tech installed the gyro upside down. Were the blueprints wrong? Did the gyro box come from the manufacturer with the ‘this side up’ decal in the wrong spot? Then ask why the prints were wrong, or why the decal was in the wrong place. If you want to fix the problem you have to dig deeper. And a real root cause is always a human, procedural, or cultural issue. Never ever hardware.
So it is with pilot error. Pilot error is never ever a root cause. Better to ask: was the training wrong? Were the controls wrong? Did the pilot get briefed on some other problem that caused distraction and made him or her fly the plane badly?
Corrective actions must go to root causes, not intermediate causes. Really fixing the problem requires more work than simply blaming the pilot.