If it weren’t for its checkered history, the latest episode of the Boeing 737 Max – this time the loss of a cabin door during an Alaska Airlines flight – might not have resonated as strongly. Traumatic as it must have been for those on board, the incident was fairly trivial by the lugubrious standards of aeroplane malfunctions. There were, thankfully, no casualties. And the cause – a loose screw – is blissfully easy to fix.
Boeing’s reputation, on the other hand, is another matter.
As in a typical Netflix series, the story first jumps back four years, to when two brand-new Boeing 737 MAX jets, fresh out of the nylons, fell out of the skies in quick succession, killing all on board. The cause then was an obscure piece of software called MCAS, which in a fit of madness sent the planes plunging. So obscure was this software that pilots did not even know it existed.
The real mystery was not why the software failed – that is, after all, in software’s nature – but why it was there to begin with.
Hop back to about 2010. Envious of rival Airbus’s decision to install fuel-efficient engines on its A320 series, Boeing followed suit. Yet where the A320 had plenty of room under its wings for the (larger) new engines, the squatter 737 did not, forcing Boeing to mount the engines further forward on the wings. This made the plane less aerodynamically stable and would have necessitated retraining pilots. So Boeing came up with the MCAS software to avoid that inconvenience. Its job was to make the new 737 MAX handle identically to the old 737. A job it did well – until it didn’t. If this sounds like a ‘Cat in the Hat’ solution, it’s because it is.
How did the mighty Boeing get it so wrong?
I’ve not been there, but as an innovation strategist I feel somewhat responsible too. Part of our song and dance is that the tech team cannot be relied upon, unaided, for product development, lest they end up solving a problem that no one has. Hence the commercial team presents a problem for the techies to solve. My guess is that this is more or less what happened. The problem defined would have been something like: “Install fuel-efficient engines on the 737, but leave everything else unchanged – because retraining pilots is costly.” At first glance this seems very reasonable. It is precisely what customers wanted (and what Airbus was offering). And the customer is king. That is what innovation strategists say; that is what management schools teach. Safety? Ah, that’s a detail. If engineers protested, management turned a deaf ear, preferring the much more agreeable sound of the company coffers being filled.
Clearly the problems at Boeing went much further than this: it showed little remorse after the first crash, blaming pilot error when it could have come clean and prevented the second crash.
Would good innovation practices have altered all this?
Yes. Management should never turn a deaf ear to anyone in the company. Issues raised must be taken seriously. The excuse of protecting the company’s interests (and earnings) is shortsighted. The losses Boeing suffered as a result of the crashes and the cover-up that ensued dwarfed any cost of further developing the MCAS software and of training pilots to handle its malfunction. Secondly, for a product that thrives on a reputation for safety, risk assessment must trump every other consideration. Good risk management would have exposed the fallacy in the design. These should have been part of the problem definition, not some document made to be shelved.
Were these lessons learnt? Ask the passengers of that Alaska Airlines flight.