The Times has a loooong article about the 737 MAX. It’s well worth a read:
It’s a bit different from other coverage I’ve seen in that it absolutely savages the pilots in the Lion Air and Ethiopian crashes, and the system that produces them. “Airmanship” is the word the author (a pilot) keeps coming back to, the claim being that even with the bad decisions Boeing made, a pilot who actually understood their airplane should have been able to manage these situations easily.
Still, as a software engineer, this is the part I keep coming back to:
One of Boeing’s bewildering failures in the MCAS design is that despite the existence of two independent angle-of-attack sensors, the system did not require agreement between them to conclude that a stall had occurred.
The idea that you would have redundant inputs in a safety-critical system and decide to ignore one seems unfathomable to me (indeed, this article says there’s a third “standby” instrument that could be used to adjudicate a disagreement between the other two). The best explanation I’ve heard (not mentioned in this article) is that MCAS as originally conceived was really just there to satisfy the FAA about an odd corner case that was extremely unlikely to happen, even given the huge number of 737 flight hours. Boeing therefore wasn’t concerned about that unlikely scenario and an instrument failure occurring at the same time, and then didn’t realize the increased risk when MCAS’s role was expanded.
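To make the point concrete, here’s a rough sketch of what cross-checking redundant sensors can look like. This is a hypothetical illustration of two-out-of-three voting, not Boeing’s actual logic, and the threshold value is made up:

```python
# Hypothetical sketch of sensor voting -- NOT Boeing's actual MCAS logic.
# With two sensors you can detect disagreement; with a third (the "standby"
# instrument) you can vote out the outlier.

DISAGREE_THRESHOLD_DEG = 5.5  # illustrative tolerance, not a real spec value


def aoa_consensus(left, right, standby=None, tolerance=DISAGREE_THRESHOLD_DEG):
    """Return an agreed angle-of-attack value, or None if sensors disagree.

    None means "do nothing": the system refuses to act on untrustworthy data.
    """
    # The happy path: both primary sensors agree.
    if abs(left - right) <= tolerance:
        return (left + right) / 2
    # Disagreement with no tiebreaker: detectable but not resolvable.
    if standby is None:
        return None
    # Vote: trust whichever pair agrees, discard the outlier.
    if abs(left - standby) <= tolerance:
        return (left + standby) / 2
    if abs(right - standby) <= tolerance:
        return (right + standby) / 2
    # All three disagree: fail safe, take no automatic action.
    return None
```

The key design choice is that disagreement yields inaction rather than blind trust in a single input, which is exactly what MCAS didn’t do.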
Regardless, I don’t think this article so much absolves Boeing as says that we have another huge problem.