Absolutely. In an adversarial setting (the XZ backdoor), there’s no point in relaxing accountability.
The Air Maroc case is interesting though because it’s exactly the case when one would expect blameless postmortems to work: The employer and the employee are aligned—neither of them wants the plane to crash and so the assumption of no ill intent should hold.
Reading the article from the point of view of a former SRE… it stinks.
There’s something going on there that wasn’t investigated. The accident in question is probably just one instance of it, but how did the culture at the airline get to the point where excessive risk-taking and the captain bullying his first officer became acceptable? Even if they fired the captain, the culture would persist and similar accidents would still happen. Something was swept under the carpet here and that may have been avoided if the investigation was careful not to assign blame.
Absolutely. In an adversarial setting (the XZ backdoor), there’s no point in relaxing accountability.
Well, but you don’t necessarily know if a setting is adversarial, right? And a process that starts by assuming everyone had good intentions probably isn’t the most reliable way to find out.
it’s exactly the case when one would expect blameless postmortems to work: The employer and the employee are aligned—neither of them wants the plane to crash and so the assumption of no ill intent should hold.
Not necessarily fully aligned, since e.g. the captain might benefit from getting to bully his first officer, or from coming to work drunk every day, or might have theatre tickets soon after the scheduled landing time, or… Obviously he doesn’t want to crash, but he might have different risk tolerances than his employer.
(Back of the envelope: Google says United Airlines employs ~10,000 pilots. If a pilot has a 50-year career, then United has 200 pilot careers every year. Since 2010, they’ve had about 3,000 careers’ worth of flying and 9 accidents, with no fatalities. That’s about a 0.3% chance of an accident per career. An individual pilot might well want to make choices that have a higher-than-that chance of accident over their career.)
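A minimal sketch of that arithmetic (the headcount, career length, and accident count are the rough, unverified assumptions above):

```python
# Back-of-envelope accident risk per pilot career. All inputs are rough
# assumptions from the estimate above, not verified data.
pilots = 10_000        # assumed United pilot headcount
career_years = 50      # assumed length of one pilot career
years_observed = 14    # roughly "since 2010"
accidents = 9          # assumed accident count in that window

# 10,000 pilots flying for ~14 years is ~2,800 career-equivalents of exposure.
career_equivalents = pilots * years_observed / career_years
p_per_career = accidents / career_equivalents

print(f"{career_equivalents:.0f} career-equivalents, "
      f"{p_per_career:.2%} accident risk per career")
# -> 2800 career-equivalents, 0.32% accident risk per career
```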
Something was swept under the carpet here and that may have been avoided if the investigation was careful not to assign blame.
Stipulated, but… suppose that the investigators had run a blameless process, the captain had told the truth (“I disabled the EGPWS because it would have been distracting while I deliberately violated a bunch of regulations”) with no risk of getting fired, and they found some rot in the culture there. Can they do anything about it? It seems like if the answer is “yes”, and if anyone benefits from the current culture, then we’re back at people having incentives to lie.
Absolutely agree with everything you’ve said. The problem of balancing accountability and blamelessness is hard. All I can say is, let’s look at how it plays out in the real world. Here, I think, a few general trends can be observed as to when a less rigid process and less accountability are used:
In highly complex areas (professors, researchers)
When creativity is at a premium (ditto, art?)
When unpredictability is high (emergency medicine, SRE, rescue, flight control, military?)
When the incentives at the high and low levels are aligned (conversely, procurement requires a more rigid process, because the incentives to prefer friends/family are just too high)
A few examples:
On an assembly line, the environment is deliberately crafted to be predictable, and creativity hurts rather than helps. The process is rigid; there’s no leeway.
At Google, there are both SREs and software engineers. The former work in a highly unpredictable environment; the latter do not. From my observation, the former both follow much less rigid processes and are blamed much less when things go awry.
The military is an interesting example which I would one day like to look into more deeply… As far as I understand, the modern Western system of units having their own agency, instead of blindly following orders, can be traced back to Prussia trying to cope with its defeat by Napoleon. The idea is that a unit gets a goal without instructions on how to accomplish it. The unit can then act creatively. But while that’s the theory, it’s hard to implement even to this day. It does not work at all for low-trust armies (the Russian army), but even where trust is higher, there tend to be hiccups. (I’ve also heard that the OKR system in business management may be descended from this framework, but again, more investigation would be needed.)
To give an example of how disastrously incompetence can interact with the lack of personal accountability in medicine, a recent horrifying case I found was this one:
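Doctor indicted without being charged for professional negligence resulting in injury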
According to the hospital, Matsui has been involved in a number of medical accidents during surgeries he performed over a period of around six months since joining the hospital in 2019, resulting in the deaths of two patients and leaving six others with disabilities.
Matsui was subsequently banned from performing surgery by the hospital and resigned in 2021.
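This YouTube video goes over the case. An excerpt: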
January 22nd: Dr. Chiba’s heart sinks when he learns that Matsui has pressured yet another patient into surgery. The patient is 74-year-old Mrs. Fukunaga, and the procedure is a laminoplasty—the same one that left Mrs. Saito paralyzed from the neck down 3 months ago. ‘Please let Matsui learn from his mistakes,’ Chiba pleads. Knowing that Matsui’s grasp of anatomy is tenuous at best, Chiba tries to tell Matsui exactly what he needs to do. ‘Drill here,’ Chiba says, pointing at a vertebra. Matsui drills, but the patient starts bleeding, constantly bleeding. He calls for more suction, but it’s no use; blood is now seeping from everywhere. Matsui is confronted by his greatest weakness: the inability to staunch bleeding, the one skill that every surgeon needs. The operating field is a sea of red. As sweat rolls down his face, Matsui is in complete despair. He knows he has to continue the surgery, so the only thing he can do is pick a spot and drill.
A sickening silence. Even Matsui can feel that something is wrong because his drill hits something that is definitely not bone. Dr. Chiba looks over and lets out a little whimper. Matsui has made the exact same mistake as last time: he’s drilled into the spinal cord, and this time the damage is so bad that the patient’s nerves look like a ball of yarn. There’s actually video footage of this surgery. Yes, it really looks like a ball of yarn, and no, you really don’t want to watch it, trust me. The footage ends with Matsui literally just stuffing the nerves back into the hole he drilled and hoping for the best. This was Matsui’s most serious surgical error yet, and it would later come back to haunt him. But for now, all he got was a slap on the wrist. A month later, he was back at it. He was going to perform another brain tumor removal—the very first procedure he failed at Ako.
One aspect I found interesting: Japan’s defamation laws are so severe that the hospital staff whistleblowers had to resort to drawing a serialized manga about a “fictional” incompetent neurosurgeon to raise the alarm.
That’s pretty messed up. This thread is a good examination of the flaws of both blame and blamelessness.
I wonder if we could somehow promote the idea that “outing yourself as incompetent or malevolent” is heroic and venerable. Like with penetration testers: if you can, as an incompetent or malevolent actor, get yourself into a position of trust, then you have found a flaw in the system. If people got big payouts, and maybe a little fame if they wanted it, for saying, “Oh hey, I’m in this important role, and I’m a total fuck-up. I need to be removed from this role”, that might encourage them to do so, even if they are malevolent but especially if they are incompetent.
One possible flaw is that this would sometimes incentivize people to defect by pretending to be incompetent or malevolent, which is itself a form of malevolence, and that could get out of control. Also, people would be more incentivized to try to get into roles they shouldn’t be in, but as with crypto, I’d rather have lots of people trying to break the system to demonstrate its strength than rely on security through obscurity.
It’s not infinitely no-blame. Instead, Just culture would distinguish (my paraphrase; a rough code sketch of the distinction follows the list):
Blameless behaviour:
Human error: To Err is Human. Human error is inevitable[1] and shall not be punished — not even by singling out individuals for additional training.
At-risk behaviour: People can become complacent with experience and use shortcuts — most often when under time pressure, or to work around dysfunctional systems. At-risk behaviour shall not be punished, but people should share their lessons and may need to receive additional training.
Blameworthy behaviour:
Reckless behaviour: Someone who knows the risks to be unjustifiably high, and still acts in that unsafe and norm-deviant manner. This is worthy of discipline, and possibly legal action — it’s similar to recklessness in law.
Note that, if the same behaviour were the norm, then just culture no longer considers that person to have acted recklessly! Instead, the norm — a cultural factor in safety — was inadequate. (The legal system may disagree and assign liability nonetheless.)
Malicious behaviour: Similar to the purposeful level of criminal intent. This is worthy of a criminal investigation.
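To make the distinction concrete, here is a rough sketch of the classification above as a decision chain. The function name, flags, and response labels are my own illustration, not part of any formal Just culture standard:

```python
from enum import Enum, auto

class Response(Enum):
    CONSOLE = auto()       # human error: support the person, fix the system
    COACH = auto()         # at-risk behaviour: share lessons, maybe retrain
    DISCIPLINE = auto()    # reckless behaviour: sanctions, possibly legal action
    PROSECUTE = auto()     # malicious behaviour: criminal investigation

def just_culture_response(intended_harm: bool,
                          knew_risk_was_unjustified: bool,
                          deviated_from_norm: bool) -> Response:
    """Paraphrase of the four categories above as a simple decision chain."""
    if intended_harm:
        return Response.PROSECUTE
    if knew_risk_was_unjustified and deviated_from_norm:
        # If the same behaviour were the norm, deviated_from_norm is False and
        # the person is not treated as reckless; the inadequate norm itself
        # becomes the finding.
        return Response.DISCIPLINE
    if deviated_from_norm:
        return Response.COACH    # drifted into shortcuts: at-risk behaviour
    return Response.CONSOLE      # plain human error
```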
Instead of dwelling on individual blame, the focus is on designing the whole system (mechanical and electronic; human and cultural):
to be robust to component failures — not just mechanical or electronic components, but also human components. Usually this means redundancy and error-checking, but robustness can also be obtained by simplifying the system and reducing dependencies;
so that human errors are less likely. For example, an exposed “in case of fire, break glass” fire alarm call point may give frequent false alarms from people accidentally bumping into it, so you add a simple hinged cover that stops these accidental alarms.
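From a healthcare perspective, ISMP has a good article, and here is a good table summary.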
From an aviation perspective, Eurocontrol has a model policy, which aims to facilitate accident investigations by making evidence collected by accident investigators inadmissible in criminal courts (without preventing prosecutors from independently collecting evidence).
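And LessWrong also has a closely-related article by Raemon!

[1] Human decision-making, too, has a mean time between failures.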