Thanks for this insightful piece. It seems to me that there’s a third key message, or possibly a reframing of #1, which is that people without power should be considered less morally culpable for their actions; e.g. the Wells Fargo employees should be judged less harshly.
The concept of “human error” is often invoked to explain system breakdowns as resulting from individual deficiencies (e.g., early public discussion of the Boeing 737 MAX crashes had an underlying theme of “Ethiopian and Indonesian pilots are just not as skilled as American pilots”). But a human factors / resilience engineering perspective recognises that humans’ roles in technical systems can be empowered or constrained by the system design. And of course it was other humans who designed (approved, built, …) the system in the first place.
“that people without power should be considered less morally culpable for their actions”
I strongly disagree with this. People without power often have less impact through their actions, and actions that do less harm should be judged less harshly. But that is a judgement of the degree of wrongness of the action, not of the blameworthiness of the person.
Also, moral culpability is not zero-sum. There’s plenty of blame to go around for everyone making harmful decisions, and “just following orders” is not a valid defense. Giving bad orders is clearly more harmful than following them, but adding more followers increases the total blame, and each follower’s individual blame, rather than distributing it.
“e.g. the Wells Fargo employees should be judged less harshly”
I go back and forth on this, and I think the answer might depend on exactly what question you’re asking. If what you want to know is “how do we get Wells Fargo to stop defrauding customers?”, the answer is obviously to focus on executives, not entry-level employees. But if the question is “Do I want to go into business with Dave, who defrauded customers as part of his role as a teller at Wells Fargo? Or with Jill, who sliced and diced her data to get her paper count up?”, that answer is going to depend a lot on particulars and context.