One thing that has long surprised me about the strict Kantian rule-following point of view is the seeming certainty that the rule needs to be a short sentence, on the length scale of “Thou shalt not kill.” (And yes, I see it as the same sort of error that many people make who think there’s a simple utility function we could safely give an AGI.) My POV makes more of a distinction along the lines of axiology/morality/law, where if you want a fundamental principle in ethics, one that should never be violated, it’s going to be axiological and also way too complicated for a human mind to consciously grasp, let alone compute and execute in real time. Morality and law are ways of simplifying the fractally complex edges the axiology would have, in order to make it possible in principle for a human to follow or for a human society to enforce. (Side note: It looks to me like as society makes moral progress and has more wealth to devote to its ethics, both morals and laws are getting longer, more complicated, and harder to follow.)
In short: I think both the engineer and manager classes are making the same sort of choice by simplifying underlying (potentially mutually compatible) ethics models in favor of different kinds of simplified edges. I don’t think either is making a mistake in doing so, per se, but I am looking forward to hearing in more detail what kind of process you think they should follow in the cases when their ideas conflict.
Relatedly, part of the disagreement is about the durability and completeness of the ruleset. I’m a consequentialist, and I get compatibility by relabeling many rules as “heuristics.” That is not a deontologist’s conception, but it works far better for me.
One thing that has long surprised me about the strict Kantian rule-following point of view is the seeming certainty that the rule needs to be a short sentence, on the length scale of “Thou shalt not kill.”
I think this is a misconception actually. When I submitted an initial draft of this post for feedback, the reviewer, who studied moral philosophy in college, mentioned that real deontologists (a) have more sensible rules than that and (b) have rules for when to follow which rules. So e.g. “Thou shalt not kill” might be a rule, but so would “Thou shalt save an innocent person”, and since those rules can conflict, there’d be another rule to determine which wins out.
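In programming terms, the shape is something like a prioritized ruleset plus a meta-rule for breaking ties. Here’s a toy sketch of that structure; the rule names, predicates, and priorities are invented purely for illustration, not something the reviewer or any particular deontologist endorses:

```python
# Toy sketch: a deontological ruleset with a conflict-resolving meta-rule.
# All rules, predicates, and priorities are invented for illustration only.

RULES = [
    # Lower priority number = higher-ranked duty (an assumption of this sketch).
    {"name": "do_not_kill", "applies": lambda s: s["would_kill"], "priority": 1},
    {"name": "save_innocent", "applies": lambda s: s["can_save_innocent"], "priority": 2},
]

def binding_rule(situation):
    """Meta-rule: of all rules that apply to the situation, follow the highest-ranked one."""
    applicable = [r for r in RULES if r["applies"](situation)]
    if not applicable:
        return "no rule applies"
    winner = min(applicable, key=lambda r: r["priority"])
    return winner["name"]

# A case where both rules apply, so the meta-rule has to break the tie.
print(binding_rule({"would_kill": True, "can_save_innocent": True}))  # -> "do_not_kill"
```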
In short: I think both the engineer and manager classes are making the same sort of choice by simplifying underlying (potentially mutually compatible) ethics models in favor of different kinds of simplified edges. I don’t think either is making a mistake in doing so, per se, but I am looking forward to hearing in more detail what kind of process you think they should follow in the cases when their ideas conflict.
To make sure I am understanding you correctly, are you saying that each class is choosing to simplify things, trading off accuracy for speed? I suppose there is a tradeoff there, but I don’t think it comes down on the side of simplification. It doesn’t actually take much time or effort to think to yourself or to bring up in conversation something like “What would the rule-consequentialist rules/guidelines say? How much weight do they deserve here?”
I think you’re right in practice, but the last formal moral philosophy class I took was Michael Sandel’s intro course, Justice, and it definitely left me with the impression that deontologists lean towards simple rules. I do wonder, with the approach you outline here, if there’s a highest-level conflict-resolving rule somewhere in the set of rules, or if it’s an infinite regress. I suspect the conflict-resolving rules end up looking pretty consequentialist a lot of the time.
It doesn’t actually take much time or effort to think to yourself or to bring up in conversation something like “What would the rule-consequentialist rules/guidelines say? How much weight do they deserve here?”
I disagree, mostly. Conscious deliberation is costly, and in practice having humans trust their own reasoning about when to follow which rules doesn’t tend to lead to great outcomes, especially when they’re doing the reasoning in real time, either in discussion with other humans they disagree with, or under external pressure to achieve certain outcomes like a release timeline or quarterly earnings. I think having default guidelines that are different for different layers of an organization can be good. Basically, you’re guaranteeing regular conflict between the engineers and the managers, so that the kind of effort you’re calling for happens in discussions between the two groups, instead of within a single mind.
I suspect the conflict-resolving rules end up looking pretty consequentialist a lot of the time.
Yeah, I think so too. I further suspect that a lot of ethical theories end up looking consequentialist when you dig deep enough, which makes me wonder whether they actually disagree on important, real-world moral dilemmas. If so, I wish that common intro-to-ethics discussions would talk about it more.
I disagree, mostly. Conscious deliberation is costly, and in practice having humans trust their own reasoning on when to follow which rules doesn’t tend to lead to great outcomes
I suspect we just don’t see eye to eye on this crux of how costly this sort of deliberation is. But I wonder if your feelings change at all if you try thinking of it as more of a spectrum (maybe you already are, I’m not sure). I.e., at least IMO, there is a spectrum of how much effort you expend on this conscious deliberation, so it isn’t really a question of doing it vs. not doing it; it’s more a question of how much effort is worthwhile. Unless you think that in practice such conversations would be contentious and drag on (in cultures I’ve been a part of, this happens more often than not). In that scenario I think it’d be best to have simple rules and no/very little deliberation.