I think you’re right in practice, but the last formal moral philosophy class I took was Michael Sandel’s intro course, Justice, and it definitely left me with the impression that deontologists lean towards simple rules. I do wonder, with the approach you outline here, if there’s a highest-level conflict-resolving rule somewhere in the set of rules, or if it’s an infinite regress. I suspect the conflict-resolving rules end up looking pretty consequentialist a lot of the time.
It doesn’t actually take much time or effort to think to yourself or to bring up in conversation something like “What would the rule-consequentialist rules/guidelines say? How much weight do they deserve here?”
I disagree, mostly. Conscious deliberation is costly, and in practice having humans trust their own reasoning on when to follow which rules doesn’t tend to lead to great outcomes, especially when they’re doing the reasoning in real time, either in discussion with other humans they disagree with or under external pressure to achieve certain outcomes like a release timeline or quarterly earnings. I think having default guidelines that differ across the layers of an organization can be good. Basically, you’re guaranteeing regular conflict between the engineers and the managers, so that the kind of effort you’re calling for happens in discussions between the two groups instead of within a single mind.
I suspect the conflict-resolving rules end up looking pretty consequentialist a lot of the time.
Yeah, I think so too. I further suspect that a lot of ethical theories end up looking consequentialist when you dig deep enough, which makes me wonder whether they actually disagree on important, real-world moral dilemmas. If so, I wish that common intro-to-ethics discussions would talk about it more.
I disagree, mostly. Conscious deliberation is costly, and in practice having humans trust their own reasoning on when to follow which rules doesn’t tend to lead to great outcomes
I suspect we just don’t see eye to eye on this crux of how costly this sort of deliberation is. But I wonder if your feelings change at all if you try thinking of it as more of a spectrum (maybe you already are, I’m not sure). I.e., at least IMO, there is a spectrum of how much effort you expend on this conscious deliberation, so it isn’t really a question of doing it vs. not doing it; it’s more a question of how much effort is worthwhile. Unless you think that, in practice, such conversations would be contentious and drag on (in the cultures I’ve been a part of, this happens more often than not). In that scenario, I think it’d be best to have simple rules and little to no deliberation.