Do you know real deontologists who really believe “Do X and don’t do Y” without any explanation whatsoever? (How would they react if you asked them “why”?)
Without any explanation? No. Without any appeal to expected consequences? Yes.
In general, the answer to “why?” from these folks is some form of “because it’s the right thing to do” or “because it’s wrong.” For theists, this is sometimes expressed as “Because God wants that,” but I would not call that an appeal to expected consequences in any useful sense. (I have in fact asked “What differential result do you expect from doing X or not doing X?” and gotten the response “I don’t know; possibly none.”)
Just for clarity, I’ll state explicitly that most of the self-identified theists I know are consequentialists, as evidenced by the fact that when asked “why should I refrain from X?” their answer is “because otherwise you’ll suffer in Hell” or “because God said to and God knows a lot more than we do and is trustworthy” or something else in that space.
The difference between that position and “because it’s wrong” or “because God said to and that means it’s wrong” is sometimes hard to tease out in casual conversation, though.
It may be worth adding that in some sense, any behavioral framework can be modeled in utilitarian terms. That is, I could reply “Oh! OK, so you consider doing what God said to be valuable, so you have a utility function for which that’s a strongly weighted term, and you seek to maximize utility according to that function” to a theist, or “...so you consider following these rules intrinsically valuable...” to a nontheist, or some equivalent. But ordinarily we don’t use the label “utilitarian” to refer to such people.
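To make that modeling move concrete, here is a minimal sketch in Python (all names, weights, and outcomes are invented for illustration, not drawn from anyone’s actual framework): rule-following becomes a dominant term in a utility function, and the rule-follower falls out of ordinary maximization.

```python
# Toy sketch: modeling "doing what God said" (or "following these rules")
# as a strongly weighted term in a utility function. All values here are
# hypothetical and chosen only to illustrate the point.

OBEDIENCE_WEIGHT = 1e6  # large enough to dominate any worldly consideration

def utility(outcome):
    """Ordinary worldly value plus a heavily weighted rule-following term."""
    return outcome["worldly_value"] + OBEDIENCE_WEIGHT * outcome["follows_rules"]

candidates = [
    {"worldly_value": 500.0, "follows_rules": 0},  # breaks the rule, big payoff
    {"worldly_value": -10.0, "follows_rules": 1},  # keeps the rule, small cost
]

# Maximizing utility picks the rule-keeping outcome despite its worse payoff.
print(max(candidates, key=utility))
```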
who explains that, for some clever theological reasons, God actually does not mind you torturing this specific person in this specific situation
Sure. In much the same sense that I can convince a consequentialist to torture by cleverly giving reasons for believing that the expected value, taking everything into account, of torturing this specific person in this specific situation is positive. As far as I know, no choice of value system makes me immune to being cleverly manipulated.
It may be worth adding that in some sense, any behavioral framework can be modeled in utilitarian terms.
The agents that can be modeled as having a utility function are precisely the VNM-rational agents. Having a deontological rule that you always stick to, even in the probabilistic sense, is not VNM-rational (it violates continuity). On the other hand, I don’t believe that most people who sound like they’re deontologists are actually deontologists.
That’s interesting, can you elaborate?
This is something of a strawman, but suppose one of your deontological rules was “thou shalt not kill” and you refused to accept outcomes where there is a positive probability that you will end up killing someone. (We’ll ignore the question of how you decide between outcomes both of which involve killing someone.) In the notation of the Wikipedia article, if L is an outcome that involves killing someone and M and N are not, then the continuity axiom is not satisfied for (L, M, N).
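Spelling out the failure (a sketch, assuming the strict preference ordering L ≺ M ≺ N, which the setup above leaves implicit):

```latex
% Continuity (VNM): for lotteries L \preceq M \preceq N, there must exist
% some p \in [0,1] such that the mixture is indifferent to the middle option:
p L + (1 - p) N \sim M
% Under the rule above, any p > 0 gives a positive probability of killing,
% so p L + (1 - p) N \prec M for all p > 0; and p = 0 gives plain N \succ M.
% Hence no such p exists, and continuity fails for (L, M, N).
```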
Behaving in this way is more or less equivalent to having a utility function in which killing people has infinite negative utility, but this isn’t a case covered by the VNM theorem (and is a terrible idea in practice because it leaves you indifferent between any two outcomes that involve killing people).
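A toy demonstration of that last point (the outcomes and values are made up): with an infinite penalty, every outcome that involves killing evaluates to the same utility, so the function cannot rank them.

```python
# Toy model: killing carries infinite negative utility.
KILL_PENALTY = float("-inf")

def utility(outcome):
    """Ordinary value, plus an infinite penalty if the outcome involves killing."""
    return outcome["value"] + (KILL_PENALTY if outcome["kills"] else 0.0)

kill_one = {"value": 100.0, "kills": True}
kill_millions = {"value": -1e9, "kills": True}

# Both collapse to -inf, so the agent is indifferent between them.
print(utility(kill_one) == utility(kill_millions))  # True
```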
I’m trying to avoid eliding the difference between “I think the right thing to do is given by this rule” and “I always stick to this rule”… that is, the difference between having a particular view of what morality is, vs. actually always being moral according to that view.
But I agree that VNM violations are problematic for any supposedly utilitarian agent: both humans who self-describe as deontologists (and who, I assert above, can nevertheless be modeled as utilitarians), and humans who self-describe as utilitarians.