I’ve heard people refer to something that does approximately that job as deontological injunctions. I don’t much like the name either.
Do you have a real example of deontology outperforming consequentialism IRL? Bonus: do you have one that isn’t just hardcoding a result computed by consequentialism? (not intended to be a challenge, BTW; actually curious, because I’ve heard a lot of people imply that such examples exist, but I can’t find any)
I’m not sure what that would look like. If consequentialism and deontology shared a common set of performance metrics, they would not be different value systems in the first place.
For example, I would say “Don’t torture people, no matter what the benefits of doing so are!” is a fine example of a deontological injunction. My intuition is that people raised with such an injunction are less likely to torture people than those raised with the consequentialist equivalent (“Don’t torture people unless it does more good than harm!”), but as far as I know the study has never been done.
Supposing it is true, though, it’s still not clear to me what is outperforming what in that case. Is that a point for deontological injunctions, because they more effectively constrain behavior independent of the situation? Or a point for consequentialism, because it more effectively allows situation-dependent judgments?
At least one performance metric that allows for the two systems to be different is: “How difficult is the value system for humans to implement?”
They can be different metaethically, but the same at the object level.
Seems to me that even supposedly deontologic arguments usually have some (not always explicit) explanation, such as “...because God wants that” or “...because otherwise people will not like you” or maybe even “...because the famous philosopher Kant would disagree with you”. Although I am not sure whether those explanations were present since the beginning, or whether it is just my consequentialist mind that adds them when modeling other people. Do you know real deontologists that really believe “Do X and don’t do Y” without any explanation whatsoever? (How would they react if you ask them “why”?)
Assuming my model of deontologists is correct, then their beliefs are like “Don’t torture people, no matter what the benefits of doing so are, because God does not want you to torture people!” Then all it takes is a charismatic priest who explains that, for some clever theological reasons, God actually does not mind you torturing this specific person in this specific situation. (For other implicit explanations, substitute other convincing factors.) For example, the typical deontological norm “Thou shalt not kill” was violated routinely, often with a clever explanation of why it does not apply to this specific class of situations.
Without any explanation? No.
Without any appeal to expected consequences? Yes.
In general, the answer to “why?” from these folks is some form of “because it’s the right thing to do” and “because it’s wrong.” For theists, this is sometimes expressed as “Because God wants that,” but I would not call that an appeal to expected consequences in any useful sense. (I have in fact asked “What differential result do you expect from doing X or not doing X?” and gotten the response “I don’t know; possibly none.”)
Just for clarity, I’ll state explicitly that most of the self-identified deists I know are consequentialists, as evidenced by the fact that when asked “why should I refrain from X?” their answer is “because otherwise you’ll suffer in Hell” or “because God said to and God knows a lot more than we do and is trustworthy” or something else in that space.
The difference between that position and “because it’s wrong” or “because God said to and that means it’s wrong” is sometimes hard to tease out in casual conversation, though.
It may be worth adding that in some sense, any behavioral framework can be modeled in utilitarian terms. That is, I could reply “Oh! OK, so you consider doing what God said to be valuable, so you have a utility function for which that’s a strongly weighted term, and you seek to maximize utility according to that function” to a theist, or “...so you consider following these rules intrinsically valuable...” to a nontheist, or some equivalent. But ordinarily we don’t use the label “utilitarian” to refer to such people.
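To make that re-description concrete, here is a purely illustrative sketch (the symbols a, v_i, w_i and the indicator term are my own notation, not anything proposed in the thread):

```latex
U(a) \;=\; w_{\mathrm{rule}} \cdot \mathbf{1}[\,a \text{ complies with the rules}\,]
      \;+\; \sum_i w_i \, v_i(a),
\qquad w_{\mathrm{rule}} \;\gg\; \sum_i |w_i| \max_a |v_i(a)|
```

If w_rule swamps everything the other terms can contribute, an agent maximizing U never breaks the rules, so the rule-follower has been reproduced inside a utility framework; which is exactly why applying the label “utilitarian” to such people stops doing useful work.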
Sure, a charismatic priest could manage that. In much the same sense that I can convince a consequentialist to torture by cleverly giving reasons for believing that the expected value, taking everything into account, of torturing this specific person in this specific situation is positive. As far as I know, no choice of value system makes me immune to being cleverly manipulated.
The agents that can be modeled as having a utility function are precisely the VNM-rational agents. Having a deontological rule that you always stick to even in the probabilistic sense is not VNM-rational (it violates continuity). On the other hand, I don’t believe that most people who sound like they’re deontologists are actually deontologists.
That’s interesting, can you elaborate?
This is something of a strawman, but suppose one of your deontological rules was “thou shalt not kill” and you refused to accept outcomes where there is a positive probability that you will end up killing someone. (We’ll ignore the question of how you decide between outcomes both of which involve killing someone.) In the notation of the Wikipedia article, if L is an outcome that involves killing someone and M and N are not, then the continuity axiom is not satisfied for (L, M, N).
Behaving in this way is more or less equivalent to having a utility function in which killing people has infinite negative utility, but this isn’t a case covered by the VNM theorem (and is a terrible idea in practice because it leaves you indifferent between any two outcomes that involve killing people).
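For reference, the continuity axiom being invoked, in its standard statement (same notation as above):

```latex
L \preceq M \preceq N \;\Longrightarrow\; \exists\, p \in [0,1] \ \text{such that}\ \; pL + (1-p)N \,\sim\, M
```

With L an outcome involving killing and M, N killing-free (so L ≺ M ≺ N), every mixture pL + (1−p)N with p > 0 carries a positive probability of killing and is therefore ranked below M, while p = 0 gives N, which is ranked above M; no p yields the required indifference, so continuity fails.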
I’m trying to avoid eliding the difference between “I think the right thing to do is given by this rule” and “I always stick to this rule”… that is, the difference between having a particular view of what morality is, vs. actually always being moral according to that view.
But I agree that VNM violations are problematic for any supposedly utilitarian agent, including humans who self-describe as deontologists (and who, as I assert above, can nevertheless be modeled as utilitarians), but also humans who self-describe as utilitarians.
Yes, deontological arguments do usually come with such explanations; and in my experience consequentialists usually have deontological-sounding explanations for their choice of utility function.
How would a consequentialist react if I asked why maximize [utility function X]?
And all that a consequentialist needs to start torturing people is a clever argument for why torturing this specific person in this specific situation maximizes utility.
To be clear, are you saying they both have the same response, or that this is also a valid criticism of consequentialism?
I’m saying this is also a valid criticism of consequentialism.
Thanks for clarifying.
I’m told that these explanations fall under the realm of meta-ethics. As far as I (not being a deontologist) can tell, all deontological ethical systems rely on assuming some basic maxim—such as “because God said so”, or “follow that rule one would wish to be universal law”.
I don’t see how deontology would work without that maxim.
For a historical example of exactly this (clever theological reasons why God does not mind torturing these particular people), see the Spanish Inquisition. (They did torture people, and I did once come across some clever theological reasons for it, in which it is actually quite difficult to find the flaw.)
That there is a basic maxim doesn’t mean there isn’t (a) an explanation of why that maxim is of overriding importance and (b) an explanation of how that maxim leads to particular actions.
Presumably meaning that it isn’t obvious how you get to (a) and (b). Philosophers are very aware that you need to get to (a) and (b) and have argued elaborately (see Kant) towards them. (Has anyone here read so much as one wiki or SEP page on the subject?)
Right. This thread is full of bizarrely strawmanish characterizations of deontology.
Quite. In order to have a good deontological basis of ethics, both (a) and (b) are necessary; and I would expect to find both. These build on and enhance the maxim on which they are based; indeed, these would seem, to me, to be the two things that change a simple maxim into a full deontological basis for ethics.
Was the Inquisition’s reasoning by any chance “If we don’t torture these people they’ll go to hell, which is worse than torture”?
That’s a large part of it, but not all of it. I can’t quite remember the whole thing, but I can look it up in a day or two.
Well, in a purely deontologic moral system, the beliefs are like “Don’t torture people, torturing people is bad”. That is, there is a list of “bad” things, and the system is very simple: You may do thing X if and only if X is not on the list. The list is outside the system. In the same way as consequentialism does not provide you with what you should place utility in, deontology does not tell you what the list is.
So when you look at it like that, what the charismatic priest is doing is not inside the moral system, but rather outside it. That is, he is trying to get his followers to change what is on their lists. This is no different from an ice cream advertisement trying to convince a consequentialist that they should place a higher utility on eating ice cream.
To summarize, the issue you are talking about is not one meant to be handled by the belief system itself. The priest in your example is trying to hack people by changing their belief system, which is not something deontologists are any more susceptible to than anyone with a different system.
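A minimal sketch of the “list” picture described above; everything here (the FORBIDDEN set, the function names, the ice-cream weights) is invented for illustration, not a claim about how any actual deontologist or consequentialist deliberates:

```python
# The "list" of bad things sits outside the decision procedure itself.
FORBIDDEN = {"torture", "kill"}

def permissible(action: str) -> bool:
    """You may do X if and only if X is not on the list."""
    return action not in FORBIDDEN

# The charismatic priest never argues inside permissible(); he edits the list.
def priest_hack() -> None:
    FORBIDDEN.discard("torture")  # "God does not mind this particular torture"

# The analogous move against a consequentialist targets the utility weights,
# not the maximization step.
UTILITY_WEIGHTS = {"eat_ice_cream": 1.0}

def ice_cream_ad() -> None:
    UTILITY_WEIGHTS["eat_ice_cream"] = 10.0  # "you should value ice cream more"
```

In both cases the manipulation happens to the inputs of the system (the list, the weights) rather than within the system’s own reasoning, which is the point being made above.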
Are you asserting that purely deontologic systems don’t include good things which it is preferable, or even mandatory, to do rather than leave undone, but only bad things which it is mandatory to refrain from doing?
And are you asserting that purely deontologic systems don’t allow for (or include) any mechanism for trading off among things on the list? For example, if a moral system M has on its list of “bad” things both speaking when Ganto enters my tent and not-speaking when Ganto enters my tent, and Ganto enters my tent, then either M has nothing to say about whether speaking is better than not-speaking, or M is not a purely deontologic system?
If you’re making either or both of those assertions, I’d be interested in your grounds for them.
Is there any such “pure” system? Deontological metaethics has to put forward justifications, because it is philosophy. I don’t see how you can arrive at your conclusion without performing the double whammy of both ignoring what people who call themselves deontologists say, AND dubbing the attitudes of some unreflective people who don’t call themselves deontologists “deontology”.
What if not torturing people requires you to torture a person, as in the trolley problem? What then? Do deontologists not care about torturing those people because they did not personally torture them, or do they secretly do consequentialism and dress it up in “rules” after the fact?
I see everything in terms of AI algorithms; with consequentialism, I imagine a utility-maximizing search over counterfactuals in an internal model (utility-based agent), and with deontology, I imagine a big bunch of if-statements and special cases (a reflex-based agent).
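A toy rendering of those two agent shapes, under invented numbers and a one-action-deep world model (purely illustrative; nothing here is meant as a serious model of either view):

```python
# Stand-in world model: maps an action to a predicted (counterfactual) outcome.
def predict(action: str) -> dict:
    return {"push":    {"killed_by_me": 1, "saved": 5},
            "refrain": {"killed_by_me": 0, "saved": 1}}[action]

def utility(outcome: dict) -> float:
    return 10 * outcome["saved"] - 10 * outcome["killed_by_me"]

def utility_based_act(actions):
    """Consequentialism as search: pick the action whose predicted outcome maximizes utility."""
    return max(actions, key=lambda a: utility(predict(a)))

def reflex_act(action_features: set) -> str:
    """Deontology as a bundle of if-statements and special cases."""
    if "pushing_someone_off_a_bridge" in action_features:
        return "refrain"
    if "lying" in action_features:
        return "refrain"
    return "proceed"

print(utility_based_act(["push", "refrain"]))        # -> 'push'
print(reflex_act({"pushing_someone_off_a_bridge"}))  # -> 'refrain'
```

The only difference is where the work happens: the first agent searches over counterfactual outcomes in an internal model, the second consults hard-coded rules about the action itself (hence not pushing the guy off the bridge, as the next reply notes).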
No, archetypal deontologists don’t torture people even to prevent others from being tortured. Not pushing the guy off the bridge is practically the definition of a deontologist.
Exactly. I am not advocating deontology, just clarifying what it means. A true deontologist who thought that torture is bad would not torture anyone, no matter what the circumstances. Obviously, this is silly, and again, I do not advocate this as a good moral system.
No. That is not obvious. You are probably being misled by thought experiments where it is stipulated that so-and-so really does have life-saving information and really would yield it under torture. In real life, things are not so simple. You might have the wrong person. They might be capable of resisting torture. They might die under torture. They might go insane under torture. They might lie and give you disinformation that harms your cause. Your cause might be wrong... you might be the bad guy...
Real life is always much more complex and messy than maths.
No it isn’t. It’s just more complex and messy than trivial maths.
If it’s too non-trivial for your brain to handle as maths, it might as well not be maths.
If we look at it from a consequentialist perspective, any success for another meta-ethical theory will boil down to caching a complicated consequentialist calculation. It’s just that even if you have all the time in the world, it’s still hard to correctly perform that calculation, because you’ve got all of these biases in the way. Take the classic ticking time-bomb scenario, for example: everyone would like to believe that they’ve got the power to save the world in their hands; that the guy they have caught is the right guy; that they have caught him just in the nick of time. But nobody can produce an example where this has ever been true. So the deontological rule turns out to be equivalent to the outside-view consequentialist argument for ticking time-bomb scenarios. Naturally, other cases of torture would require more analysis to show the equivalence (or non-equivalence; I have not yet done the research to know which way it would come out).
I don’t see how you could perform a meaningful calculation without presuming which system is actually right. Who wants an efficient programme that yields the wrong results?
That very much depends on who benefits from those wrong results.
So I can say moral system A outperforms moral system B just where A serves my selfish purposes better than B. Hmm. Isn’t that a rather amoral way of looking at morality?
No.
It’s an honest assessment of the state of the world.
I’m not agreeing with that position, I’m just saying that there are folks who would prefer an efficient program that yielded the wrong results if it benefited them, and would engage in all manner of philosophicalish circumlocutions to justify it to themselves.
That’s not very relevant to the benefits or otherwise of consequentialism and deontology.