Certainly. If someone deserves death, that means that it is good for them to die, even if their death does not serve any further purpose. The death penalty is given to those who “deserve” to die.
In order for it to be a positive net utility for someone to die, the consequences of their living simply have to be worse than the consequences of their death. If someone has a stress-induced breakdown and goes on a shooting spree, it is better to kill them than not to kill them (by killing them you are averting more deaths), despite them not “deserving” to die in any meaningful sense.
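(To make that comparison concrete: a minimal sketch in Python, with all utility numbers invented purely for illustration; scoring a world-state by a single expected-deaths figure is an assumption of the sketch, not a full utility function.)

```python
# Toy act-level consequentialist comparison. Whether it is better that
# someone dies reduces to comparing the utilities of two world-states.
# All numbers here are hypothetical.

U_PER_DEATH = -1.0  # utility assigned to each death, by assumption

def world_utility(expected_deaths: float) -> float:
    # A world-state is scored here only by its expected death count.
    return expected_deaths * U_PER_DEATH

# Shooting-spree example: killing the shooter means one death;
# not killing them is expected to mean, say, five.
u_if_killed = world_utility(1.0)
u_if_spared = world_utility(5.0)

# The verdict depends only on the world-state comparison; "deserving"
# never enters the computation.
print(u_if_killed > u_if_spared)  # True: better to kill them
```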
The idea of someone deserving death is in itself deontological (some people must be punished, and that’s a rule), while talking about the net utility of an outcome is consequentialist. Ethics should be impersonal (that is, treat everyone equally), so a consequentialist ethical system that doesn’t approve of death in general should never approve of the death of any single person as an end in itself.
Generally, it seems to me that for a consequentialist, talk of an act or a person being evil should only be a computational shortcut over the real substance of moral reasoning (which consists of assigning utility to world-states). It is like the common example of an airplane, which we describe using aerodynamics because that is convenient, even though it really runs on the same fundamental laws as everything else. We tend to use these shortcuts reflexively, without really thinking about what we are trying to say in consequentialist terms.
Some disagree. And beware of “should” statements regarding “ethics”.
This.
Of course, the deontological view does have its place, specifically where it precommits to punishing undesirable behaviors even if there is no benefit to doing so after the behavior has occurred.
But would you want to “[punish] undesirable behaviors even if there is no benefit to doing so after the behavior has occurred”?
I would want to precommit to punishing criminals after the fact if I thought this would lead to a world where the pos-util of averted crime outweighed the neg-util of punishing people, but not if there were no benefit, and I would be doing this on consequentialist grounds. (I’m basically asking if the deontological view truly “has its place” in this scenario.)
Before the person made the choice of whether or not to do the undesirable behavior, I would want to have precommitted to punishing them if they did the behavior.
In the real world, punishing criminals (probably) does reduce crime. In a world where it didn’t, precommitment wouldn’t be a useful strategy. But it looks like we live in a world where it does.
Yes. And since we (probably) live in such a world, we can precommit to punishing criminals based on consequentialism. We don’t need the deontological view for this.
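(A sketch of that policy-level comparison, again with invented numbers; the deterrence fraction and the harm/cost figures are assumptions for illustration only.)

```python
# Toy policy-level comparison: precommitting to punish vs. not.
# All probabilities and utilities are hypothetical.

def policy_utility(base_crimes: float, deterrence_fraction: float,
                   harm_per_crime: float, cost_per_punishment: float,
                   punish: bool) -> float:
    # Expected utility of a world under a given punishment policy.
    if punish:
        crimes = base_crimes * (1.0 - deterrence_fraction)
        return -(crimes * harm_per_crime + crimes * cost_per_punishment)
    return -(base_crimes * harm_per_crime)

u_precommit = policy_utility(100.0, 0.6, 10.0, 2.0, punish=True)
u_abstain = policy_utility(100.0, 0.6, 10.0, 2.0, punish=False)

# In a world where punishment deters (deterrence_fraction > 0), the
# precommitment can win on purely consequentialist grounds; set
# deterrence_fraction = 0.0 and it loses, matching the point above.
print(u_precommit > u_abstain)  # True for these numbers
```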
I disagree with your implication that there is no benefit to punishing undesirable behaviors after they have occurred… there sometimes is.
In cases where there is in fact no benefit, though, the fact that holding a deontological view precommits me to doing so is not a reason for me to hold that view.
OK, thanks for clarifying.
FWIW, I don’t share your model of what it means for someone to deserve death.
Out of curiosity, what is your model?
That the consequences of their living are worse than the consequences of their death.
“Their death” is too abstract, I think. The world might be better if a person died suddenly by accident, but not better if they were killed.
Surely it’s no more abstract than “deserve death”? Such a person would deserve to die suddenly by accident, but not deserve to be killed.
Interesting. Does that include the secondary effects of their deaths acting as an example and a deterrent for future undesirable behavior? Because if so, you share my view precisely (that deontology is a useful approximation of consequentialism and allows for more credible precommitment to punishment).
It does include the secondary effects of their deaths acting as a deterrent.
But I don’t share your view that deontology allows for more credible precommitment to punishment, except in the somewhat trivial sense that such a precommitment is more credible to observers who consider deontological precommitments more credible than consequentialist ones.
That is, a commitment to punishment based on an adequate understanding of the consequences of punishment is no less likely to lead to punishment than a commitment to punishment based on deontological rules, and therefore a predictor ought to be no less likely to predict punishment from a committed consequentialist than from a committed deontologist. Of course, predictors in the real world don’t always predict as they ought, so it’s possible that a real-world predictor might consider my commitment less credible if it’s expressed in consequentialist terms.
It’s also possible they might consider it more so. Or that they might consider it more credible if I wear a red silk robe when I make it. Or any number of things.
It’s valuable to know what factors will make a claim of precommitment credible to my audience (whether I precommit or not), but that doesn’t make deontology any more valuable than red robes.
NOTE: As pointed out here, my use of “precommitment” here is potentially misleading. What I’m talking about is an assertion A that I will do X in the future, made in such a way that the existence of A (or, rather, the existence of other things that derive from A having existed in the past, such as memories of A or written records of A or what have you) creates benefits for actually doing X in the future (or, equivalently, costs to not doing so) that can outweigh the costs of doing X (not considering A).
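(The structure of that definition compresses into a single decision rule; here is a hypothetical sketch, where every name and number is mine rather than anything from the discussion above.)

```python
# Decision rule implied by the definition above: having asserted A,
# I actually do X when the A-derived benefits of doing X (reputation,
# credibility, avoided costs of being seen to break my word) outweigh
# the costs of doing X considered on its own. Numbers are hypothetical.

def will_do_x(cost_of_x: float, a_derived_benefit: float) -> bool:
    return a_derived_benefit > cost_of_x

# Punishing this particular offender costs 5 units and, absent the
# assertion, buys nothing; but failing to follow through on A would
# cost 8 units of credibility.
print(will_do_x(cost_of_x=5.0, a_derived_benefit=8.0))  # True
```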
Once you add TDT to consequentialism, the differences between it and intelligent deontology are pretty trivial.
Mm. Can you expand on what you mean by “intelligent deontology”? In particular, what determines whether a particular deontology is intelligent or not?
…whether it checks out as useful in a consequentialist sense… I see what you’re getting at.
What do you mean by “consequentialist precommitment”? Or are you including things like TDT and UDT in your definition of “consequentialist”?
I have no idea what might be meant by “conventionalist precommitment,” nor why you put that phrase in quotes, since I didn’t use it myself. Assuming you meant “consequentialist precommitment”, I mean a position I precommit to because I believe that precommitting to it has better consequences than not doing so.
I’m not exactly sure what you mean by your question about TDT/UDT, but in general I would agree that being known to operate under a TDT/UDT-like decision theory provides the same kinds of benefits I’m talking about here.
Thanks, fixed.
Of course, after you make the precommitment you are no longer a strict consequentialist.
Fair enough. Rather than talking about precommitments to X, I ought to have talked about assertions that I will X in the future, made in such a way that the benefits of actually Xing in the future that derive from the fact of my having made that assertion (in terms of my reputation and associated credibility boosts and so forth), and the costs of failing to X (ibid), are sufficiently high that I will X even in situations where Xing incurs significant costs. Correction duly noted.
Boy would I like a convenient way of referring to that second thing, though.