It does include the secondary effects of their deaths acting as a deterrent.
But I don’t share your view that deontology allows for more credible precommitment to punishment, except in the somewhat trivial sense that such a precommitment is more credible to observers who consider deontological precommitments more credible than consequentialist ones.
That is, a commitment to punishment based on an adequate understanding of the consequences of punishment is no less likely to lead to punishment than a commitment to punishment based on deontological rules, and therefore a predictor ought to be no less likely to predict punishment from a committed consequentialist than a committed deontologist. Of course, predictors in the real world don’t always predict as they ought, so it’s possible that a real-world predictor might consider my commitment less credible if it’s expressed consequentially.
It’s also possible they might consider it more so. Or that they might consider it more credible if I wear a red silk robe when I make it. Or any number of things.
It’s valuable to know what factors will make a claim of precommitment credible to my audience (whether I precommit or not), but that doesn’t make deontology any more valuable than red robes.
NOTE: As pointed out here, my use of “precommitment” here is potentially misleading. What I’m talking about is an assertion A that I will do X in the future, made in such a way that the existence of A (or, rather, the existence of other things that derive from A having existed in the past, such as memories of A or written records of A or what have you) creates benefits for actually doing X in the future (or, equivalently, costs to not doing so) that can outweigh the costs of doing X (not considering A).
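The condition being described can be sketched as a toy decision rule. Everything here (the function name, the particular payoff terms, the numbers) is a hypothetical illustration of the structure in the paragraph above, not anything asserted in the thread:

```python
# Toy model of the structure described above: having made assertion A
# changes the payoffs around doing X, because keeping one's word has
# reputational benefits and breaking it has credibility costs that
# exist only because A was made.

def will_do_x(cost_of_x: float,
              reputation_benefit_of_following_through: float,
              credibility_cost_of_breaking_word: float) -> bool:
    """Follow through on X iff the payoffs derived from having made
    assertion A outweigh the direct cost of doing X."""
    derived_stakes = (reputation_benefit_of_following_through
                      + credibility_cost_of_breaking_word)
    return derived_stakes > cost_of_x

# Without the assertion, a costly X isn't worth doing...
print(will_do_x(cost_of_x=10.0,
                reputation_benefit_of_following_through=0.0,
                credibility_cost_of_breaking_word=0.0))   # prints False

# ...but the stakes created by having asserted A can tip the balance.
print(will_do_x(cost_of_x=10.0,
                reputation_benefit_of_following_through=4.0,
                credibility_cost_of_breaking_word=8.0))   # prints True
```

On this model, a consistent consequentialist follows through for exactly the same reason a credible "precommitted" agent does: the consequences of not following through have been made worse than the cost of following through.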
Once you add TDT to consequentialism, the differences between it and intelligent deontology are pretty trivial.
Mm. Can you expand on what you mean by “intelligent deontology”? In particular, what determines whether a particular deontology is intelligent or not?
...whether it checks out as useful in a consequentialist sense… I see what you’re getting at.
What do you mean by “consequentialist precommitment”? Or are you including things like TDT and UDT in your definition of “consequentialist”?
I have no idea what might be meant by “conventionalist precommitment,” nor why you put that phrase in quotes, since I didn’t use it myself. Assuming you meant “consequentialist precommitment”, I mean a position I precommit to because I believe that precommitting to it has better consequences than not doing so.
I’m not exactly sure what you mean by your question about TDT/UDT, but in general I would agree that being known to operate under a TDT/UDT-like decision theory provides the same kinds of benefits I’m talking about here.
Thanks, fixed.
Of course, after you make the precommitment you are no longer a strict consequentialist.
Fair enough. Rather than talking about precommitments to X, I ought to have talked about assertions that I will X in the future, made in such a way that the benefits of actually Xing in the future that derive from the fact of my having made that assertion (in terms of my reputation and associated credibility boosts and so forth) and the costs of failing to X (ibid) are sufficiently high that I will X even in situations where Xing incurs significant costs. Correction duly noted.
Boy would I like a convenient way of referring to that second thing, though.