I don’t think morality enters into this at all. I don’t see any moral concerns in the described scenario, only prudential ones (i.e., concerns about how best to satisfy one’s own values).
As such, your reply seems to me to be non-responsive to TAG’s comment…
TAG’s comment was in part about the ambiguity between telling the reader what to do given their existing UF/preferences, and telling the reader to have a different UF. The former is an outcome of recommending a decision theory, while the latter is the outcome of recommending a moral theory. Hence my comment about how to recognize distinctions between them as a reader, and differences in properties of the scenarios that are relevant as a writer.
I also evaluated this scenario (and implicitly, torture vs dust specks) for how well it illustrates decision theory aspects, and found that it does so poorly in the sense that it includes elements that are more suited to moral theory scenarios. I hoped this would go some way toward explaining why these scenarios might indeed seem ambiguous between telling the reader what to do, and telling the reader what to value.
TAG’s comment was in part about the ambiguity between telling the reader what to do given their existing UF/preferences, and telling the reader to have a different UF. The former is an outcome of recommending a decision theory, while the latter is the outcome of recommending a moral theory.
I don’t think this is right. Suppose that you prefer apples to pears, pears to grapes, and grapes to apples. I tell you that this is irrational (because intransitive), and that you should alter your preferences, on pain of Dutch-booking (or some such).
Is that a moral claim? It does not seem to me to be any such thing; and I think that most moral philosophers would agree with me…
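(To make the Dutch-booking point concrete, here is a minimal money-pump sketch. The trading fee, the starting wealth, and the rule that the agent pays to trade up to anything it strictly prefers are my own assumptions for illustration.)

```python
# A money-pump against the cyclic preferences above:
# apples > pears, pears > grapes, grapes > apples.
# The fee and starting wealth are made-up numbers.

prefers = {("apples", "pears"), ("pears", "grapes"), ("grapes", "apples")}

def accepts_trade(holding, offered):
    """The agent pays a small fee for any item it strictly prefers to what it holds."""
    return (offered, holding) in prefers

fee = 0.01
wealth = 10.00
holding = "apples"

# The bookie offers items in an order that walks around the preference cycle.
for offered in ["grapes", "pears", "apples"] * 5:   # five laps around the cycle
    if accepts_trade(holding, offered):
        holding = offered
        wealth -= fee

print(holding, round(wealth, 2))   # "apples" again, but 0.15 poorer
```

The agent ends every lap holding exactly what it started with, minus the fees; no moral premise appears anywhere in the argument.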
Sure, there are cases that aren’t moral theory discussions in which you might be told to change your values. I didn’t claim that my options were exhaustive, though I did make an implicit claim that those two seemed to cover the vast majority of potential ambiguity in cases like this. I still think that claim has merit.
More explicitly, I think that the common factor here is assigning the outcomes utilities whose ratio is finite, and arriving at an unpalatable conclusion. Setting aside errors in the presentation of the scenario for now, there are (at least) two ways to view the outcome (a rough numerical sketch follows at the end of this comment):
FDT says that you should let yourself burn to death in some scenario, because the ratio of disutility of burning to death vs paying $100 is not infinite. This is ridiculous, therefore FDT is wrong.
FDT says that you should let yourself burn to death in some scenario, because the ratio of disutility of burning to death vs paying $100 is not infinite. This is ridiculous, therefore the utilities are wrong.
Questions like “is an increased probability (no matter how small) of someone suffering a horrible painful death always worse than a moderate amount of money” are typical questions of moral theory rather than decision theory.
The ambiguity would go away if the stakes were simply money on both sides.
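Here is the rough numerical sketch mentioned above: a policy-level expected-cost comparison in a Bomb-style setup. The predictor’s error rate and the dollar-equivalent disutilities are assumptions for illustration, not taken from any particular statement of the scenario.

```python
# Policy-level expected cost in a Bomb-style setup (illustrative numbers only).

ERROR_RATE = 1e-24        # assumed chance the predictor is wrong
COST_OF_PAYING = 100      # dollars lost by taking the "pay up" option

def expected_costs(disutility_of_burning):
    """Expected dollar-equivalent cost of each policy, evaluated before the prediction."""
    always_left = ERROR_RATE * disutility_of_burning   # you only burn if the predictor erred
    always_right = COST_OF_PAYING                      # you always pay, and never burn
    return always_left, always_right

for disutility in (1e6, 1e20, 1e30, float("inf")):
    left, right = expected_costs(disutility)
    best = "Left" if left < right else "Right"
    print(f"disutility of burning = {disutility:.1e}: Left = {left:.3g}, Right = {right}, take {best}")
```

The recommendation only flips to “Right” once the disutility of burning is treated as astronomically large (or infinite) relative to $100, which is exactly the dividing line between the two readings above.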
Questions like “is an increased probability (no matter how small) of someone suffering a horrible painful death always worse than a moderate amount of money” are typical questions of moral theory rather than decision theory.
Er, no. I don’t think this is right either. Since “someone” here refers to yourself, the question is: “is an increased probability (no matter how small) of you suffering a horrible painful death, always worse than a moderate amount of money?” This is not a moral question; it’s a question about your own preferences.
(Of course, it’s also not the question we’re being asked to consider in the “Bomb” scenario, because there we’re not faced with a small probability of horrible painful death, or a small increase in the probability of horrible painful death, but rather a certain horrible painful death; and we’re comparing that to the loss of a moderate amount of money. This also is not a moral question, of course.)
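For contrast with the policy-level sketch earlier in the thread, here is the same comparison made at the level of acts, after conditioning on the observation. The reliability figure and the dollar-equivalent disutility are, again, assumptions for illustration.

```python
# Act-level comparison, given that a near-perfect predictor's note says the bomb is in Left.
# Illustrative assumptions only.

P_NOTE_CORRECT = 1 - 1e-24      # assumed reliability of the setup, given the note
DISUTILITY_OF_BURNING = 1e9     # assumed large-but-finite dollar equivalent
COST_OF_PAYING = 100

cost_left = P_NOTE_CORRECT * DISUTILITY_OF_BURNING   # take Left: burn unless the note itself is wrong
cost_right = COST_OF_PAYING                          # take Right: pay $100, no bomb

print(f"Left ≈ {cost_left:.3g}, Right = {cost_right}")
print("take Right" if cost_right < cost_left else "take Left")
```

For any finite disutility above $100, the act-level comparison favors Right; the tension between this act-level verdict and the policy-level one is what the disagreement over the scenario turns on.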
The ambiguity would go away if the stakes were simply money on both sides.
Well, first of all, that would make the problem less interesting. And surely we don’t want to say “this decision theory only handles questions of money; as soon as we ask it to evaluate questions of life and death, it stops giving sensible answers”?
Secondly, I don’t think that any problem goes away if there’s just money on both sides. What if it were a billion dollars you’d have to pay to take Left, and a hundred to take Right? Well… in that case, honestly, the scenario would make even less sense than before, because:
What if I don’t have a billion dollars? Am I now a billion dollars in debt? To whom? I’m the last person in existence, right?
What’s the difference between losing a hundred dollars and losing a billion dollars, if I’m the only human in existence? What am I even using money for? What does it mean to say that I have money?
Can I declare myself to be a sovereign state, issue currency (conveniently called the “dollar”), and use it to pay the boxes? Do they have to be American dollars? Can I be the President of America? (Or the King?) Who’s going to dispute my claim?
And so on…
I’ve posted a similar scenario which is based purely on money here.
I avoid “burning to death” outcomes in my version because some people do appear to endorse theoretically infinite disutilities for such outcomes, even if they don’t live by them. Likewise, there are no absurdly low probabilities of failure that contradict other properties of the scenario.
It’s just a straightforward scenario in which FDT says you should choose to lose $1000 whenever that option is available, despite always having an available option to lose only $100.