These arguments—the Bomb argument and Torture versus Dust Specks—suffer from an ambiguity between telling the reader what to do given their existing utility function (UF)/preferences, telling the reader to have a different UF, and saying what an abstract agent, but not the reader, would do.
Suppose the reader has a well-defined utility function in which death or torture is set to minus infinity. Then the writer can’t persuade them to trade off death or torture against any finite amount of utility. So in what sense is the reader wrong about their own preferences?
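To make that concrete, here is a minimal sketch; the probability and the dollar figure are placeholders assumed purely for illustration:

```python
# With a literally infinite disutility on death/torture, no finite upside and
# no nonzero probability of the bad outcome can rescue the expected utility.
p_bad = 1e-24            # assumed tiny probability of death/torture
u_bad = float("-inf")    # death/torture pegged at minus infinity
u_good = 1_000_000       # any finite utility for the upside

expected_utility = p_bad * u_bad + (1 - p_bad) * u_good
print(expected_utility)  # -inf: the finite term never matters
```

So as long as the reader really does hold that utility function, any offer carrying a nonzero chance of the infinitely bad outcome evaluates to minus infinity, which is the point being made here.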
Maybe their preferences don’t have the right mathematical structure to be a utility function in the technical sense. But then why would someone reformat their preferences into an ideal von Neumann form? What’s the advantage? It’s sometimes said that coherent preferences prevent you from being money-pumped, or Dutch-booked. But that doesn’t sound nearly as bad as being killed or tortured. If a perfectly rational decision theorist would accept being killed or tortured, then I don’t want to be one.
Or maybe these arguments just describe ideal rationalists, and aren’t intended to persuade the reader at all.
Suppose the reader has a well-defined utility function in which death or torture is set to minus infinity. Then the writer can’t persuade them to trade off death or torture against any finite amount of utility. So in what sense is the reader wrong about their own preferences?
I think the original Bomb scenario should have come with, say, a $1,000,000 value for “not being blown up”. That would have allowed for easy and agreed-upon expected utility calculations.
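For example, something like the following rough policy-level expected-value comparison, where the one-error-in-a-trillion-trillion reliability figure is taken from the usual statement of the scenario (as I recall it) and every other number is an assumption for illustration:

```python
# Rough policy-level expected-value comparison for the Bomb scenario.
# Assumed numbers: the predictor errs once in a trillion trillion runs,
# "not being blown up" is worth $1,000,000, and taking Right costs a flat $100.
p_predictor_error = 1e-24
value_not_blown_up = 1_000_000
cost_of_right = 100

ev_take_left = -p_predictor_error * value_not_blown_up   # about -1e-18 dollars
ev_take_right = -cost_of_right                           # -100 dollars

print(ev_take_left > ev_take_right)  # True on these numbers
```

On those assumed numbers the calculation is trivial and everyone can check it; with the value of not being blown up left unspecified (or treated as infinite), there is nothing to calculate and the argument turns into a dispute about preferences instead.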
I think sometimes writers mix up moral theories with decision theories.
Decision theory problems are best expressed using reasonably modest amounts of money, because even if readers don’t themselves have linear utility in money over that range, it’s something that’s easily imagined.
Moral theories are usually best expressed in non-monetary terms, but going straight to torture and murder is pretty lazy in my opinion. Fine, they’re things that most people think are “generally wrong” without being politically hot, but they still seem to bypass rationality, which makes discussion go stupid.
This Bomb example did the stupid thing of balancing torture, death, and the annihilation of all intelligent life in the universe against money, threw in implausibly small probabilities and a bunch of other crap, and also left such huge holes in the specification that their argument didn’t even work. Pretty much a dumpster fire of what not to do when illustrating some fine points of decision theory.
I don’t think morality enters into this at all. I don’t see any moral concerns in the described scenario, only prudential ones (i.e., concerns about how best to satisfy one’s own values).
As such, your reply seems to me to be non-responsive to TAG’s comment…
TAG’s comment was in part about the ambiguity between telling the reader what to do given their existing UF/preferences, and telling the reader to have a different UF. The former is an outcome of recommending a decision theory, while the latter is the outcome of recommending a moral theory. Hence my comment about how to recognize distinctions between them as a reader, and differences in properties of the scenarios that are relevant as a writer.
I also evaluated this scenario (and implicitly, torture vs dust specks) for how well it illustrates decision theory aspects, and found that it does so poorly in the sense that it includes elements that are more suited to moral theory scenarios. I hoped this would go some way toward explaining why these scenarios might indeed seem ambiguous between telling the reader what to do, and telling the reader what to value.
TAG’s comment was in part about the ambiguity between telling the reader what to do given their existing UF/preferences, and telling the reader to have a different UF. The former is an outcome of recommending a decision theory, while the latter is the outcome of recommending a moral theory.
I don’t think this is right. Suppose that you prefer apples to pears, pears to grapes, and grapes to apples. I tell you that this is irrational (because intransitive), and that you should alter your preferences, on pain of Dutch-booking (or some such).
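As a minimal sketch of how that Dutch-booking plays out (the one-cent swap fee and the three trips around the cycle are assumptions for illustration):

```python
# Money pump against cyclic preferences: apples > pears > grapes > apples.
# A trader repeatedly offers the fruit you prefer to the one you hold,
# charging a cent per swap; after each full loop you are back where you
# started, minus three cents.
prefers = {("apples", "pears"), ("pears", "grapes"), ("grapes", "apples")}

holding, paid = "grapes", 0.0
for offered in ["pears", "apples", "grapes"] * 3:   # three trips around the cycle
    if (offered, holding) in prefers:               # you prefer what's offered...
        holding, paid = offered, paid + 0.01        # ...so you pay a cent to trade up
print(holding, round(paid, 2))                      # "grapes 0.09": same fruit, poorer
```

Whether losing a few cents that way is bad enough to justify reformatting one’s preferences is exactly the question TAG raises above.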
Is that a moral claim? It does not seem to me to be any such thing; and I think that most moral philosophers would agree with me…
Sure, there are cases that aren’t moral theory discussions in which you might be told to change your values. I didn’t claim that my options were exhaustive, though I did make an implicit claim that those two seemed to cover the vast majority of potential ambiguity in cases like this. I still think that claim has merit.
More explicitly, I think the common factor here is assigning the outcomes utilities whose ratio is finite, and then arriving at an unpalatable conclusion. Setting aside errors in the presentation of the scenario for now, there are (at least) two ways to view the outcome (see the numerical sketch after the two options below):
1. FDT says that you should let yourself burn to death in some scenario, because the ratio of disutility of burning to death vs paying $100 is not infinite. This is ridiculous, therefore FDT is wrong.
2. FDT says that you should let yourself burn to death in some scenario, because the ratio of disutility of burning to death vs paying $100 is not infinite. This is ridiculous, therefore the utilities are wrong.
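Here is the finite-ratio point in numbers; the ratio R is an assumption chosen purely for illustration:

```python
# If burning to death is judged R times as bad as paying $100 (finite R),
# then an FDT-style policy comparison favours "take Left" whenever the
# predictor's error rate is below 1/R; for any finite R, a stated error
# rate like one in a trillion trillion clears that bar easily.
R = 10_000_000                    # assumed: burning is 10 million times as bad as -$100
break_even_error_rate = 1 / R     # 1e-07
stated_error_rate = 1e-24         # the sort of figure the scenario stipulates

print(stated_error_rate < break_even_error_rate)  # True: the unpalatable conclusion follows
```

So the disagreement really is about whether any finite R is appropriate (view 2) or about the decision theory that turns a finite R into that recommendation (view 1).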
Questions like “is an increased probability (no matter how small) of someone suffering a horrible painful death always worse than a moderate amount of money?” are typical questions of moral theory rather than decision theory.
The ambiguity would go away if the stakes were simply money on both sides.
Questions like “is an increased probability (no matter how small) of someone suffering a horrible painful death always worse than a moderate amount of money?” are typical questions of moral theory rather than decision theory.
Er, no. I don’t think this is right either. Since “someone” here refers to yourself, the question is: “is an increased probability (no matter how small) of you suffering a horrible painful death, always worse than a moderate amount of money?” This is not a moral question; it’s a question about your own preferences.
(Of course, it’s also not the question we’re being asked to consider in the “Bomb” scenario, because there we’re not faced with a small probability of horrible painful death, or a small increase in the probability of horrible painful death, but rather a certain horrible painful death; and we’re comparing that to the loss of a moderate amount of money. This also is not a moral question, of course.)
The ambiguity would go away if the stakes were simply money on both sides.
Well, first of all, that would make the problem less interesting. And surely we don’t want to say “this decision theory only handles questions of money; as soon as we ask it to evaluate questions of life and death, it stops giving sensible answers”?
Secondly, I don’t think that any problem goes away if there’s just money on both sides. What if it were a billion dollars you’d have to pay to take Left, and a hundred to take Right? Well… in that case, honestly, the scenario would make even less sense than before, because:
What if I don’t have a billion dollars? Am I now a billion dollars in debt? To whom? I’m the last person in existence, right?
What’s the difference between losing a hundred dollars and losing a billion dollars, if I’m the only human in existence? What am I even using money for? What does it mean to say that I have money?
Can I declare myself to be a sovereign state, issue currency (conveniently called the “dollar”), and use it to pay the boxes? Do they have to be American dollars? Can I be the President of America? (Or the King?) Who’s going to dispute my claim?
And so on…
I’ve posted a similar scenario based purely on money here.
I avoid “burning to death” outcomes in my version because some people do appear to endorse theoretically infinite disutilities for such things, even when they don’t live by them. Likewise there are no insanely low probabilities of failure that contradict other properties of the scenario.
It’s just a straightforward scenario in which FDT says you should choose to lose $1000 whenever that option is available, despite always having an available option to lose only $100.