Suppose the utility of living is 1 and of dying is 0. (Since these are the only two possible outcomes, it doesn’t matter what values you choose, as long as dead < alive.) In case 1 you’re purchasing 1⁄6 of a utilon and in case 2, 1⁄3; therefore (assuming linear value of money to simplify things) you should pay twice as much in the second case.
The cited argument goes so obviously wrong that at first I had difficulty understanding how it could be seriously put forward, in its comparison between case B (pay to remove the one bullet from a 3-shooter) and case C (half a chance of execution and half a chance of case B). In case B you’re buying 1⁄3 of a utilon, in case C 1⁄6, hence you should pay twice as much in case B.
So their cases B and C don’t work as an intuition pump for me. I think the point is that in case C, you should only consider the utility of the branch in which you are not executed: if you are executed, then you have no use for the money anyway, so paying before the chance of execution is equivalent to being offered the choice of paying after escaping execution. But in taking the agent’s mortality into account like that, I think we’re stepping outside standard decision theory. Quantum suicide, anyone? If I leave that consideration aside, then clearly a certainty of something is worth twice a 50% chance of it, which is one of the basic assumptions of the von Neumann–Morgenstern utility theorem, and B is worth twice C.
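This naive line of reasoning (which the replies below dispute) can be sketched as: willingness to pay is the reduction in death probability times the value of survival. The function name and numbers here are mine, purely for illustration.

```python
def naive_willingness_to_pay(p_death_before, p_death_after, value_of_life=1.0):
    """Naive rule: pay in proportion to the utilons purchased, i.e. the
    reduction in death probability times the value of living.  Assumes
    money has linear utility and ignores the stipulation that money is
    worthless to you when dead."""
    return (p_death_before - p_death_after) * value_of_life

case_1 = naive_willingness_to_pay(1/6, 0)  # remove the one bullet: buys 1/6 utilon
case_B = naive_willingness_to_pay(1/3, 0)  # remove the bullet from a 3-shooter
case_C = 0.5 * case_B                      # halved by the 50% chance of execution
# On this reasoning, case B is worth exactly twice case C.
```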
BTW, here is an original paper by Jeffrey on the problem, but paywalled, and I can’t get to it.
Note that “Zeckhauser’s problem” sometimes refers to a different version, with just one bullet in case 2.
ETA: The different version is here, page 11, and differs in case 2, which has just one bullet in the six-shooter, and you can pay to remove it. The author there says that the two cases have the same value, because in both cases you are removing a 1⁄6 chance of dying. But if he is right, and Jeffrey is right about the original problem, then both versions of case 2 have the same value: you should pay the same amount to remove all the bullets whether there is one or two. Sounds like there’s a divide-by-zero error somewhere, because there seems to be a straight road from there to proving that you should pay exactly the same amount to avoid any non-zero chance of death.
ETA2: In Dekker’s game-show version referenced elsewhere in the comments, the same amount should be paid in both cases, but that’s because you’re not paying with money you are certain to have, only with money you have a 50% chance of having. You’re getting half the benefit but also paying only half the expected cost, so the numbers come out the same. I can see the correspondence with the Russian roulette version, but unintuitive hypotheses (in this case, about the lack of value of anything to you when you’re dead) make for unintuitive intuition pumps.
Cases B and C are equivalent according to standard decision theory.
Let L be the difference in utility between living-and-not-paying and dying. Let the difference in utility between living-and-paying and living-and-not-paying be X. Assume that you have no control over what happens if you die, so that the utility of dying is the same no matter what you decided to do. Normalize so that the utility of dying is 0.
In Case B, the expected utility of not-paying is 2⁄3 · L + 1⁄3 · 0 = 2⁄3 L. The expected utility of paying is L − X. Thus, you agree to pay if and only if 2⁄3 L ≤ L − X. That is, you pay if and only if X ≤ 1⁄3 L.
In Case C, the expected utility of not-paying is 1⁄2 · 0 + 1⁄2 · 2⁄3 L = 1⁄3 L. The expected utility of paying is 1⁄2 · 0 + 1⁄2 (L − X) = 1⁄2 (L − X). Thus, you agree to pay if and only if 1⁄3 L ≤ 1⁄2 (L − X). That is, you pay if and only if X ≤ 1⁄3 L.
Thus, in both cases, you will agree to pay the same amounts.
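The calculation above can be checked numerically. A minimal sketch, using the L and X defined above and the assumption that death has utility 0 whether or not you paid:

```python
def pays(L, X, p_survive_if_not_pay, p_reach_game=1.0):
    """Agree to pay iff EU(pay) >= EU(not pay).  The utility of dying is
    normalized to 0 regardless of your decision, and paying guarantees
    survival in any branch where the game is actually reached."""
    eu_not_pay = p_reach_game * p_survive_if_not_pay * L
    eu_pay = p_reach_game * (L - X)
    return eu_pay >= eu_not_pay

# Case B: one bullet in a 3-shooter (survive with prob 2/3 if you don't pay).
# Case C: 50% execution first, then Case B (p_reach_game = 0.5).
L = 1.0
decisions_match = all(
    pays(L, X, 2/3) == pays(L, X, 2/3, p_reach_game=0.5)
    for X in (0.2, 1/3, 0.4)
)
# Both cases share the threshold X <= L/3: the 1/2 multiplies both sides
# of the comparison and cancels out.
```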
I understand the argument, but it absolutely depends on your estate being of no value to you when you’re dead. OK, that’s simply one of the rules given in the problem, but in reality people generally care very much about the posthumous disposal of their assets. The game-show version makes the problem much clearer, because one can very easily imagine a game show run exactly according to those rules.
But then, by making it much clearer, the paradox is reduced: it is easy to understand the equality of the two cases, even if one’s first guess was wrong.
I also reject the claim that C and B are equivalent (unless the utility of survival is 0, +infinity, or -infinity). If I accepted their line of argument, then I would also have to answer the following set of questions with a single answer.
Question E: Given that you’re playing Russian Roulette with a full 100-shooter, how much would you pay to remove all 100 of the bullets?
Question F: Given that you’re playing Russian Roulette with a full 1-shooter, how much would you pay to remove the bullet?
Question G: With 99% certainty, you will be executed. With 1% certainty you will be forced to play Russian Roulette with a full 1-shooter. How much would you pay to remove the bullet?
Question H: Given that you’re playing Russian Roulette with a full 100-shooter, how much would you pay to remove one of the bullets?
You reject the claim, but can you point out a flaw in their argument?
I claim that the answers to E, F, and G should indeed be the same, but H is not equivalent to them. This should be intuitive. Their line of argument does not claim H is equivalent to E/F/G—do the math out and you’ll see.
Actually my revised opinion, as expressed in my reply to Tyrell_McAllister, is that the authors’ analysis is correct given the highly unlikely set-up. In a more realistic scenario, I accept the equivalences A~B and C~D, but not B~C.
I really don’t know what you have in mind here. Do you also claim that cases A, B, C are equivalent to each other but not to D?
Oops, sorry! I misread. My bad. I would agree that they are all equivalent.
What do you make of my argument here?
After further reflection, I want to say that the problem is wrong (and several other commenters have said something similar): the premise that your money buys you no expected utility post mortem is generally incompatible with your survival having finite positive utility.
Your calculation is of course correct insofar as it stays within the scope of the problem. But note that it goes through exactly the same for my cases F and G. There you’ll end up paying iff X ≤ L, and thus you’ll pay the same amount to remove just 1 bullet from a full 100-shooter as to remove all 100 of them.
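Under the same assumptions as Tyrell_McAllister’s calculation, the willingness-to-pay thresholds for all four questions come out identical, which is the reductio being claimed here. A sketch (the function and its encoding of the cases are mine):

```python
def threshold_fraction(q_not_pay, q_pay, g=1.0):
    """You pay iff X <= t * L.  With death worth 0 and money worthless
    when dead: EU(not pay) = g*q_not_pay*L and EU(pay) = g*q_pay*(L - X),
    where q is the survival probability and g the chance of reaching the
    game at all.  Setting the two equal gives t = (q_pay - q_not_pay)/q_pay;
    note that g cancels out entirely."""
    return (q_pay - q_not_pay) / q_pay

t_E = threshold_fraction(0, 1)          # remove all 100 bullets from a full 100-shooter
t_F = threshold_fraction(0, 1)          # remove the bullet from a full 1-shooter
t_G = threshold_fraction(0, 1, g=0.01)  # 99% execution, 1% chance of the 1-shooter
t_H = threshold_fraction(0, 1/100)      # remove 1 of 100 bullets from a full 100-shooter
# All four thresholds equal 1: in every case you pay iff X <= L.
```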