While Eliezer’s argument is still correct (that you should multiply to make decisions based on probabilistic knowledge), I see a perfectly rational and utilitarian explanation for choosing 1A and 2B in the problem as stated.
The clue lies in Colin Reid’s comment: “people do not ascribe a low positive utility to winning nothing or close to nothing—they actively fear it”. This fear is explained by Kingreaper: “in scenario 1B if you lose you know it’s your fault you got nothing”.
That makes the two cases, as stated, different. In game 1 the utility U1($0) is negative: it includes a sense of guilt (or shame) over having made the bad choice, which doesn’t seem possible in game 2 (because game 2 is stated in terms of abstract probabilities; see below). This makes the two inequalities compatible, e.g. (in thousands of dollars, with an illustrative guilt term of −1000):

24 > 33/34 · 27 + 1/34 · (−1000)

0.34 · 24 + 0.66 · 0 < 0.33 · 27 + 0.67 · 0
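A minimal numeric sketch of this in Python (the −1000 guilt term is just the illustrative figure from the inequalities above, and treating game 3 as a 34% chance of then facing game 1 is my reading of the “switch” rule, not something stated in the problem):

```python
# Payoffs in thousands of dollars. GUILT is the assumed disutility of a loss
# you know was "your fault" (the -1000 figure from the inequalities above).
GUILT = -1000

def eu(outcomes):
    """Expected utility of a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Game 1: losing under 1B is attributable to your choice, so it carries guilt.
eu_1a = eu([(1.0, 24)])
eu_1b = eu([(33/34, 27), (1/34, GUILT)])

# Game 2: stated as bare probabilities, so losing carries no guilt.
eu_2a = eu([(0.34, 24), (0.66, 0)])
eu_2b = eu([(0.33, 27), (0.67, 0)])

# Game 3 (game 2 with the "switch" rule), assuming it amounts to a 34% chance
# of then facing game 1: the guilt branch reappears under choice B.
eu_3a = eu([(0.34, 24), (0.66, 0)])
eu_3b = eu([(0.34 * 33/34, 27), (0.34 * 1/34, GUILT), (0.66, 0)])

print(f"1A={eu_1a:.2f} vs 1B={eu_1b:.2f} -> choose A: {eu_1a > eu_1b}")
print(f"2A={eu_2a:.2f} vs 2B={eu_2b:.2f} -> choose B: {eu_2b > eu_2a}")
print(f"3A={eu_3a:.2f} vs 3B={eu_3b:.2f} -> choose A: {eu_3a > eu_3b}")  # no pump
```

With that single guilt term, choosing 1A, 2B and 3A all maximize expected utility, which is exactly the consistency being claimed.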
Note that stating the game with the “switch” rule turns game 2 into one (let’s call it 3) in which the guilt/shame reappears, making U3 = U1; so a rational player with the described negative U1($0) would choose A in game 3, and there would be no money pump.
This solution to the paradox is less valid if it is made clear that the subject will be allowed to play the game many times.
Another interesting way to remove this as a possible solution would be to restate case 2 in more concrete terms, making it clear that, if you lose, you won’t get away without knowing that “it was your fault”:
4A. If a 100-sided die falls on 34 or less, win $24,000; otherwise win nothing.
4B. If a 100-sided die falls on 33 or less, win $27,000; otherwise win nothing.
Just to prevent the subject from pattern-matching instead of thinking, we should add the phrase: “note that if the die falls on 34 and you’ve chosen A, you win $24,000, but if you’ve chosen B, you get nothing”.
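To make the shared-die mechanics concrete, here is a small simulation sketch (the function name and structure are purely illustrative) in which one 100-sided roll resolves either choice, so the die-lands-on-34 case is explicit:

```python
import random

def play_game_4(choice, roll=None):
    """Resolve game 4 for choice 'A' or 'B' using one 100-sided die roll."""
    if roll is None:
        roll = random.randint(1, 100)  # the same roll would settle either choice
    if choice == 'A':
        return roll, 24_000 if roll <= 34 else 0
    else:
        return roll, 27_000 if roll <= 33 else 0

# The case the added phrase points at: the die lands exactly on 34.
print(play_game_4('A', roll=34))  # (34, 24000) -- A wins $24,000
print(play_game_4('B', roll=34))  # (34, 0)     -- B wins nothing: clearly "your fault"
```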
I believe game 4 is essentially equivalent to game 3 (the one with the switch).
I’ve checked Allais’ document, and it suffers from the same flaw: it is not an actual experiment in which people are asked to choose A or B and are actually allowed to play the game, but a questionnaire asking subjects what they would choose. This is not the same thing, among other reasons because it doesn’t force the experimenter or the subject to spell out the mechanics of the game (and hence it is never stated whether the subject will be exposed to that sense of shame, or even allowed to “chase the rabbit”).
It would be interesting to know the result of an actual experiment with this design, possibly with smaller figures to reduce the non-linearity of the utility functions (since that’s not what’s being discussed here), and with the innumerate filtered out (since they are beyond hope anyway).
If you could choose whether or not to have this guilt, would you choose to have it? Does it make you better off?