The Allais Paradox and the Dilemma of Utility vs. Certainty
Related to: The Allais Paradox, Zut Allais, Allais Malaise, and Pascal’s Mugging
You’ve probably heard the Allais Paradox before, where you choose one of the two options from each set:
Set One:
$24000, with certainty.
97% chance of $27000, 3% chance of nothing.
Set Two:
34% chance of $24000, 66% chance of nothing.
33% chance of $27000, 67% chance of nothing.
From set one, which of the two would you choose? Which of the two is the most intuitively appealing? Which of the two would you choose if your only goal is to maximize the amount of dollars you receive? And most importantly, how do you justify your choice?
From set two, which of the two would you choose? Which of the two is the most intuitively appealing? Which of the two would you choose if your only goal is to maximize the amount of dollars you receive? And most importantly, how do you justify your choice?
The reason this is called a “paradox” is that most people choose option 1 from set one and option 2 from set two, despite set two being the same as a 34% chance of getting to choose from set one.
This is best seen when we shut up and multiply. When we run some naïve expected utility calculations and make the big assumption of linear utility for money (which fits the third question above, where your only goal is to maximize dollars), we get:
U(Set One, Choice 1) = 1.00 * U($24000) = 24000
U(Set One, Choice 2) = 0.97 * U($27000) = 26190
U(Set Two, Choice 1) = 0.34 * U($24000) = 8160
U(Set Two, Choice 2) = 0.33 * U($27000) = 8910
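Here is a minimal sketch of the same calculation in code, still under the linear-utility assumption (so “utility” is just the expected dollar payoff):

```python
# Expected payoff of each gamble, assuming utility is linear in dollars.
# Each gamble is a list of (probability, payoff) pairs.

def expected_payoff(gamble):
    return sum(p * x for p, x in gamble)

set_one = {
    "Choice 1": [(1.00, 24_000)],
    "Choice 2": [(0.97, 27_000), (0.03, 0)],
}
set_two = {
    "Choice 1": [(0.34, 24_000), (0.66, 0)],
    "Choice 2": [(0.33, 27_000), (0.67, 0)],
}

for set_name, choices in [("Set One", set_one), ("Set Two", set_two)]:
    for label, gamble in choices.items():
        print(f"{set_name}, {label}: {expected_payoff(gamble):,.0f}")
# Set One, Choice 1: 24,000    Set One, Choice 2: 26,190
# Set Two, Choice 1: 8,160     Set Two, Choice 2: 8,910
```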
So to the degree that it is rational to want more money (you can always donate anything you don’t want), it seems like we should want Choice 2 from both sets. But why do people only realize this in Set Two?
The two competing theories are the “people are silly” theory and the “it is perfectly rational to bet on certainty” theory. What if you go for the 97% chance and miss out on such a large sum? It seems like you would intuitively want to just take your $24000 and run, but according to expected utility, you’re giving up $2190 by doing so.
~
The Problem With “It is Perfectly Rational to Bet on Certainty”
To put some pressure on this theory, all we have to do is introduce set three right here:
Set Three:
$24000, with certainty
99.99% chance of $24 million, 0.01% chance of nothing.
From set three, which of the two would you choose? Which of the two is the most intuitively appealing? Which of the two would you choose if your only goal is to maximize the amount of dollars you receive? And most importantly, how do you justify your choice?
I think you’d intuitively say that only a fool would cling to certainty so much that he or she wouldn’t be willing to take an almost guaranteed $24 million. Why is it okay to give up certainty on some bets and not others, regardless of what expected utility says?
If you had a choice between “$24000 with certainty” and “90% chance of $X”, is there really no value for X that would make you change your mind?
If you had a choice between “$24000 with certainty” and “X% chance of $24001”, what is the smallest value of X that would make you switch?
~
The Problem With “People Are Silly”
However, relying solely on expected utility seems to make you vulnerable to a dilemma very similar to Pascal’s Mugging. Consider set four where the difference is a lot more blatant:
Set Four:
$24000, with certainty
0.0001% chance of $27 billion, 99.9999% chance of nothing.
From set four, which of the two would you choose? Which of the two is the most intuitively appealing? Which of the two would you choose if your only goal is to maximize the amount of dollars you receive? And most importantly, how do you justify your choice?
When we go solely by the expected utility calculations we get:
U(Set Four, Choice 1) = 1.00 * U($24000) = 24000
U(Set Four, Choice 2) = 0.000001 * U($27000000000) = 27000
Shutting up and multiplying tells us that if we go with Set Four, Choice 1 we are forfeiting $3000 in expectation. Our intuition tells us that if we go with Set Four, Choice 2 we just chose a lottery ticket over $24000.
So here’s the real dilemma: you have to pay $10000 to play the game. The expected utility calculations now say choice 1 yields $14000 and choice 2 yields $17000.
So which choice do you take? And how do you defend your choice as the rational one?
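For reference, here is the arithmetic behind that dilemma as a short sketch, still under the same (big) assumption of linear utility for money:

```python
def expected_payoff(gamble):
    return sum(p * x for p, x in gamble)

fee = 10_000  # the cost to play the game
choice_1 = [(1.0, 24_000)]
choice_2 = [(0.000001, 27_000_000_000), (0.999999, 0)]

for label, gamble in [("Choice 1", choice_1), ("Choice 2", choice_2)]:
    gross = expected_payoff(gamble)
    print(f"{label}: expected payoff {gross:,.0f}, net of fee {gross - fee:,.0f}")
# Choice 1: expected payoff 24,000, net of fee 14,000
# Choice 2: expected payoff 27,000, net of fee 17,000
```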
And if your answer is that your utility for money is not linear, check to see if that’s your real rejection. What would you do if you were going to donate the money? What would you do if you were in the least convenient possible world where your utility function for money is linear?
This is a very, very, very safe assumption when talking about $27 billion.
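To make the non-linearity point concrete, here is a sketch using a logarithmic utility of total wealth. The log form and the $50,000 starting wealth are purely illustrative assumptions, not anyone’s actual utility function:

```python
import math

wealth = 50_000  # illustrative starting wealth; an assumption for the example

def expected_log_utility(gamble):
    # Utility of total wealth is assumed to be ln(wealth), i.e. sharply concave.
    return sum(p * math.log(wealth + x) for p, x in gamble)

choice_1 = [(1.0, 24_000)]
choice_2 = [(0.000001, 27_000_000_000), (0.999999, 0)]

print(expected_log_utility(choice_1))  # ~11.21
print(expected_log_utility(choice_2))  # ~10.82
# With a concave utility like this, the certain $24,000 wins Set Four by a wide
# margin, even though its expected *payoff* is $3,000 lower.
```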
Then I would have radically different intuitions and responses to such tradeoffs and the answer would be obvious. This is like asking:
“Would you eat cow manure? No? Well what about in the least convenient possible world where eating cow manure is your sole and ultimate desire?”
What if the problem were phrased like this?
Set Four:
1.) Save 24000 lives, with certainty
2.) 0.0001% chance of saving 27 billion lives, 99.9999% chance of saving no lives.
It’s not obvious that our utility for lives saved is linear, either. For example, I would confidently choose killing 50% of the world’s population over killing everyone with 50% probability, because in the former case humanity is likely to recover.
That said, it seems to be close enough to linear when the numbers are sufficiently small, and I’m ready to accept the conclusion that shutting up and multiplying is better than following my unexamined intuitions.
I’d need to know what the total human population is before making this decision...
For the purposes of this obscure hypothetical, let’s say the total human population is arbitrarily 40 billion.
You mean this?:
1.) 26999976000 people die, with certainty.
2.) 0.0001% chance that nobody dies; 99.9999% chance that 27000000000 people die.
And of course the answer is obvious. Given a population of 40 billion, you’d have to be a monster to not pick 2. :)
In this case I am much less certain of my answer, but I’m leaning toward 2.
On the other hand, in the $ question, I am quite certain that I would rather have $24000. This makes me quite confident that nonlinear utility of money is my true rejection, thankyouverymuch.
This is a problem, because my intuitions don’t really listen to hypotheticals. So basically you’re giving my intuitions one problem (the one where my utility for money is nonlinear) and the rest of me another problem (the linear-utility hypothetical), which makes any conflict between my intuitions and the math uninformative.
Reminder: the Allais Paradox is not that people prefer 1A>1B, it’s that people prefer 1A>1B and 2B>2A. If you prefer 1A>1B and 2A>2B, that could be because you have non-linear utility for money, which is perfectly reasonable and non-paradoxical. Neither does “Shut up and multiply” have anything to do with linear utility functions for money.
You’re right and I think I touched on that a bit—people seem to see a larger difference between 100% and 99% than between 67% and 66%. Maybe I didn’t touch on that enough, though.
Just by observation, it seems that 100% probability simply tends to be weighed slightly more heavily—say, an extra 20%. I’d expect that for most people, there’s a point where they’d take the 99% over the 100%.
Sacrificing a guaranteed thing for an uncertain thing also has a different psychological weight, since if you lose, you now know you’re responsible for that loss—whereas with the 66% vs 67%, you can excuse it as “Well, I probably would have lost anyway”. This one is easily resolved by just modifying the problem so that you know what the result was, and thus if it came up 67 you know it’s your own fault.
100% certainty also has certain magical mathematical properties in Bayesian reasoning—it means there’s absolutely no possible way to update to anything less than 100%, whereas a 99% could later get updated by other evidence. And on the flip side of the coin, it requires infinite evidence to establish 100%, so it shouldn’t really exist to begin with.
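To make the two points in this exchange concrete, here is a sketch. It uses the exact 33/34 odds from the standard statement of the paradox (the 97% above is that number rounded), and it models the extra weight placed on certainty as a simple made-up bonus multiplier, not a fitted psychological model:

```python
# With the exact odds, no assignment of utilities to $24k and $27k can prefer
# 1A over 1B while also preferring 2B over 2A: set two is set one scaled by 0.34.
u24 = 1.0  # utility of $24,000, normalised; u27 is varied below

def prefers_24k_option(p24, p27, u27):
    return p24 * u24 > p27 * u27

for u27 in [1.01, 1.02, 1.03, 1.05, 1.10, 2.0]:
    set_one_picks_A = prefers_24k_option(1.00, 33 / 34, u27)  # 1A vs 1B
    set_two_picks_A = prefers_24k_option(0.34, 0.33, u27)     # 2A vs 2B
    assert set_one_picks_A == set_two_picks_A  # always agree: utilities alone give no paradox

# Now add an illustrative "certainty premium": certain outcomes get ~20% extra weight.
certainty_bonus, u27 = 1.2, 1.05
set_one_picks_A = 1.00 * certainty_bonus * u24 > (33 / 34) * u27  # 1A now wins...
set_two_picks_A = 0.34 * u24 > 0.33 * u27                         # ...but 2B still wins
print(set_one_picks_A, set_two_picks_A)  # True False -> the 1A-and-2B pattern
```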
The problem with set four is that money really, seriously, does not scale at those levels, and my neurology can’t really comprehend what “a million times the utility of $24K” would mean. If I ask myself “what is the smallest thing for which I would sacrifice $24K in exchange for a one-in-a-million chance at it”, then I’ll either get an answer, assign it that utility value, and take the bet, or find out that my neurology is incapable of evaluating utility on that scale. Either way it breaks the question. (For me, I’d sacrifice $24K for a one-in-a-million chance at a Friendly Singularity that leads to a proper Fun Eutopia.)
The expected payoff calculations say that. Expected utility calculations say nothing, since you haven’t specified a utility function. Nor can you say that choice 2 must be better just because U($14k) < U($17k) for any reasonable utility function, because the utility of the expected payoff is not equal to the expected utility.
EDIT: pretty much every occurrence of “expected utility” in this post should be replaced with “expected payoff”.
You’re right, but I was looking at the question in terms of the (bad) assumption of linear utility for money.
I think that my decision on sets three and four is almost entirely determined by how much faith I have in the fairness of the random number generator. I’ve seen it suggested on LW before, and I think it’s a good model, that the attractiveness of “certainty” reflects disbelief in the stated odds.
In three-card monte, your chances of picking the right card are not one in three.
Is this really the only motivator, though? What stated percentage would the offer need to quote before you’d believe you had a “real” 50%?
Depends on circumstance. If I can verify the RNG directly, pretty close to 50% plus transaction costs. If I think I have an opportunity to punish obvious defection, 100% (implying a non-stochastic outcome rule). If I have no recourse in case of defection, I would not pay anything to play regardless of stated odds.
Random musings:
I do seem to have some level of risk aversion when it comes to making these kinds of choices, but it’s not an extremely large level. The risk aversion seems to kick in most strongly when dealing with small probabilities of high payoffs. (In the “Set One” choice, I’d probably accept the 3% risk, though.)
There are plenty of values of X for which I would change my mind in this case. The expected value of the gamble is X * 0.9, and 24000 / 0.9 = 26666.67, so I’d take the riskier option if the payoff were $27,000 or above. (Why $27,000? No particular reason other than it’s convenient to round to.)
The value of X for which the expected utility is equal to $24,000 is 24000 / 24001 = 1 − 1/24001 = 0.999958…, which is very close to 1. Possessing mere bounded rationality, I might as well round it up to 1 and say that risking $24,000 for an extra $1, regardless of the offered odds, probably isn’t worth the time to set up and resolve the bet.
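A quick check of those break-even numbers, still under the linear-utility assumption:

```python
# "90% chance of $X": indifferent when 0.9 * X equals the certain 24,000.
x_breakeven = 24_000 / 0.9
print(x_breakeven)   # 26666.666... -> switch once the payoff is above about $26,667

# "X% chance of $24,001": indifferent when p * 24,001 equals the certain 24,000.
p_breakeven = 24_000 / 24_001
print(p_breakeven)   # 0.9999583... -> barely below certainty
```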
Now let’s look at set 4:
Choice 1: $24,000
Choice 2: A one-in-a-million chance of $27 billion
Well… this is where my risk aversion kicks in. The decision-making heuristic that gets invoked is “events with odds of one in a million don’t happen to me”, the probability gets rounded down to zero, and I take the $24,000 and double my current net worth instead of taking the lottery ticket. I don’t know if this makes me silly or not.
This is very close to how I feel about it—I’m really tempted to take Set Four, Choice 1 on the assumption that “events with odds of one in a million don’t happen to me” too, but I’m not sure if that’s just pure scope insensitivity or an actually rational strategy.
Personally, I would take choice one in both sets. But I think loss aversion trivially explains the paradox. In set one choice two outcome two, I would feel like a big loser. In set two choice two outcome two, not really.
Just imagine being sad in a room of 97 happy and 2 other sad people (set 1 choice 2 outcome 2), wishing you were in another room full of happy people. Set 2 choice 2 does not have this repulsiveness, the two rooms (choices) are very similar.
I think socially embedding the decision would actually help us understand the issue.
Say that people were going door-to-door making this offer. Which would you choose? Before you answer, consider this: everyone you know and everyone you will ever meet was given the same choice. Are you willing to be the one person in the room who “missed out on this golden opportunity”?
You probably feel uncomfortable; you’re probably already rubbing the Bayesian keys in your pocket to make your escape from the question. Because you know, even if you know you’re right, that this won’t look good. Talking about bias won’t help you: you have only three seconds to make a reply, and that’s just not enough time.
I think this is why even perfectly good, stone-cold rationalists will have trouble with the Allais Paradox. Part of you is making this social calculation as it goes.
But this might also make it meta rational: if you can’t deny the offer was made, it might be better to take the certain route if your social circle is more likely to respect you for it. The $3,000 might not be worth as much as the social reward.
Well yeah, but it’s a different question then. If I suggest you should fast for three days so the Sun God will make your crops flourish (or give you a raise, whatever’s applicable), you’re going to refuse. If you know your peers will stone you if you don’t fast, you’re going to accept, even though you still don’t believe in the Sun God.
It is a different question, which is the main feature of it. The problem seems to be that the Allais Paradox bothers people. By changing the question we can often get more traction than by throwing ourselves relentlessly at something we’re having difficulty accepting.
In such social situations, you should choose 1A and 2A, and have a consistent preference for certainty; there’s nothing irrational about a preference for certainty. The irrationality is choosing 1A and 2B.
But when you reframe it socially, taking 1A and 2B becomes rational: under 1A you don’t lose socially, under 2B you gain more money but will still have defenders at the party. All that matters in the social situation is whether you’ll meet the defender threshold.
Depends on whether it’s revealed that you lost because of a bad decision or not—if there were public lists of people who took 2B and rolled a 67, thus forfeiting all their winnings, then I think you’d be right back in the same situation. If it’s totally unknown then, yeah, that’s the same reasoning I used to take 1A and 2B internally − 1B doesn’t give me a convenient excuse to say the loss wasn’t really my fault, whereas with 2B I can rationalize that I was going to lose anyways and it therefore doesn’t feel as bad.
The paradox is adequately solved by noting the difference between claimed and actual probabilities. In other words, assume an agent promises to give you money with “97%” likelihood; the real probability is whatever the actual likelihood is, multiplied by the probability that the agent won’t defect on the deal, multiplied by the probability that nothing else will go wrong.
Admittedly, claimed certainty isn’t actual certainty either, but in practice “100%” tends to be much closer to 100% than “97%” to 97%.
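Here is a sketch of that idea. The 10% chance that any non-certain offer falls through is a deliberately large, made-up figure, used only to show the mechanism:

```python
# Discount every stated probability except "100%" by an assumed chance that
# the deal falls through; the 0.10 figure is purely illustrative.
p_falls_through = 0.10

def real_probability(stated):
    return stated if stated == 1.0 else stated * (1 - p_falls_through)

def expected_payoff(gamble):
    return sum(real_probability(p) * x for p, x in gamble)

print(expected_payoff([(1.00, 24_000)]))  # 24000.0
print(expected_payoff([(0.97, 27_000)]))  # 23571.0 -> now below the certain 24,000
print(expected_payoff([(0.34, 24_000)]))  # 7344.0
print(expected_payoff([(0.33, 27_000)]))  # 8019.0  -> 2B still ahead of 2A
# Under this trust discount, choosing 1A and 2B is no longer inconsistent.
```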
Could you elaborate? I don’t see how it solves the paradox. What percentage chance would you need to reasonably approximate a “real” 97%?
I am fairly confident that is my true rejection, considering that my utility is not even remotely close to linear with money on those scales. My intuitions regarding sets one and two do demonstrate certainty bias, but I can acknowledge it as irrational. I give my intuitions a rationality stamp of approval for their successful analysis of set four. The most similar mind to mine that has linear utility with money is not very similar to me at all (I’d imagine it bears more resemblance to Clippy), so I won’t speak for it as “I”, but I assume that it would take option 2.
Edit: It is conceivable that I could find myself in a situation in which I had a better use for a 10^-6 chance of getting $27 billion than a guaranteed $24000. If I was in such a situation and realized it, I would choose option 2.
I disagree. It took me a long time to figure out why I wouldn’t take Pascal’s Mugging, but I eventually did: Rule utilitarianism. If you generalize the rule “pay off Pascal’s mugger” it becomes clear that anyone who recognizes that you operate this way will be able to abuse you—clearly a rule where you accept the mugging has negative consequences for society, because if everyone operated that way, it would lead to a complete collapse of society. Simply put, a society where people accept the mugging is not sustainable.
But this situation is different. Generalizing the rule will not lead to the destruction of societal order, and I think the rational choice is to take the 0.0001% (or whatever it was) chance of $27 billion.
If you think “don’t pay off Pascal’s Muggers” is a rule to follow in rule utilitarianism, then you think it’s a rule that maximizes utility, and therefore you’ve already found a reason why paying off Pascal’s Muggers is bad utility before even considering rule utilitarianism. Therefore, I don’t think “it’s a rule in my rule utilitarianism to not do this” is your true rejection to Pascal’s Mugging.