If someone reports inconsistent preferences in the Allais paradox, they’re violating the axiom of independence and are vulnerable to a Dutch Book. How would you actually do that? What combination of bets should they accept that would yield a guaranteed loss for them?
There is a demonstration of exactly this in Eliezer’s post from 2008 about the Allais paradox.
(Eliezer modified the numbers a bit, compared with other statements of the Allais paradox that I’ve seen. I don’t think this makes a substantial difference to what’s going on.)
In Eliezer’s formulation, I pay him two cents, and then he pays me tens of thousands of dollars. That doesn’t sound like a very convincing exploit.
And if we move the payoffs closer to zero, I expect that the paradox disappears.
The point of the Allais paradox is less about how humans violate the axiom of independence and more about how our utility functions are nonlinear, especially with respect to infinitesimal risk.
There is an existing Dutch Book for eliminating infinitesimal risk, and it’s called insurance.
Yyyyes and no. Our utility functions are nonlinear, especially with respect to infinitesimal risk, but this is not inherently bad. There’s no reason for our utility to be everywhere linear with wealth: in fact, it would be very strange for someone to equally value “Having $1 million” and “Having $2 million with 50% probability, and having no money at all (and starving on the street) otherwise”.
Insurance does take advantage of this, and it’s weird in that both the insurance salesman and the buyers of insurance end up better off in expected utility, but it’s not a Dutch Book in the usual sense: it doesn’t guarantee either side a profit.
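To make the mutual gain concrete, here is a sketch with hypothetical numbers: a log-utility buyer with $100,000 facing a 1% chance of a $90,000 loss, and a risk-neutral insurer charging a $1,500 premium, above the $900 expected loss.

```python
import math

# Hypothetical numbers: a log-utility buyer and a risk-neutral insurer.
wealth = 100_000   # buyer's wealth in dollars
loss = 90_000      # size of the insurable loss
p_loss = 0.01      # probability of the loss
premium = 1_500    # premium, above the $900 expected loss

# Buyer's expected utility under concave (risk-averse) log utility:
eu_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)
eu_insured = math.log(wealth - premium)

# Insurer's expected profit (risk-neutral):
insurer_profit = premium - p_loss * loss

print(eu_insured > eu_uninsured)  # True: the buyer gains in expected utility
print(insurer_profit)             # 600.0: the insurer gains in expectation
```

The buyer gives up expected money for certainty and still comes out ahead in expected utility, because log utility is concave; the insurer gains in expectation across many such policies. But neither side is guaranteed a profit on any single policy.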
The Allais paradox points out that people are not only averse to risk, but also inconsistent in how they are averse to it. The utility function U(X cents) = X is not risk-averse, and it picks gambles 1B and 2B (in Wikipedia’s notation), since those have the higher expected values. The utility function U(X cents) = log X is extremely risk-averse, and it picks gambles 1A and 2A (reading “nothing” as being left with a residual cent, so that the log is defined). Picking gambles 1A and 2B, on the other hand, cannot be described by any utility function.
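With Wikipedia’s payoffs, it’s easy to check directly which gambles each of these utility functions picks (a sketch; “nothing” is floored at one cent so that the logarithm is defined):

```python
import math

# Payoffs in cents (Wikipedia's formulation); "nothing" is floored at
# one cent so that log utility is defined.
GAMBLES = {
    "1A": [(1.00, 100_000_000)],
    "1B": [(0.89, 100_000_000), (0.10, 500_000_000), (0.01, 1)],
    "2A": [(0.11, 100_000_000), (0.89, 1)],
    "2B": [(0.10, 500_000_000), (0.90, 1)],
}

def expected_utility(name, u):
    return sum(p * u(x) for p, x in GAMBLES[name])

def pick(a, b, u):
    """Return whichever of gambles a, b has higher expected utility under u."""
    return a if expected_utility(a, u) >= expected_utility(b, u) else b

linear = lambda x: x  # risk-neutral
log_u = math.log      # risk-averse

print(pick("1A", "1B", linear), pick("2A", "2B", linear))  # 1B 2B
print(pick("1A", "1B", log_u), pick("2A", "2B", log_u))    # 1A 2A
```

No single utility function, however, produces the pair 1A and 2B that people actually report.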
There’s a Dutch book for the Allais paradox in this post; see the discussion that follows the words “money pump”.
I didn’t mean to imply nonlinear utility functions are bad. It’s just how humans are.
Picking gambles 1A and 2B, on the other hand, cannot be described by any utility function.

Prospect theory describes this, and even has a post here on LessWrong. My understanding is that humans have both a nonlinear utility function and a nonlinear risk function. This seems like a useful safeguard against imperfect risk estimation.
[Insurance is] not a Dutch Book in the usual sense: it doesn’t guarantee either side a profit.

If you set up your books correctly, then it is guaranteed. A Dutch book doesn’t need to work with only one participant; in fact, many Dutch books work only on populations rather than individuals, in the same way that insurance only guarantees a profit when properly spread across groups.
Insurance makes a profit in expectation, but an insurance salesman does have some tiny chance of bankruptcy, though I agree that this is not important. What is important, however, is that an insurance buyer is not guaranteed a loss, which is what distinguishes it from other Dutch books for me.
Prospect theory and similar ideas come close to explaining why the Allais paradox occurs. (That is, why humans pick gambles 1A and 2B, even though this is inconsistent.) But, to my knowledge, utility theory is both a (bad) model of humans and a guide to how decisions should be made, whereas prospect theory is a better model of humans that often describes errors in reasoning.
(That is, I’m sure it prevents people from doing really stupid things in some cases. But for small bets, it’s probably a bad idea; Kahneman suggests training yourself out of it by thinking ahead to how many such bets you’ll make over a lifetime. In that frame of mind, the aversion to risk is much less of a factor.)
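Kahneman’s reframing can be made concrete with a hypothetical bet that wins $200 or loses $100 on a fair coin: a single such bet loses half the time, but the chance of being behind after a lifetime of them is tiny.

```python
from math import comb

def prob_net_loss(n, win=200, lose=100):
    """Probability of being behind after n independent fair-coin bets
    that pay +win or -lose (hypothetical stakes)."""
    # The net result is negative exactly when win * heads < lose * (n - heads).
    losing = sum(comb(n, h) for h in range(n + 1) if win * h < lose * (n - h))
    return losing / 2 ** n

print(prob_net_loss(1))    # 0.5: a single bet loses half the time
print(prob_net_loss(100))  # well under 1%: a lifetime of such bets almost never does
```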
You get them to pay you for one, in terms of the other. People will pay you for a small chance of a big payoff in units of a medium chance of medium payoff. People will pay you for the certainty of a moderate reward by giving up a higher reward with a small chance of failure. All of the good examples of this I can think of are already well-populated business models, but I didn’t try very hard so you can probably find some unexploited ones.
I get that you can do this in principle, but in the specific case of the Allais Paradox (and going off the Wikipedia setup and terminology), if someone prefers options 1B and 2A, what specific sequence of trades do you offer them? It seems like you’d give them 1A, then go 1A → 1B → (some transformation of 1B formally equivalent to 2B) → 2A → (some transformation of 2A formally equivalent to 1A’) → 1B’ → … in perpetuity, but what are the “(some transformation of [X] formally equivalent to [Y])” in this case?
You can stagger the bets and offer either a 1A → 1B → 1A circle or a 2B → 2A → 2B circle.
Suppose the bets are implemented in two stages. In stage 1 you have an 89% chance of the independent payoff ($1 million for bets 1A and 1B, nothing for bets 2A and 2B) and an 11% chance of moving to stage 2. In stage 2 you either get $1 million (for bets 1A and 2A) or a 10⁄11 chance of getting $5 million.
Then suppose someone prefers a 10⁄11 chance of $5 million (bet 3B) to a sure $1 million (bet 3A), prefers 2A to 2B, and currently holds 2B in this staggered form. You do the following:
Trade them 2A for 2B+$1.
Play stage 1. If they don’t move on to stage 2, they’re down $1 from where they started. If they do move on to stage 2, they now have bet 3A.
Trade them 3B for 3A+$1.
Play stage 2.
The net effect of those trades is that they still played gamble 2B but gave you a dollar or two. If they prefer 3A to 3B and 1B to 1A, you can do the same thing to get them to circle from 1A back to 1A. It’s not the infinite cycle of losses you mention, but it is a guaranteed loss.
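For concreteness, here is a sketch of one run of this staggered pump (the function name and the $1 fees are illustrative, following the trades described above):

```python
import random

def run_pump(rng):
    """One run of the staggered money pump against someone who prefers
    2A to 2B and 3B to 3A, and who starts out holding staggered bet 2B.
    Returns (their_payout, fees_they_paid_us)."""
    fees = 1  # they pay us $1 to trade their 2B for 2A
    # Stage 1: with probability 89% the gamble ends here with bet 2's
    # independent payoff of nothing; otherwise we move on to stage 2.
    if rng.random() >= 0.11:
        return 0, fees
    # Stage 2: bet 2A has become a sure $1M, i.e. bet 3A. They pay us
    # another $1 to trade 3A for 3B, a 10/11 chance of $5M.
    fees += 1
    payout = 5_000_000 if rng.random() < 10 / 11 else 0
    return payout, fees

# Whatever happens, they end up holding bet 2B's lottery (an overall
# 10% chance of $5M) while having paid us a dollar or two on the side.
rng = random.Random(0)
for _ in range(1_000):
    payout, fees = run_pump(rng)
    assert payout in (0, 5_000_000) and fees in (1, 2)
```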
“People will pay you” is true if they can be different people, but I really doubt that you can get the same person to keep paying you over and over through many cycles. They will remember the history, and that will affect their later behavior.
My cards on the table: Allais was right. The collection of VNM axioms, taken as a whole, is rationally non-binding.