Don Geddis, see addendum above. When you start out by saying, “Assume utility is linear in money,” you beg the question.
There are three major reasons not to like volatility:
1) Not every added dollar is as useful as the last one. (When this rule is violated, you like volatility: If you need $15,000 for a lifesaving operation, you would want to double-or-nothing your $10,000 at 50⁄50 odds. See the sketch below.)
2) Your investment activity has a boundary at zero, or at minus $10,000, or wherever; once you lose enough money you can no longer invest. If you random-walk a linear graph, you will eventually hit zero; random-walking a logarithmic graph never hits zero. This is why the hit from $100 to $0 is much worse than the hit from $200 to $100: at zero you have nothing left to invest.
Both of these points imply that utility is not linear in money.
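To make both points concrete, here is a small Python sketch. The step utility function and the walk parameters (a fair coin, ±$10 per step versus ±1% per step, and the number of trials) are illustrative assumptions of mine, not anything from the original setup; the only figures taken from above are the $10,000, $15,000, and 50⁄50 numbers in point 1.

```python
import random

# Point 1: with a threshold ("step") utility -- you only care whether you can
# afford the $15,000 operation -- double-or-nothing beats keeping the $10,000.
def step_utility(wealth):
    return 1.0 if wealth >= 15_000 else 0.0

eu_keep = step_utility(10_000)                                   # 0.0
eu_gamble = 0.5 * step_utility(20_000) + 0.5 * step_utility(0)   # 0.5
print("keep:", eu_keep, " gamble:", eu_gamble)

# Point 2: an additive (linear-graph) random walk eventually hits the zero
# boundary and stops; a multiplicative (log-graph) walk never reaches zero.
random.seed(0)

def additive_walk(start=100.0, step=10.0, n=100_000):
    w = start
    for _ in range(n):
        w += step if random.random() < 0.5 else -step
        if w <= 0:
            return 0.0           # absorbed: nothing left to invest
    return w

def multiplicative_walk(start=100.0, pct=0.01, n=100_000):
    w = start
    for _ in range(n):
        w *= (1 + pct) if random.random() < 0.5 else (1 - pct)
    return w                     # shrinks or grows, but never hits zero

absorbed = sum(additive_walk() == 0.0 for _ in range(200))
print(f"additive walks absorbed at zero: {absorbed}/200")
print(f"multiplicative walk after 100,000 steps: {multiplicative_walk():.4f}")
```

Under the step utility the gamble has expected utility 0.5 against 0 for standing pat, which is the point of the parenthetical in 1); and most of the additive walks end absorbed at zero, while the multiplicative walk merely shrinks.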
3) You can have opportunities to prepare in advance for known events, which changes the expected utility of those events. For example, if you know for certain that you’ll get $24,000 five years from now, then you can borrow $18,000 today at 6% interest and be confident of paying back the loan. Note that this action induces a sharp utility gradient in the vicinity of $24,000. It doesn’t generate an Allais Paradox unless the Allais payoff is far enough in the future that you have an opportunity to take an additional advance action in scenario 1 that is absent in scenario 2.
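For what it’s worth, the arithmetic in that example roughly checks out; which interest convention is meant is my assumption, and the $18,000 is evidently a round figure:

```python
future_payment = 24_000   # the certain payoff, five years out
rate = 0.06
years = 5

# With simple interest, an $18,000 loan is comfortably repayable out of the $24,000:
print(18_000 * (1 + rate * years))            # 23400.0

# With annual compounding, the largest loan you could certainly repay is the
# present value of the payoff -- again roughly $18,000:
print(future_payment / (1 + rate) ** years)   # ~17934.2
```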
(Incidentally, that opportunity to take additional advance actions when the Allais payoff is far enough in the future is by far the strongest argument I can think of for trying to attach a normative interpretation to the Allais Paradox. And come to think of it, I don’t remember ever hearing it pointed out before.)