It’s true that diminishing marginal utility can produce some degree of risk-aversion. But there’s good reason to think that no plausible utility function can produce the risk-aversion we actually see—there are theorems along the lines of “if your utility function makes you prefer X to Y then you must also prefer A to B” where pretty much everyone prefers X to Y and pretty much no one prefers A to B.
[EDITED to add:] Ah, found the specific paper I had in mind: “Diminishing Marginal Utility of Wealth Cannot Explain Risk Aversion” by Matthew Rabin. An example from the paper: if you always turn down a 50⁄50 bet where you could either lose $10 or gain $10.10, and if the only reason is the shape of your utility function, then you should also always turn down a 50⁄50 bet where you could either lose $1000 or gain all the money in the world. (However much money there is in the world.)
I didn’t believe that claim, so I looked at the paper. The key piece is that you must always turn down the 50⁄50 lose $10/gain $10.10 bet, no matter how much wealth you have—i.e. even if you had millions or billions of dollars, you’d still turn down the small bet. Given that assumption, I think the real-world applicability is somewhat more limited than the paper’s abstract suggests.
That said, there are multiple independent lines of evidence in various contexts suggesting that humans’ degree of risk-aversion is too strong to be accounted for by diminishing marginals alone, so I do still think that’s true.
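For what it’s worth, here is a minimal numerical check of how little small-stakes risk aversion an ordinary diminishing-marginal-utility function produces. It uses log utility purely as a stand-in for “plausible utility function”, and the wealth levels in the loop are arbitrary; the upshot is that a log-utility expected-utility maximizer rejects the 50/50 lose $10 / gain $10.10 bet only when total wealth is below $1,010, and accepts it at every wealth level above that.

```python
import math

def log_utility_accepts(wealth, lose=10.00, gain=10.10):
    """Does a log-utility EU maximizer with this wealth prefer the 50/50
    lose/gain bet to doing nothing?"""
    eu_bet = 0.5 * math.log(wealth + gain) + 0.5 * math.log(wealth - lose)
    return eu_bet > math.log(wealth)

# Arbitrary illustrative wealth levels (not data from the paper).
for w in [100, 1_000, 1_005, 1_015, 10_000, 1_000_000]:
    verdict = "accepts" if log_utility_accepts(w) else "rejects"
    print(f"wealth ${w:>9,}: {verdict} the lose $10 / gain $10.10 bet")
```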
The paper has some more sophisticated examples that make less stringent assumptions. Here are a couple. “Suppose, for instance, we know a risk-averse person turns down 50-50 lose $100/gain $105 bets for any lifetime wealth level less than (say) $350,000, but know nothing about her utility function for wealth levels above $350,000, except that it is not convex. Then we know that from an initial wealth level of $340,000 the person will turn down a 50-50 bet of losing $4,000 and gaining $635,670. If we only know that a person turns down lose $100/gain $125 bets when her lifetime wealth is below $100,000, we also know she will turn down a 50-50 lose $600/gain $36 billion bet beginning from a lifetime wealth of $90,000.”
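Here is a rough way to see the force of the first example numerically. This is my own back-of-the-envelope sketch rather than the paper’s proof: it assumes a CRRA utility function u(w) = w^(1−γ)/(1−γ) purely for illustration (the paper’s theorem needs only concavity), takes the $340,000 wealth level from the quoted example, and uses bisection over an assumed bracket to find the smallest risk-aversion coefficient γ at which such an agent rejects the 50-50 lose $100/gain $105 bet. That γ comes out implausibly large (well over 100), and with that much curvature the agent also rejects the lose $4,000/gain $635,670 bet; in fact no finite gain compensates a $4,000 loss, because utility is then bounded above.

```python
W0 = 340_000.0  # initial lifetime wealth from the quoted example

def crra_u(x, gamma):
    """CRRA utility of wealth expressed as a multiple x of W0 (requires gamma > 1)."""
    return x ** (1.0 - gamma) / (1.0 - gamma)

def eu_of_bet(gamma, lose, gain):
    """Expected utility of a 50/50 lose/gain bet taken at wealth W0."""
    return 0.5 * crra_u(1 + gain / W0, gamma) + 0.5 * crra_u(1 - lose / W0, gamma)

def rejects(gamma, lose, gain):
    return eu_of_bet(gamma, lose, gain) < crra_u(1.0, gamma)

# Bisect for the smallest gamma that rejects lose $100 / gain $105 at W0.
lo, hi = 1.5, 2_000.0  # assumed bracket: lo accepts the bet, hi rejects it
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if rejects(mid, 100, 105):
        hi = mid
    else:
        lo = mid
gamma_star = hi

print(f"Smallest CRRA gamma rejecting lose $100 / gain $105 at $340k: {gamma_star:.1f}")
print("Rejects lose $4,000 / gain $635,670:", rejects(gamma_star, 4_000, 635_670))

# Utility is bounded above (u -> 0 as wealth -> infinity when gamma > 1), so the
# most any upside can ever add is u(infinity) - u(W0); compare that to the
# utility cost of losing $4,000.
max_upside = 0.0 - crra_u(1.0, gamma_star)
downside = crra_u(1.0, gamma_star) - crra_u(1 - 4_000 / W0, gamma_star)
print("Some finite gain could compensate a $4,000 loss:", max_upside > downside)
```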
Bets have fixed costs to them in addition to the change in utility from the money gained or lost. The smaller the bet, the more those fixed costs dominate. And at some point, even the hassle of just figuring out that the bet is a good deal dwarfs the gain in utility from the bet. You may be better off arbitrarily refusing to take all bets below a certain threshold, because you gain from not having the overhead. Even if you lose out on some good bets by having such a policy, you also spend less overhead on bad bets, which makes up for that loss.
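To put rough numbers on that (a toy sketch; the 1% edge matches the lose $10 / gain $10.10 bet, and the fifty-cent overhead figure is an arbitrary assumption): the small bet has an expected gain of only $0.05, so any fixed per-bet cost above five cents already makes it a net loss, and a fixed cost plus a proportional edge implies a stake below which no bet of this kind is worth taking.

```python
def net_expected_value(stake, edge_rate, overhead):
    """Expected profit of a 50/50 bet that loses `stake` or wins
    stake * (1 + edge_rate), minus a fixed per-bet overhead cost."""
    expected_gain = 0.5 * stake * (1 + edge_rate) - 0.5 * stake  # = 0.5 * stake * edge_rate
    return expected_gain - overhead

EDGE = 0.01      # the $10 vs $10.10 bet has a 1% edge on the stake
OVERHEAD = 0.50  # assumed fixed cost per bet, in dollars (hassle, time, ...)

min_worthwhile_stake = OVERHEAD / (0.5 * EDGE)
print(f"Under these assumptions, refuse bets risking less than ~${min_worthwhile_stake:.0f}")
for stake in [10, 100, 1_000]:
    print(f"stake ${stake:>5,}: net EV ${net_expected_value(stake, EDGE, OVERHEAD):+.2f}")
```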
The fixed costs also change arbitrarily; if I have to go to the ATM to get more money because I lost a $10.00 bet, the disutility from that is probably going to dwarf any utility I get from a $0.10 profit, but whether the ATM trip is necessary is essentially random.
Of course you could model those fixed costs as a reduction in utility, in which case the utility function is indeed no longer logarithmic, but you need to be very careful about what conclusions you draw from that. For instance, you can’t exploit such fixed costs to money pump someone.
Yup, I agree with all that, and I think it is one of the reasons for (at least some instances of) loss aversion. I wonder whether there have been attempts to probe loss aversion in ways that get around this issue, maybe by asking subjects to compare scenarios that somehow both have the same overheads.
Possibly relevant in the context of Kelly betting / maximizing log wealth.
“if you always turn down a 50⁄50 bet where you could either lose $10 or gain $10.10, and if the only reason is the shape of your utility function, then you should also always turn down a 50⁄50 bet where you could either lose $1000 or gain all the money in the world. (However much money there is in the world.)”

Is the idea supposed to be that humans always turn down such a bet?
The idea is supposed to be that turning down the first sort of bet looks like ordinary risk aversion, the phenomenon that some people think concave utility functions explain; but that if the explanation is the shape of the utility function, then those same people who turn down the first sort of bet—which I think a lot of people do—should also turn down the second sort of bet, even though it seems clear that a lot of those people would not turn down a bet that gave them a 50% chance of losing $1k and a 50% chance of winning Jeff Bezos’s entire fortune.
(I personally would probably turn down a 50-50 bet between gaining $10.10 and losing $10.00. My consciously-accessible reasons aren’t about losing $10 feeling like a bigger deal than gaining $10.10, they’re about the “overhead” of making the bet, the possibility that my counterparty doesn’t pay up, and the like. And I would absolutely take a 50-50 bet between losing $1k and gaining, say, $1M, again assuming that it had been firmly enough established that no cheating was going on.)
But would you continue turning down such bets no matter how big your bankroll is? A serious investor can have a lot of automated systems in place to reduce the overhead of transactions. For example, running a casino can be seen as an automated system for accepting bets with a small edge.
(Similarly, you might not think of a millionaire as having time to sell you a ballpoint pen with a tiny profit margin. But a ballpoint pen company is a system for doing so, and a millionaire might own one.)
If you were playing some kind of stock/betting market, you would be wise to write a script to accept such bets up to the Kelly limit, if you could do so.
Also see my reply to koreindian.
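For concreteness, here is what that limit looks like for the bet upthread. This is my own sketch; the formula is the standard Kelly fraction for an even-odds bet, and the numbers are only illustrative: the lose $10 / gain $10.10 bet has a full-Kelly stake of about 0.5% of bankroll, so a $10 stake is within the full-Kelly limit once the bankroll is above roughly $2,000.

```python
def kelly_fraction(p_win, b):
    """Kelly-optimal fraction of bankroll for a bet that pays b per unit
    staked with probability p_win and loses the stake otherwise."""
    return p_win - (1 - p_win) / b

b = 10.10 / 10.00  # payout ratio of the lose $10 / gain $10.10 bet
f_star = kelly_fraction(0.5, b)
print(f"Full-Kelly fraction: {f_star:.4%} of bankroll")
print(f"Bankroll at which a $10 stake is within full Kelly: ${10 / f_star:,.0f}")
```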
My bankroll is already enough bigger than $10.10 that shortage of money isn’t the reason why I would not take that bet.
I might well take a bet composed of 100 separate $10/$10.10 bets (I’d need to think a bit about the actual distribution of wins and losses before deciding) even though I wouldn’t take one of them in isolation, but that’s a different bet.
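Out of curiosity, a quick look at that distribution (a sketch assuming 100 independent 50/50 bets, each losing $10 or winning $10.10): the expected net gain is $5, but there is still roughly a 46% chance of finishing behind, with a worst case of minus $1,000.

```python
from math import comb

N, LOSE, GAIN = 100, 10.00, 10.10

def p_wins(k):
    """Probability of exactly k wins out of N fair coin flips."""
    return comb(N, k) / 2 ** N

def net(k):
    """Net dollar result after k wins and N - k losses."""
    return k * GAIN - (N - k) * LOSE

expected_net = sum(p_wins(k) * net(k) for k in range(N + 1))
p_behind = sum(p_wins(k) for k in range(N + 1) if net(k) < 0)

print(f"Expected net gain over {N} bets: ${expected_net:.2f}")
print(f"Probability of a net loss: {p_behind:.1%}")
print(f"Worst case: -${N * LOSE:,.0f}")
```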
Yes, many humans exhibit the former betting behavior but not the latter. Rabin argues that an EU maximizer doing the former will do the latter. Hence, we need to think of humans as something other than EU maximizers.
OK.
But humans who work the stock market would write code to vacuum up 1000-to-1010 investments as fast as possible, to take advantage of them before others, so long as they were small enough compared to the bankroll to be approved of by fractional Kelly betting.
Unless the point is that they’re so small that it’s not worth the time spent writing the code. But then the explanation seems to be perfectly reasonable attention allocation. We could model the attention allocation directly, or we could model them as utility maximizers up to epsilon—like, they don’t reliably pick up expected utility when it’s under $20 or so.
I’m not contesting the overall conclusion that humans aren’t EV maximizers, but this doesn’t seem like a particularly good argument.