if you always turn down a 50⁄50 bet where you could either lose $10 or gain $10.10, and if the only reason is the shape of your utility function, then you should also always turn down a 50⁄50 bet where you could either lose $1000 or gain all the money in the world. (However much money there is in the world.)
Is the idea supposed to be that humans always turn down such a bet?
The idea is supposed to be that turning down the first sort of bet looks like ordinary risk aversion, the phenomenon that some people think concave utility functions explain. But if the explanation is the shape of the utility function, then the same people who turn down the first sort of bet (and I think a lot of people do) should also turn down the second sort of bet, even though it seems clear that a lot of those people would not turn down a bet that gave them a 50% chance of losing $1k and a 50% chance of winning Jeff Bezos’s entire fortune.
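To spell out the mechanism: rejecting the small bet at wealth w means u(w + 10.10) − u(w) ≤ u(w) − u(w − 10), and with a concave u that forces the marginal utility of money to shrink by about 1% (a factor of at most 10/10.10) over each successive $20.10 of wealth at which the bet is still rejected. Iterating that bound gives a convergent geometric series, so the utility of any amount of additional money is capped. A rough numerical sketch of the iteration (illustrative only, not Rabin’s exact constants):

```python
# Illustrative sketch of the Rabin-style iteration (not his exact calibration):
# rejecting the 50-50 lose-$10 / gain-$10.10 bet at wealth w, with concave u,
# gives u'(w + 10.10) <= (10 / 10.10) * u'(w - 10).  If the bet is rejected at
# every wealth level, the same bound applies again every $20.10 higher up, so
# the per-dollar value of additional money shrinks geometrically.
lose, gain = 10.00, 10.10
decay = lose / gain              # at most this ratio per $20.10 step (about 0.990)
step = lose + gain               # $20.10

marginal = 1.0                   # normalize marginal utility near current wealth
utility_bound = 0.0              # running upper bound on the utility of further gains
for _ in range(100_000):         # covers roughly $2 million of hypothetical gains
    marginal *= decay
    utility_bound += step * marginal

# The series converges to about step * decay / (1 - decay), i.e. all further money
# is worth at most what roughly $2,000 is worth at the current margin; losses get
# amplified by the mirror-image argument, which is how Rabin reaches the
# lose-$1,000-versus-anything conclusion quoted above.
print(f"utility bound on ~$2M of gains: about {utility_bound:,.0f} marginal dollars")
```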
(I personally would probably turn down a 50-50 bet between gaining $10.10 and losing $10.00. My consciously-accessible reasons aren’t about losing $10 feeling like a bigger deal than gaining $10.10, they’re about the “overhead” of making the bet, the possibility that my counterparty doesn’t pay up, and the like. And I would absolutely take a 50-50 bet between losing $1k and gaining, say, $1M, again assuming that it had been firmly enough established that no cheating was going on.)
But would you continue turning down such bets no matter how big your bankroll is? A serious investor can have a lot of automated systems in place to reduce the overhead of transactions. For example, running a casino can be seen as an automated system for accepting bets with a small edge.
(Similarly, you might not think of a millionaire as having time to sell you a ballpoint pen with a tiny profit margin. But a ballpoint pen company is a system for doing so, and a millionaire might own one.)
If you were playing some kind of stock/betting market, you would be wise to write a script to accept such bets up to the Kelly limit, if you could do so.
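For concreteness, here is roughly what “up to the Kelly limit” means for a scaled-up version of this bet (risk a stake S for a 50% chance to win 1.01·S). A minimal sketch; the kelly_fraction helper and the bankroll figure are just for illustration:

```python
def kelly_fraction(p_win: float, b: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a bet that pays
    b times the stake on a win (probability p_win) and loses the stake
    otherwise:  f* = p - (1 - p) / b."""
    return p_win - (1.0 - p_win) / b

# The 50-50 lose-$10 / gain-$10.10 bet, scaled up: risk a stake S to win 1.01 * S.
p, b = 0.5, 10.10 / 10.00
f_full = kelly_fraction(p, b)          # about 0.00495, i.e. roughly 0.5% of bankroll
f_half = 0.5 * f_full                  # a common fractional-Kelly safety margin

bankroll = 100_000                     # hypothetical figure, purely illustrative
print(f"full-Kelly stake: ${bankroll * f_full:,.2f}")   # about $495
print(f"half-Kelly stake: ${bankroll * f_half:,.2f}")   # about $248
# A bet-accepting script would cap each stake at (a fraction of) this limit,
# recomputed against the current bankroll.
```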
Also see my reply to koreindian.
My bankroll is already enough bigger than $10.10 that shortage of money isn’t the reason why I would not take that bet.
I might well take a bet composed of 100 separate $10/$10.10 bets (I’d need to think a bit about the actual distribution of wins and losses before deciding) even though I wouldn’t take one of them in isolation, but that’s a different bet.
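For reference, the distribution is easy to work out: with W wins out of 100 (binomial, p = 1/2), the net payoff is 20.10·W − 1000, so the expected net is +$5, the standard deviation is about $100, and the worst case is −$1,000. A quick standard-library check of the numbers (purely illustrative):

```python
from math import comb

n, gain, loss = 100, 10.10, 10.00       # 100 independent 50-50 bets
# Net payoff with W wins out of n:  gain*W - loss*(n - W)  =  20.10*W - 1000

mean_net = n * 0.5 * (gain - loss)                              # +$5.00
var_one = 0.5 * gain**2 + 0.5 * loss**2 - (0.5 * (gain - loss))**2
sd_net = (n * var_one) ** 0.5                                   # about $100.50
p_down = sum(comb(n, k) for k in range(50)) / 2**n              # net < 0 iff W <= 49

print(f"expected net:          ${mean_net:+.2f}")
print(f"standard deviation:    ${sd_net:.2f}")
print(f"chance of ending down: {p_down:.1%}")                   # roughly 46%
print(f"worst case (0 wins):   -${n * loss:,.2f}")              # -$1,000.00
```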
Yes, many humans exhibit the former betting behavior but not the latter. Rabin argues that an EU maximizer doing the former will do the latter. Hence, we need to think of humans as something other than EU maximizers.
OK.
But humans who work the stock market would write code to vacuum up 1000-to-1010 investments as fast as possible, to take advantage of them before others, so long as they were small enough compared to the bankroll to be approved of by fractional Kelly betting.
Unless the point is that they’re so small that it’s not worth the time spent writing the code. But then the explanation seems to be perfectly reasonable attention allocation. We could model the attention allocation directly, or we could model them as utility maximizers up to epsilon—like, they don’t reliably pick up expected utility when it’s under $20 or so.
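As a toy sketch of that second option (the $20 cutoff is the figure from the sentence above; the example bets are hypothetical):

```python
# Toy version of "utility maximizer up to epsilon": the agent reliably acts on
# an opportunity only when its expected value clears a small attention threshold.
# The $20 threshold echoes the figure above; the example bets are hypothetical.
EPSILON = 20.00

def bothers_with(expected_value: float) -> bool:
    return expected_value > EPSILON

ev_single = 0.5 * 10.10 - 0.5 * 10.00       # +$0.05 for one lose-$10 / gain-$10.10 bet
print(bothers_with(ev_single))              # False: too small to be worth attention
print(bothers_with(1_000 * ev_single))      # True: $50 of expected value gets picked up
```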
I’m not contesting the overall conclusion that humans aren’t EU maximizers, but this doesn’t seem like a particularly good argument.