But humans who work the stock market would write code to vacuum up 1000-to-1010 investments as fast as possible, to take advantage of them before others, so long as they were small enough compared to the bankroll to be approved of by fractional Kelly betting.
Unless the point is that they’re so small that it’s not worth the time spent writing the code. But then the explanation seems to be perfectly reasonable attention allocation. We could model the attention allocation directly, or we could model them as utility maximizers up to epsilon—like, they don’t reliably pick up expected utility when it’s under $20 or so.
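To make the "small enough compared to the bankroll" condition concrete, here's a minimal sketch of fractional Kelly sizing. It assumes the 1000-to-1010 investment is a 50/50 gamble (lose $1,000 or win $1,010), which is my reading of the example rather than something stated here, and the half-Kelly fraction is just an illustrative choice.

```python
# Kelly sizing for a binary bet: f* = p - q/b, where b is the win/loss ratio.
# Assumed framing (not from the comment): 50% chance to lose $1,000, 50% to win $1,010.

def kelly_fraction(p_win: float, win_amount: float, loss_amount: float) -> float:
    """Full-Kelly fraction of bankroll to stake on a binary bet."""
    b = win_amount / loss_amount  # net odds received per unit staked
    q = 1.0 - p_win
    return p_win - q / b

full_kelly = kelly_fraction(0.5, 1010.0, 1000.0)  # ~0.00495, about 0.5% of bankroll
half_kelly = 0.5 * full_kelly                     # a common fractional-Kelly choice

# A $1,000 stake stays within half-Kelly only if the bankroll is at least
# 1000 / half_kelly -- roughly $400,000 -- i.e. "small enough compared to the bankroll".
min_bankroll = 1000.0 / half_kelly
print(f"full Kelly: {full_kelly:.4%}, half Kelly: {half_kelly:.4%}, "
      f"min bankroll for a $1,000 stake at half-Kelly: ${min_bankroll:,.0f}")
```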
I’m not contesting the overall conclusion that humans aren’t EV maximizers, but this doesn’t seem like a particularly good argument.
OK.