It bankrupts you with probability 1 − 0.6^300, but in the other 0.6^300 of cases you get a sweet sweet $25 × 2^300. This nets you an expected $1.42 × 10^25.
Whereas Kelly betting only has an expected value of $25 × (0.6×1.2 + 0.4×0.8)^300 = $3,220,637.15.
Obviously humans don’t have linear utility functions, but my point is that the Kelly criterion still isn’t the right answer when you make the assumptions more realistic. You actually have to do the calculation with the actual utility function.
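For anyone who wants to check the arithmetic, here is a minimal Python sketch, assuming the game as described above ($25 bankroll, a 60% chance of winning each even-money bet, 300 rounds):

```python
# Minimal sketch of the arithmetic above, assuming a $25 bankroll, a 60% chance
# of winning each even-money bet, and 300 rounds.
p, rounds, bankroll = 0.6, 300, 25.0

# All-in every round: you survive only if every bet wins (probability 0.6^300),
# in which case your wealth has doubled 300 times.
p_survive = p ** rounds
ev_all_in = bankroll * (2 ** rounds) * p_survive              # = 25 * 1.2^300 ≈ 1.42e25

# Kelly: bet fraction 2p - 1 = 0.2 each round, so wealth is multiplied by 1.2
# on a win and 0.8 on a loss; the per-round expected growth factor is 1.04.
ev_kelly = bankroll * (p * 1.2 + (1 - p) * 0.8) ** rounds     # ≈ $3,220,637

print(f"P(bankrupt under all-in) = {1 - p_survive}")          # indistinguishable from 1
print(f"E[money], all-in: {ev_all_in:.3e}")
print(f"E[money], Kelly:  {ev_kelly:,.2f}")
```

The 1.42 × 10^25 figure is carried entirely by the single branch in which all 300 bets win.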
So, by optimal, you mean “almost certainly bankrupt you.” Then yes.
My definition of optimal is very different.
> Obviously humans don’t have linear utility functions
I don’t think that’s the only reason—if I value something linearly, I still don’t want to play a game that almost certainly bankrupts me.
> Obviously humans don’t have linear utility functions, but my point is that the Kelly criterion still isn’t the right answer when you make the assumptions more realistic.
I mean, that’s not obvious—the Kelly criterion gives you, in the example with the game, E(money) = $240, compared to $246.61 with the optimal strategy. That’s really close.
> I don’t think that’s the only reason—if I value something linearly, I still don’t want to play a game that almost certainly bankrupts me.
I still think that’s because you intuitively know that bankruptcy is worse-than-linearly bad for you. If your utility function were truly linear then it’s true by definition that you would trade an arbitrary chance of going bankrupt for a tiny chance of a sufficiently large reward.
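To make that concrete, here is the same calculation redone under a linear utility and under the log utility that Kelly betting maximizes. This is an illustrative sketch only, not a claim about anyone’s actual utility function:

```python
import math

# Same assumed game as above: $25 bankroll, 60% win chance, even-money bets, 300 rounds.
p, rounds, bankroll = 0.6, 300, 25.0

# Linear utility u(W) = W: the all-in gamble wins, as in the numbers above.
eu_all_in_linear = bankroll * (2 * p) ** rounds                     # ≈ 1.42e25
eu_kelly_linear = bankroll * (p * 1.2 + (1 - p) * 0.8) ** rounds    # ≈ 3.22e6

# Log utility u(W) = ln(W), the utility that Kelly betting maximizes:
# the all-in strategy ends bankrupt with probability 1 - 0.6^300, and ln(0) = -inf,
# so its expected utility is -infinity no matter how large the upside branch is.
eu_all_in_log = float("-inf")
# Kelly adds 0.6*ln(1.2) + 0.4*ln(0.8) ≈ 0.0201 to E[ln W] each round.
eu_kelly_log = math.log(bankroll) + rounds * (p * math.log(1.2) + (1 - p) * math.log(0.8))

print(f"linear utility:  all-in {eu_all_in_linear:.2e}  vs  Kelly {eu_kelly_linear:.2e}")
print(f"log utility:     all-in {eu_all_in_log}  vs  Kelly {eu_kelly_log:.2f}")
```

Other concave utilities can come out either way, depending on how much weight they give the enormous but vanishingly unlikely upside.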
> I mean, that’s not obvious—the Kelly criterion gives you, in the example with the game, E(money) = $240, compared to $246.61 with the optimal strategy. That’s really close.
Yes, but the game is very easy, so a lot of different strategies get you close to the cap.
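As a rough check on the “close to the cap” point, here is a Monte Carlo sketch of Kelly-sized betting in the capped game. The $25 / 60% / 300-round setup is taken from the numbers above; the $250 cap and the stop-once-you-reach-it rule are assumptions, since the thread only says there is a cap. The exact $240 and $246.61 figures quoted earlier presumably come from an exact calculation rather than a simulation.

```python
import random

# Monte Carlo sketch: Kelly-sized bets in the capped game.
# Assumptions: same $25 bankroll, 60% win chance, even-money bets and 300 rounds
# as above; payout capped at $250 (the cap's value is an assumption); the player
# stops betting once the cap is reached.
P_WIN, ROUNDS, START, CAP = 0.6, 300, 25.0, 250.0
KELLY_FRACTION = 2 * P_WIN - 1  # 0.2 for an even-money bet

def final_payout(rng: random.Random) -> float:
    wealth = START
    for _ in range(ROUNDS):
        if wealth >= CAP:          # nothing above the cap is paid out, so stop
            return CAP
        bet = KELLY_FRACTION * wealth
        wealth += bet if rng.random() < P_WIN else -bet
    return min(wealth, CAP)

rng = random.Random(0)
trials = 50_000
estimate = sum(final_payout(rng) for _ in range(trials)) / trials
# Rough estimate only; the $240 figure above is presumably from an exact calculation.
print(f"Estimated E[payout] under Kelly sizing: ${estimate:.2f}")
```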
> Yes, but the game is very easy, so a lot of different strategies get you close to the cap.
I’ve been thinking about it, and I’m not sure this is the case in the sense you mean it. Expected money maximization doesn’t reflect human values at all, while the Kelly criterion mostly does, so making our assumptions more realistic should move us away from expected money maximization and towards the Kelly criterion, not the other way.