Yes, if the game has many opportunities for betting, you should focus on the instrumental use of the money, which comes from compounding; the instrumental value is therefore geometric, so you should use the Kelly criterion. In particular, if your edge is small (but can be repeated), the only way you can make a lot of money is by compounding, so you should use the Kelly criterion.
I think this comment is incorrect (in the stated generality). Here is a simple counterexample. Suppose you have a starting endowment of $1, and that you can bet any amount at 0.50001 probability of doubling your bet and 0.49999 probability of losing everything you bet. You can bet whatever amount of your money you want a total of n times. (If you lost everything in some round, we can think of this as you still being allowed to bet 0 in remaining future rounds.) The strategy that maximizes expected linear utility is the one where you bet everything every time.
It depends on n. If n is small, such as n=1, then you should bet a lot. In the limit of n large, you should use the Kelly criterion. The crossover is about n=10^5, which is why I said that it depends on having many opportunities.
You can prove e.g. by (backwards) induction that you should bet everything every time. With the odds being p>0.5 and 1-p, if the expectation of whatever strategy you are using after n-1 steps is E, then the maximal expectation over all things you could do on the n’th step is 2pE (you can see this by writing the expectation as a conditional sum over the outcomes after n-1 steps), which corresponds uniquely to the strategy where you bet everything in any situation on the n’th step. It then follows that the best you can do on the (n-1)th step is also to maximize the expectation after it, and the same argument gives that you should bet everything, and so on.
(Where did you get n=10^5 from? If it came from some computer computation, then I would wager that there were some overflow/numerical issues.)
Sorry, I’m confused.
I got 10^5 from 1/(p-1/2).
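Spelled out with the win probability from the counterexample above:

$$\frac{1}{p - \tfrac{1}{2}} = \frac{1}{0.50001 - 0.5} = \frac{1}{0.00001} = 10^5.$$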
If you have money x after n-1 steps, then betting a fraction f on the n’th step gives you expected money (1-f)x + 2pfx. Given p>0.5, this is maximized at f=1, i.e. betting everything, which gives the expectation 2px. So conditional on having money x after n-1 steps, to maximize expectation after n steps, you should bet everything.

Let X_i be the random variable that is the amount of money you have after i steps given your betting strategy. We have

$$E[X_n] = \sum_x P(X_{n-1}=x)\, E[X_n \mid X_{n-1}=x]$$

(one could also write down a continuous version of the same conditioning, but it is a bit easier to read if we assume that the set of possible amounts of money after n-1 steps is discrete, which is what I did here). From this formula, it follows that for any given strategy up to step n-1, hence given values for P(X_{n-1}=x), the thing to do on step n that maximizes E[X_n] is the same as the thing to do that maximizes E[X_n | X_{n-1}=x] for each x. So to maximize E[X_n], you should bet everything on the n’th step. If you bet everything, then the above formula gives

$$E[X_n] = \sum_x P(X_{n-1}=x)\, E[X_n \mid X_{n-1}=x] = \sum_x P(X_{n-1}=x)\, 2px = 2p \sum_x P(X_{n-1}=x)\, x = 2p\, E[X_{n-1}].$$
To recap what we showed so far: we know that given any strategy for the first n-1 steps, the best thing to do on the last step gives E[X_n]=2pE[X_{n-1}]. It follows that the strategy with maximal E[X_n] is the one with maximal 2pE[X_{n-1}], or equivalently the one with maximal E[X_{n-1}].
Now repeat the same argument for step n-1 to conclude that one should bet everything on step n-1 to maximize the expectation after it, and so on.
Or maybe to state a few things a bit more clearly: we first showed that E[X_n | X_{n-1}=x] ≤ 2px, with equality iff we bet everything on step n. Using this, note that

$$E[X_n] = \sum_x P(X_{n-1}=x)\, E[X_n \mid X_{n-1}=x] \le \sum_x P(X_{n-1}=x)\, 2px = 2p \sum_x P(X_{n-1}=x)\, x = 2p\, E[X_{n-1}],$$

with equality iff we bet everything on step n conditional on any value of X_{n-1}. So regardless of what you do for the first n-1 steps, what you should do on step n is to bet everything, and this gives you the expectation E[X_n] = 2p E[X_{n-1}]. Then finish as before.
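Chaining this inequality over all n steps gives a closed form for the maximal expectation; the worked numbers below, plugging in the counterexample's values, are my own illustration rather than something from the thread:

$$\max_{\text{strategies}} E[X_n] = (2p)^n X_0, \qquad \text{attained only by betting everything at every step.}$$

$$\text{For } X_0 = \$1,\ p = 0.50001,\ n = 10^5: \quad (2p)^n = 1.00002^{10^5} \approx e^{2} \approx 7.39.$$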
Can you give a concrete example of such a game?
You start with 10 bucks, I start with 10 bucks. You wager any amount up to a hundred times, each time doubling it 60% of the time and losing it 40% of the time, until one of us is bankrupt or you stop. If you wager it all, I have a 40% chance to win. If you wager one buck at a time, you win almost certainly.
But that isn’t the Kelly criterion! Kelly would say I should open by betting two bucks.
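For reference, here is the even-odds Kelly calculation behind that number (the formula f* = p − q for a bet that pays 1:1 is standard; the worked instance is mine):

$$f^* = p - q = 0.6 - 0.4 = 0.2, \qquad 0.2 \times \$10 = \$2.$$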
In games of that form, it seems like you should be more and more careful as the number of bets gets larger. The optimal strategy doesn’t tend to Kelly in the limit.
EDIT: In fact my best opening bet is $0.64, leading to expected winnings of $19.561.
EDIT2: I reran my program with higher precision, and got the answer $0.58 instead. This concerned me so I reran again with infinite precision (rational numbers) and got that the best bet is $0.21. The expected utilities were very similar in each case, which explains the precision problems.
EDIT3: If you always use Kelly, the expected utility is only $18.866.
Does your program assume that the Kelly bet stays a fixed size, rather than changing?
Here’s a program you can paste in your browser that finds the expected value from following Kelly in Gurkenglas’ game (it finds EV to be 20)
https://pastebin.com/iTDK7jX6
(You can also fiddle with the first argument to experiment to see some of the effects when 4 doesn’t hold.)

I believe you missed one of the rules of Gurkenglas’ game, which was that there are at most 100 rounds. (Although it’s possible I misunderstood what they were trying to say.)
If you assume that play continues until one of the players is bankrupt, then in fact there are lots of winning strategies: in particular, betting any constant proportion less than 38.9%. The Kelly criterion isn’t unique among them.
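A quick way to check that threshold (a minimal sketch in Python; the function name and bisection bounds are my choices): a constant-fraction strategy wins in the long run exactly when its expected change in log-wealth per round is positive.

```python
import math

def log_growth(f, p=0.6):
    # Expected change in log-wealth per round when betting a constant
    # fraction f of your bankroll at even odds with win probability p.
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

# log_growth is maximized at the Kelly fraction f = 0.2 and decreases
# beyond it, so bisect on [0.2, 1) for the fraction where it hits zero.
lo, hi = 0.2, 0.999
for _ in range(60):
    mid = (lo + hi) / 2
    if log_growth(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 4))  # ~0.389: any constant fraction below this wins long-run
```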
My program doesn’t assume anything about the strategy. It just works backwards from the last round and calculates the optimal bet and expected value for each possible amount of money you could have, on the basis of the expected values in the next round which it has already calculated. (Assuming each bet is a whole number of cents.)
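For anyone wanting to reproduce this, here is a minimal sketch of that kind of backward recursion (not the commenter's actual program; the $20 total pool, the grid granularity, and all names are my assumptions, and dimes are used instead of cents so it runs quickly):

```python
# Backward induction over (money you hold) x (rounds remaining) for
# Gurkenglas' game: $10 vs $10, each bet doubles with probability 0.6,
# at most 100 rounds, stop when either side is bankrupt.
STEP = 10              # grid size in cents; set to 1 for whole-cent bets (slower)
POOL = 2000 // STEP    # total money on the table ($20.00) in grid units
START = 1000 // STEP   # your starting bankroll ($10.00) in grid units
P_WIN = 0.6
ROUNDS = 100

# value[x] = expected final bankroll when holding x grid units with r rounds left.
value = [float(x) for x in range(POOL + 1)]  # r = 0: you keep whatever you have

for _ in range(ROUNDS):
    new_value = []
    for x in range(POOL + 1):
        best = value[x]  # betting 0 (i.e. stopping) is always allowed
        # Assumes a bet can't exceed your money or what the opponent has left.
        for bet in range(1, min(x, POOL - x) + 1):
            ev = P_WIN * value[x + bet] + (1 - P_WIN) * value[x - bet]
            best = max(best, ev)
        new_value.append(best)
    value = new_value

print(f"expected value of optimal play: ${value[START] * STEP / 100:.3f}")
```

On a coarse grid this should only land in the same ballpark as the figures quoted above; reproducing the cent-level optimal opening bets would need the whole-cent grid (and, per the edits above, careful precision).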
I did indeed! So I guess this game fails (5) out of Zvi’s criteria.