I think it’s fine to bite the bullet that an unbounded linear utility function has the property of preferring an infinitesimal chance of a ludicrous payout. Jane can be sad at a particular outcome without regretting her decisions. Kelly optimizes Logan’s utility function, and NOT Jane’s.
I think you need to be a bit more formal about your termination conditions—infinity doesn’t exist, and rounding things off leads to incorrect inferences. An example is when you say
According to my usage of the term, one bets Kelly when one wants to “rank-optimize” one’s wealth, i.e. to become richer with probability 1 than anyone who doesn’t bet Kelly, over a long enough time period.
this is simply incorrect, and contradicts your above analysis of Jane’s preferences. In fact, Kelly ends up richer than “bet it all, every time” only with a probability equal to the “bet it all” strategy’s likelihood of ruin; otherwise Kelly is poorer. That probability is never 1, though it can get arbitrarily close. But the payout in the (tiny-probability) branch of continued wins is so much larger than Kelly’s outcome there that the mean over ALL outcomes still favors the risk. Reasonable termination conditions are “until the player dies or goes broke”, “until the player meets a threshold or goes broke”, or “until the casino can’t cover the bet (or the player goes broke)”. If you’re feeling silly, “until the heat-death of the universe”, but it’s hard to really reason about such utility functions, and we probably don’t have enough time or compute capacity to handle the calculations cheaply.
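The arithmetic behind this is easy to make concrete. If each even-money bet is won with probability p, “bet it all” survives n rounds only with probability p^n, so Kelly ends up richer with probability 1 − p^n; yet the all-in strategy’s mean wealth grows like (2p)^n, which dwarfs Kelly’s mean. A minimal sketch, with p = 0.6 and n = 50 as assumed illustrative values:

```python
p = 0.6   # per-round win probability (an assumed value for illustration)
n = 50    # number of even-money rounds

# "Bet it all, every time" survives only by winning every round.
survive = p ** n
print(f"P(all-in survives {n} rounds): {survive:.3e}")
print(f"P(Kelly ends up richer):      {1 - survive:.10f}")

# Yet the mean still favors the all-in gamble: each round multiplies
# its expected wealth by 2p > 1 whenever p > 1/2.
f = 2 * p - 1                         # Kelly stake for an even-money bet
kelly_growth = p * (1 + f) + (1 - p) * (1 - f)
print(f"Mean wealth multiple, all-in: {(2 * p) ** n:.3e}")
print(f"Mean wealth multiple, Kelly:  {kelly_growth ** n:.3e}")
```

The all-in mean is astronomically larger, while the probability of realizing it is astronomically small—exactly the tension described above.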
I think a lot of the confusion comes from considering a utility function that “seems reasonable” in a fairly narrow range of situations, then extending it to unreasonable lengths and being surprised that it contradicts our intuitions. That fits with your observation that we never actually have this much knowledge about our own utility, or about the actual bets on offer. In the real world, it pays to be very suspicious of probability or payout calculations that are very large or very small—the unknowns and outlier events come to dominate those decisions.
Kelly optimizes Logan’s utility function, and NOT Jane’s.

By Jane, do you mean Linda?
Kelly doesn’t optimize either of those things. When offered the bets that ruin Linda, we see that she doesn’t optimize Linda’s utility function (she bets like Logan in that situation); and when offered the bets that ruin Logan, we see that she doesn’t optimize Logan’s utility function (this is explored in the final section). A large part of the point of the previous post is that Kelly betting isn’t about optimizing a utility function.
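One way to see why she “bets like Logan” on those bets: assuming (as the names suggest, an assumption on my part) that Logan maximizes expected log wealth, maximizing E[log wealth] over the stake fraction on an even-money bet recovers exactly the Kelly fraction f = 2p − 1. A quick numerical sketch:

```python
import math

# Hypothetical illustration: expected log growth when staking a fraction f
# of bankroll on an even-money bet won with probability p.
def expected_log_growth(f, p):
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

p = 0.6
# Grid search over stake fractions; the maximizer is the Kelly fraction.
best = max((i / 1000 for i in range(1000)),
           key=lambda f: expected_log_growth(f, p))
print(best)  # ≈ 0.2, i.e. 2p − 1: log-utility maximization coincides with Kelly here
```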
this is simply incorrect
I’m not sure what you think is incorrect. I assume you don’t mean I’m wrong about how I use the term. I guess you mean “no, the strategy that you describe as betting Kelly does not have that effect in this situation”? (And I assume by that strategy, you’re thinking of the fractional-betting thing, with unlimited subdivisions allowed?)
I also guess you misunderstand what I mean by rank-optimizing. I gave a technical definition in the linked post as
A strategy $\lambda$ is rank-optimal if for all strategies $\mu$,

$$\lim_{n\to\infty} P\left(V_n(\lambda) \ge V_n(\mu)\right) = 1.$$
(And we can also talk about a strategy being “equally rank-optimal” as or “more rank-optimal” than another, in the obvious ways. I’m pretty sure this will be a partial order in general, and I suspect a total order among strategy spaces we care about.)
And it seems clear to me that under this definition, fractional betting (with unlimited subdivisions) is indeed more rank-optimal than betting everything every time.
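This is also easy to check numerically. A Monte Carlo sketch (even-money bets with an assumed p = 0.6; the Kelly bettor stakes the fraction 2p − 1) estimates P(V_n(Kelly) ≥ V_n(all-in)) for growing n:

```python
import random

def run(n, p=0.6, trials=20000, seed=0):
    """Estimate P(V_n(Kelly) >= V_n(all-in)) after n even-money rounds.
    p = 0.6 and the payout structure are illustrative assumptions."""
    rng = random.Random(seed)
    f = 2 * p - 1                      # Kelly fraction for an even-money bet
    favourable = 0
    for _ in range(trials):
        kelly, all_in = 1.0, 1.0
        for _ in range(n):
            if rng.random() < p:
                kelly *= 1 + f
                all_in *= 2
            else:
                kelly *= 1 - f
                all_in = 0.0           # a single loss ruins the all-in bettor
        if kelly >= all_in:
            favourable += 1
    return favourable / trials

for n in (5, 20, 50):
    print(n, run(n))
```

The estimate climbs towards 1 as n grows, matching the definition: at every finite n it stays strictly below 1, since “bet it all” beats Kelly on the probability-p^n event that it never loses.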
Perhaps my non-technical definition made you think the technical definition was something else? Maybe “with probability tending to 1” would have been clearer.
Yes, my objection is solved with “probability tending to 1”. At any finite point, the probability is less than 1, and the magnitude of the win in those cases tends to infinity.