So I’ve read that in the past, and started to read it again but got distracted. Can I check what you’re trying to contribute with it?
E.g. do you think it disagrees with me about something? Clarifies something I’m confused about? Adds support to something I’m trying to say? Is mostly just tangential?
The post explains Kelly betting and why you might want to bet “as though” you had a logarithmic utility of money instead of a linear one when faced with the opportunity to bet a percentage of your bankroll over and over, even if you don’t actually have logarithmic utility in money. (Basically, what it comes down to is, do you really want your decisions to depend on the payoff you can get from events that have a literally zero probability of happening?)
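Concretely, the trade-off looks something like this. A minimal simulation sketch, with numbers I'm making up rather than taking from Sarah's post: an even-money bet won with probability p = 0.6, comparing the Kelly fraction (f* = 2p − 1) against staking the whole bankroll every time.

```python
# A minimal sketch with made-up numbers (p = 0.6, even-money bets);
# nothing here is taken from Sarah's post. For even-money bets the
# Kelly fraction is f* = 2p - 1, i.e. 20% of the bankroll per bet.
import random

random.seed(0)
p = 0.6
kelly_fraction = 2 * p - 1

def simulate(fraction, n_bets=100, n_runs=10_000):
    """Final bankrolls from staking a fixed fraction of wealth each round."""
    finals = []
    for _ in range(n_runs):
        wealth = 1.0
        for _ in range(n_bets):
            stake = wealth * fraction
            wealth += stake if random.random() < p else -stake
        finals.append(wealth)
    return finals

for fraction in (kelly_fraction, 1.0):
    finals = sorted(simulate(fraction))
    mean = sum(finals) / len(finals)
    median = finals[len(finals) // 2]
    print(f"fraction {fraction:.1f}: mean {mean:.3g}, median {median:.3g}")
```

The Kelly bettor's median bankroll grows; the all-in bettor's sample mean and median both come out at 0.0 almost surely, even though her theoretical expected wealth is an enormous (2p)^100. That gap, expectation living entirely in worlds you essentially never sample, is the parenthetical's zero-probability worry in numeric form.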
This sounds like the sort of thing I reply to in these two paragraphs?
I suppose one thing you could do here is pretend you can fit infinite rounds of the game into a finite time. Then Linda has a choice to make: she can either maximize expected wealth at t_n for all finite n, or she can maximize expected wealth at t_ω, the timestep immediately after all finite timesteps. We can wave our hands a lot and say that making her own bets would do the former and making Logan's bets would do the latter, though I don't endorse the way we're treating infinities here.
Even then, I think what we’re saying is that Linda is underspecified. Suppose she’s offered a loan, “I’ll give you £1 now and you give me £2 in a week”. Will she accept? I can imagine a Linda who’d accept and a Linda who’d reject, both of whom would still be expected-money maximizers, just taking the expectation at different times and/or expanding “money” to include debts. So you could imagine a Linda who makes short-term sacrifices in her expected-money in exchange for long-term gains, and (again, waving your hands harder than doctors recommend) you could imagine her taking Logan’s bets. But this is more about delayed gratification than about Logan’s utility function being better for Linda than her own, or anything like that.
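Waving hands in the same direction, here's a back-of-envelope version of the t_n versus t_ω choice. It assumes, as I read the setup, that Linda's own strategy is the all-in expectation-maximizing one and Logan's is Kelly, and it reuses the made-up p = 0.6 even-money bet from the sketch above:

```python
# Back-of-envelope numbers for the t_n vs t_omega distinction, with all
# the same caveats about treating infinities loosely. Assumes (my reading,
# not the post's wording) that Linda's own bets are all-in and Logan's
# are Kelly, on a made-up even-money bet won with probability p = 0.6.
import math

p = 0.6
f = 2 * p - 1  # Kelly fraction for an even-money bet

for n in (10, 100, 1000):
    # All-in: expected wealth (2p)^n blows up at every finite t_n, but the
    # probability that any wealth survives is p^n, which is heading to 0.
    ev_all_in = (2 * p) ** n
    p_solvent = p ** n
    # Kelly: the typical bankroll grows like exp(n * g), where g is the
    # expected log-growth per bet, so it is still growing "at t_omega".
    g = p * math.log(1 + f) + (1 - p) * math.log(1 - f)
    typical_kelly = math.exp(n * g)
    print(f"n={n}: all-in E[wealth]={ev_all_in:.2e}, P(solvent)={p_solvent:.2e}, "
          f"typical Kelly wealth={typical_kelly:.2e}")
```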
I might be wrong about how you think Sarah’s post relates to mine. But if you think it brings up something that contradicts my post, or that my post ought to respond to but doesn’t, or something like that, are you able to point at it more specifically?
The problem with Linda’s betting strategy is that as the number of bets approaches infinity, the worlds where she wins end up with probability zero.
Well, no. As the number of (possible) bets approaches infinity, the probability of winning APPROACHES zero, while the payout approaches infinity. You can’t round these things off arbitrarily, and you certainly can’t round them off in your evaluation and then laugh at the players for not rounding them off.
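To put numbers on "you can't round these off" (again my own made-up bet: double-or-nothing, won with probability p = 0.6, staking everything each time):

```python
# Made-up illustration of why you can't round the factors off separately:
# a double-or-nothing bet won with probability p = 0.6, staking everything.
# The win probability p^n APPROACHES zero and the payout 2^n approaches
# infinity, but their product (2p)^n, the expected value, diverges.
# Rounding the first factor to zero throws the whole product away.
p = 0.6
for n in (1, 10, 100):
    p_win_all = p ** n   # probability of winning all n bets
    payout = 2.0 ** n    # bankroll multiple in the surviving worlds
    print(f"n={n}: P(win all)={p_win_all:.2e}, payout={payout:.2e}, "
          f"EV={p_win_all * payout:.2e}")
```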
The problem with trying to apply any of this to the real world is that there’s a lot of uncertainty in the actual bets being made, which at the extremes completely overwhelms any calculations you want to do.
That’s very bad, but maybe not as bad as you think; after all, we can be faced with probability-0 events and still succeed.