Edge is p − q (your probability minus the odds-implied probability 1/r, in your notation).
I don’t think pro gamblers think in those terms. I think most pro gamblers fall into several camps:
1. Serious whales. They have more capital than the market can absorb, so bet sizing is not an issue: they bet as much as they can every time.
2. Fixed-stake bettors. They pick a number (1% and 2% are common) and bet that every time. (There are variants of this, e.g. staking to win a fixed X.)
3. (Fractional) Kelly bettors. I think this is closest to your example. A 1/2-Kelly bettor would see 51% vs 49% and bet 0.5 × (51 − 49)/(100 − 49) ≈ 0.5 × 4% = 2%.
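The camp-3 sizing can be sketched in a few lines of Python. The function names and the 1/2 fraction are illustrative, not from the comment; `fractional_kelly` implements the (p − implied)/(1 − implied) form used above, and `kelly_fraction` is the textbook (pb − q)/b form for comparison:

```python
def kelly_fraction(p, b):
    """Textbook Kelly fraction for win probability p and net odds b (win b per 1 staked)."""
    q = 1 - p
    return (p * b - q) / b

def fractional_kelly(p, implied_p, fraction=0.5):
    """Fractional Kelly as in the comment: fraction * (p - implied_p) / (1 - implied_p)."""
    return fraction * (p - implied_p) / (1 - implied_p)

# 51% true probability vs 49% odds-implied probability:
stake = fractional_kelly(0.51, 0.49)
print(round(stake, 4))  # about 0.0196, i.e. roughly 2% of bankroll
```

The two forms agree: at implied probability 0.49 the net odds are b = 0.51/0.49, and `kelly_fraction(0.51, b)` gives the same ≈3.9% full-Kelly stake that the comment halves to ≈2%.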
Edge is p − q (your probability minus the odds-implied probability 1/r, in your notation).
I just want to note for the record that this doesn’t agree with Dave Orr’s calculation, nor with the rumor that Kelly betting is “betting your edge”. So perhaps Dave Orr has some different formula in mind.
As for myself, I take this as evidence that “edge” really is an intuitive concept which people inconsistently put math to.
At this point I will admit that my gambling days were focused on poker, and Kelly isn’t very useful for that.
But here’s the formula as I understand it: EV/odds = edge, where odds is expressed as a multiple of one. So for the coinflip case we’re disagreeing about, EV is .02, odds are 1, so you bet .02.
If instead you had a coinflip with a fair coin where you were paid $2 on a win and lose $1 on a loss, your EV is .5/flip, odds are 2, so bet 25%.
Right, so this is basically the “net expected value divided by net win value” formula which I examine in the post, because “odds” is the same as b, net winnings per dollar invested. (Double-or-nothing means odds of 1, triple-or-nothing means odds of 2, etc.)
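For what it’s worth, the two formulas are literally the same expression: the EV per dollar staked is pb − (1 − p) = pb − q, and dividing by the odds b gives the usual Kelly fraction (pb − q)/b. A quick sketch checking both coinflip examples (function names are illustrative):

```python
def ev_per_dollar(p, b):
    """Expected net profit per dollar staked: win b with probability p, lose 1 otherwise."""
    return p * b - (1 - p)

def kelly(p, b):
    """Kelly fraction (p*b - q)/b, written as EV divided by odds."""
    return ev_per_dollar(p, b) / b

# 51/49 double-or-nothing: EV = 0.02, odds b = 1 -> bet 2%
print(round(kelly(0.51, 1), 4))  # 0.02
# Fair coin paying $2 on a win, losing $1 on a loss: EV = 0.5, odds b = 2 -> bet 25%
print(kelly(0.5, 2))  # 0.25
```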
At this point I will admit that my gambling days were focused on poker, and Kelly isn’t very useful for that.
Yeah? How do you decide how much you’re willing to bet on a particular hand? Kelly isn’t relevant at all?
It’s too cumbersome and only addresses part of the issue. Kelly more or less assumes that you make a bet, it gets resolved, now you can make the next bet. But in poker, with multiple streets, you have to think about a sequence of bets based on some distribution of opponent actions and new information.
Also, with Kelly you don’t usually have to think about how the size of your bet influences your likelihood of winning, but in poker the amount you bluff changes both the probability of the bluff succeeding (people call less when you bet more) and the amount you lose if you’re wrong. Or if you value bet (meaning you want to get called), then betting more means they call less often but you win more when they do call. Again, vanilla Kelly doesn’t really work.
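A toy numeric sketch of that bluff-sizing tradeoff, to make the point concrete. The opponent’s fold probabilities below are invented for illustration, not a real poker model; the EV now depends on bet size through the win probability itself, which is exactly what vanilla Kelly assumes away:

```python
# Toy bluff-sizing example (all numbers invented): pot of $1, choosing a bluff size.
# Bigger bets fold the opponent out more often, but cost more when called.
fold_prob = {0.5: 0.40, 1.0: 0.58, 2.0: 0.65}  # hypothetical opponent model

for bet, f in fold_prob.items():
    ev = f * 1.0 - (1 - f) * bet  # win the pot if they fold, lose the bet if called
    print(f"bet {bet}: EV = {ev:+.2f}")
# bet 0.5: EV = +0.10
# bet 1.0: EV = +0.16
# bet 2.0: EV = -0.05
```

Neither the smallest nor the largest bet is best here: the win probability and the stake move together, so the sizing problem can’t be handed to a formula that takes a fixed p and b.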
I imagine it could be extended, but instead people have built more specialized frameworks for thinking about it that combine game theory with various stats/probability tools like Kelly.
The Math of Poker, written by a couple of friends of mine, might be a fun read if you’re interested. It probably won’t help you to become a better poker player, but the math is good fun.
Thanks!