I love the spirit of this post, but all the focus on expected value raised some alarms in my head.
Maximizing the expected value in ordinary (financial) betting leads to bad decisions (St. Petersburg paradox), and it can do the same in other areas of life. I can see you know this intuitively, because you mentioned Pascal’s Mugging. Just letting you know that there’s math that accounts for this, too:
To avoid wasting life on Pascal’s Mugging (or going broke on bad bets), we maximize the expected logarithm of value (Kelly Criterion), because we get diminishing utility from higher amounts of the same thing.
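A quick simulation sketch of the difference, for anyone who wants to see it concretely (the 60%-win even-money bet is my own illustrative choice, not from the post): betting your whole bankroll maximises per-round E(X), but maximising E(log(X)) tells you to bet only the Kelly fraction p − q = 0.2.

```python
import random

random.seed(0)

def simulate(fraction, rounds=200, p_win=0.6, start=1.0):
    """Repeatedly bet `fraction` of current wealth on an even-money
    bet that wins with probability p_win."""
    wealth = start
    for _ in range(rounds):
        stake = wealth * fraction
        if random.random() < p_win:
            wealth += stake
        else:
            wealth -= stake
    return wealth

# Betting everything maximises per-round E(X) (1.2x per round), but a
# single loss wipes the bankroll out, which almost surely happens.
all_in = simulate(1.0)

# The Kelly fraction for this even-money bet is p - q = 0.6 - 0.4 = 0.2;
# it maximises E(log wealth), so the bankroll compounds instead.
kelly = simulate(0.2)
```

The all-in bettor ends at zero with overwhelming probability, while the Kelly bettor never goes broke and typically grows, despite having a lower single-round expected value.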
Thanks! Yeah, I definitely agree that literally maximising EV can be bad. The reason I emphasised it so heavily is to convey the key point: you’re trying to make decisions under uncertainty, uncertainty is an inherent fact of life, and, at least for me, thinking about EV leads to systematically better decisions. I’m sufficiently bad by default at accounting for uncertainty that a focus on EV pushes me in the right direction.
In practice, the decision procedure would be along the lines of “try estimating EV; if the answer is obviously, massively positive, then do it, otherwise think harder” (in which case maximising E(log(X)) and E(X) should give the same answer). And the post was similarly aimed at people with such a strong bias that thinking about EV is a nudge in the right direction.
Would you have preferred the post if framed around E(log(X))?
Technically yes, but I know it’d be harder to use as a mental model in everyday life. And anyway, I have the same initial bias as you.