I spent some time trying to fight these results, but have failed!
Specifically, my intuition said we should just be able to look at the flattened distributions-over-outcomes. Then obviously the rewriting makes no difference, and the question is whether we can still provide a reasonable decision criterion when the probabilities and utilities don’t line up exactly. To do so we need some defined order or limiting process for comparing these infinite lotteries.
My thought was to use something like “choose the lottery whose samples look better”. For instance, examine $\lim_{n\to\infty} P\left[\sum_{0\le i<n}(A_i - B_i) \le 0\right]$, where $A_i$ and $B_i$ are samples from two lotteries $A$ and $B$. This should prioritize more extreme low-probability events only as $n$ grows larger. This works for comparing things against a standard St Petersburg lottery.
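Here’s a rough Monte Carlo sketch of that criterion, with the standard St Petersburg lottery as $A$ and a sure payoff of 5 as $B$. The sampler, the payoff 5, and the function names are just illustrative choices, not anything from the setup above; the inner sum is the $\sum_{0\le i<n}(A_i-B_i)\le 0$ event, estimated by resampling.

```python
import random

def sample_st_petersburg():
    """One draw from the standard St Petersburg lottery: flip a fair coin
    until the first heads, on flip k, and receive 2**k (2 w.p. 1/2, 4 w.p. 1/4, ...)."""
    payoff = 2
    while random.random() < 0.5:
        payoff *= 2
    return payoff

def p_a_not_better(sample_a, sample_b, n, trials=2000):
    """Monte Carlo estimate of P[ sum_{0 <= i < n} (A_i - B_i) <= 0 ]."""
    hits = 0
    for _ in range(trials):
        total = sum(sample_a() - sample_b() for _ in range(n))
        if total <= 0:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    random.seed(0)
    # Compare the St Petersburg lottery A against a sure payoff B = 5.
    # The estimate should drift toward 0 as n grows, i.e. the criterion
    # eventually prefers A even though B wins most individual draws.
    for n in (1, 10, 100, 1000):
        p = p_a_not_better(sample_st_petersburg, lambda: 5, n)
        print(f"n = {n:4d}   P[sum(A_i - B_i) <= 0] ~ {p:.3f}")
```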
But the problem I’m facing is the lottery $\frac{1}{2}\cdot 1 - \frac{1}{4}\cdot 4 + \frac{1}{8}\cdot 16 - \frac{1}{16}\cdot 64 + \cdots$, i.e. payoff $(-4)^{k-1}$ with probability $2^{-k}$. If you try to compare it to any finite payoff like 500, then once the samples get big enough the partial sums swing wildly one way or the other, so my method can’t tell which we “prefer”. (In real life I’d take the $500, fwiw.)
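For concreteness, here is the same kind of estimate run against this lottery and the sure payoff 500, assuming my reading of the series above is right; again the sampler, function names, and the particular $n$ values are only illustrative.

```python
import random

def sample_alternating_lottery():
    """One draw from (my reading of) the lottery above: payoff (-4)**(k-1)
    with probability 2**(-k), i.e. +1 w.p. 1/2, -4 w.p. 1/4, +16 w.p. 1/8, ..."""
    k = 1
    while random.random() < 0.5:
        k += 1
    return (-4) ** (k - 1)

def p_not_better_than_500(n, trials=1000):
    """Monte Carlo estimate of P[ sum_{0 <= i < n} (A_i - 500) <= 0 ]."""
    hits = 0
    for _ in range(trials):
        if sum(sample_alternating_lottery() - 500 for _ in range(n)) <= 0:
            hits += 1
    return hits / trials

if __name__ == "__main__":
    random.seed(0)
    # Unlike the St Petersburg comparison, this estimate does not seem to head
    # for 0 or 1: the partial sums are swamped by rare, enormous payoffs whose
    # sign can go either way, so the estimate wanders instead of settling.
    for n in (10, 100, 1000, 5000):
        print(f"n = {n:5d}   P[sum(A_i - 500) <= 0] ~ {p_not_better_than_500(n):.3f}")
```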
My intuition complains that this lottery is rather unfair, but nonetheless it is sinking my brilliant idea! So my current guess is that I can handle things when probability-times-utility is eventually bounded. But I don’t know how to cope with utilities growing faster than probabilities shrink.