Replace expected utility by expected utility minus some multiple of the standard deviation, making that “some multiple” go to zero for oft-repeated situations.
The mugger won’t be able to stand against that, as the standard deviation of his setup is huge.
Then you would turn down free money. Suppose you try to maximize EU − k*SD.
I’ll pick p < 1⁄2 * min(1, k^2) and offer you a bet in which you receive 1 util with probability p, or 0 utils with probability (1-p). This bet has mean payout p and standard deviation sqrt[p(1-p)]. You have nothing to lose, but you would turn down this bet.
Proof:
By construction p < k^2/2, and since p < 1⁄2 we have (1-p) > 1⁄2, so k^2/2 < k^2(1-p); hence p < k^2(1-p)
Divide both sides by (1-p): p / (1-p) < k^2
Take the square root of both sides: sqrt[p / (1-p)] < k
Multiply both sides by sqrt[p(1-p)]: p < k*sqrt[p(1-p)]
Which is equivalent to: EU < k * SD
So EU − k*SD < 0
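The inequality can be sanity-checked numerically; here is a minimal sketch, with illustrative values of k and p (any p below the stated bound works):

```python
import math

def sd_adjusted_eu(p, k):
    """EU - k*SD for a bet paying 1 util with probability p, else 0.
    For a Bernoulli payout the mean is p and the SD is sqrt(p*(1-p))."""
    eu = p
    sd = math.sqrt(p * (1 - p))
    return eu - k * sd

k = 0.1
p = 0.5 * min(1, k**2) * 0.99  # any p strictly below the bound 1/2 * min(1, k^2)
assert sd_adjusted_eu(p, k) < 0  # the "free money" bet is rejected
```

With k = 0.1 the bound is p < 0.005, and the adjusted utility is indeed negative there, while for p comfortably above the bound it turns positive again.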
If k is tiny, this is only a minute chance of free money. I agree that it seems absurd to turn down that deal, but if the only cost of solving Pascal’s mugger is that we avoid advantageous lotteries with such minute payoffs, it seems a cost worth paying.
But recall—k is not a constant, it is a function of how often the “situation” is repeated. In this context, “repeated situation” means another lottery with larger standard deviation. I’d guess I’ve faced over a million implicit lotteries with SD higher than k = 0.1 in my life so far.
We can even get more subtle about the counting. For any SD we have faced that is n times greater than the SD of this lottery, we add n to 1/k.
In that setup, it may be impossible for you to actually propose that free money deal to me (I’ll have to check the maths—it certainly is impossible if we add n^3 to 1/k). Basically, the problem is that k depends on the SD, and the SD depends on k. As you diminish the SD to catch up with k, you further decrease k, and hence p, and hence the SD, and hence k, and so on.
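That feedback loop can be checked numerically. Below is a toy sketch: the history of past SDs is invented, and the counting rule is just one possible reading of “add n (or n^3) to 1/k”; under these assumptions, no Bernoulli bet gets a negative adjusted EU, so the free-money deal can never be turned down.

```python
import math

# Hypothetical history of lotteries already faced (illustrative SDs only).
PAST_SDS = [1.0, 5.0, 10.0]

def k_for(sd, power=3):
    """One reading of the counting rule: for each past lottery whose SD is
    n times this lottery's SD, add n**power to 1/k (power = 1 or 3)."""
    inv_k = sum((s / sd) ** power for s in PAST_SDS if s > sd)
    return 1.0 / inv_k if inv_k > 0 else float("inf")

def adjusted_eu(p, power=3):
    """EU - k*SD, where k itself depends on this lottery's own SD."""
    sd = math.sqrt(p * (1 - p))
    return p - k_for(sd, power) * sd

# With this history, no choice of p makes the adjusted EU negative:
assert all(adjusted_eu(p) > 0 for p in (1e-9, 1e-4, 0.01, 0.4))
```

The key point the sketch illustrates: as the mugger shrinks p to slip under the bound, the bet’s SD shrinks too, which inflates 1/k and shrinks k even faster, so the bound recedes ahead of him.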
Interesting example, though; I’ll try to actually formalise an example of a sensible “SD-adjusted EU” so we can have proper debates about it.
That seems pretty arbitrary. You can make the mugging go away by simply penalizing his promise of n utils with a probability of 1/n (or less); but just making him go away is not a justification for such a procedure—what if you live in a universe where an eccentric god will give you that many utilons if you win his cosmic lottery?