What’s the rational reason not to be vulnerable to Pascal’s Mugging?
Roughly the same reason to one-box on Newcomb’s Problem: rationalists win.
I thought the whole problem with Pascal’s Mugging is that being mugged has a higher expected value—and so those who get mugged “win” more. Obviously we’re not precise enough to be vulnerable to it, but the hypothetical super-AI could be.
The reason Pascal’s Mugging is a challenge is that expected utility calculations say to get mugged, but really strong intuitions say not to.
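To make that tension concrete, here is a minimal expected-value sketch with illustrative numbers of my own (not taken from the thread): even a vanishingly small probability that the mugger is honest can be outweighed by a sufficiently enormous promised payoff.

\[
\text{EV(pay)} = p \cdot U - c \;>\; 0 = \text{EV(refuse)} \quad\text{whenever}\quad U > \frac{c}{p},
\]

where \(p\) is the probability the mugger is telling the truth, \(U\) is the promised payoff in utility, and \(c\) is the cost of paying. So with, say, \(p = 10^{-100}\) and \(c\) equal to five dollars' worth of utility, any promised \(U\) above \(5 \times 10^{100}\) makes paying the higher-expected-value choice, which is exactly the conclusion the strong intuition rebels against.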