Are you vulnerable to Pascal’s Mugging?
What’s the rational reason not to be vulnerable to Pascal’s Mugging? Please correct me if I am wrong, but it seems that Eliezer simply chooses to believe, i.e. trusts his intuition, that it would be wrong to give in to the demands of such a mugger. So if calcsam says that he is vulnerable to Pascal’s Mugging, does it make him more or less rational not to trust his intuition in this case?
Here is the technical reason:
If you use a Solomonoff prior, nearly any utility function will not have a well-defined expected value, i.e., trying to calculate it gives ∞ − ∞.
Put another way, trying to take all possible versions of Pascal’s Mugging into account makes the expected utility calculation mathematically incoherent.
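To make the divergence concrete, here is a toy numerical sketch. The prior and the promised utilities below are stand-ins I made up, not real Kolmogorov complexities: the point is just that if prior weight shrinks only with description length while the promised payoff grows exponentially, the positive and negative contributions to expected utility both blow up.

```python
# Toy illustration (made-up stand-ins, not a rigorous construction): the prior
# weight of "the mugger controls utility 3^n" shrinks roughly with the
# description length of n, while the promised utility grows exponentially in
# n. Symmetric hypotheses about huge losses diverge the same way, so the
# "expected utility" comes out as infinity minus infinity.

import math

def prior_weight(n, c=10):
    # stand-in for 2^-K(n): penalise n only by roughly its description length
    return 2.0 ** -(c + math.log2(n))

def promised_utility(n):
    # stand-in for the mugger's escalating promises
    return 3.0 ** n

gains, losses = 0.0, 0.0
for n in range(1, 200):
    term = prior_weight(n) * promised_utility(n)
    gains += term    # hypotheses promising huge benefits
    losses += term   # mirror-image hypotheses threatening huge harms
    if n % 50 == 0:
        print(f"n={n}: gains={gains:.3e}, losses={losses:.3e}")

# Both partial sums grow without bound, so gains - losses is undefined.
```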
This article has the basics.
It basically consists of calling BS on the promised high utility—under most circumstances.
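To sketch what “calling BS” can look like in numbers (this is just the general idea; I’m not claiming it is exactly what the article proposes): discount the probability you assign to the claim roughly in proportion to the size of the promised payoff, so that bigger promises stop automatically winning the expected value calculation.

```python
# Sketch of a "call BS" discount (illustrative only, not necessarily the
# article's exact rule): the credence given to the mugger's claim is scaled
# down in proportion to how much the claim promises.

def discounted_probability(raw_probability, promised_utility):
    # bigger promises get proportionally less credence
    return raw_probability / promised_utility

def expected_gain_of_paying(cost, raw_probability, promised_utility):
    p = discounted_probability(raw_probability, promised_utility)
    return p * promised_utility - cost

# No matter how large the promise, the expected gain stays around
# raw_probability - cost, so a $5 demand never looks like a good deal.
print(expected_gain_of_paying(cost=5, raw_probability=1e-3, promised_utility=3.0 ** 100))
print(expected_gain_of_paying(cost=5, raw_probability=1e-3, promised_utility=3.0 ** 500))
```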
Roughly the same reason to one-box on Newcomb’s Problem—rationalists win.
I ask because I hypothesize that a rational theist/religious person almost definitely has to be vulnerable to Pascal’s Mugging.
I don’t see why they’d be any more vulnerable than a rationalist atheist.
Keep in mind we don’t even know how to describe a rational agent that’s not vulnerable to Pascal’s mugging.
The way we currently get around this problem is by having a rule that temporarily suspends our decision theory when we pattern-match the situation as resembling Pascal’s Mugging.
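Very roughly, the rule looks something like this (the thresholds and the pattern test are placeholders I made up, not a worked-out theory):

```python
# Rough sketch of the "suspend the decision theory" rule (placeholder
# thresholds, not a worked-out theory): if an offer promises astronomical
# stakes on negligible evidence, refuse before running the expected-utility
# calculation at all.

def looks_like_pascals_mugging(claimed_utility, evidence_strength,
                               utility_threshold=1e12, evidence_threshold=1e-6):
    return claimed_utility >= utility_threshold and evidence_strength <= evidence_threshold

def decide(cost, probability, claimed_utility, evidence_strength):
    if looks_like_pascals_mugging(claimed_utility, evidence_strength):
        return "refuse"  # override: the usual calculation never runs
    return "pay" if probability * claimed_utility - cost > 0 else "refuse"

# The mugger's offer: an enormous payoff for $5, backed by essentially nothing.
print(decide(cost=5, probability=1e-30, claimed_utility=3.0 ** 100, evidence_strength=1e-40))
# prints "refuse", even though 1e-30 * 3**100 would dwarf the $5 cost
```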
A weird conclusion. I’d think that most theists would be likely to believe that such a huge disutility couldn’t be allowed (by God) to exist; at least not on the basis of some superdimensional prankster asking you for 5 dollars.
I thought the whole problem with Pascal’s Mugging is that being mugged has a higher expected value—and so those who get mugged “win” more. Obviously we’re not precise enough to be vulnerable to it, but the hypothetical super-AI could be.
The reason Pascal’s Mugging is a challenge is that expected utility calculations say to get mugged, but really strong intuitions say not to.
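To put numbers on that tension (illustrative figures only): even a tiny credence in the mugger’s story, multiplied by a large enough promised payoff, swamps the five dollars, so the naive expected value calculation says to pay.

```python
# Illustrative numbers only: even with an extremely small probability that the
# mugger is telling the truth, the promised payoff makes the naive expected
# value of paying come out hugely positive.

probability_claim_is_true = 1e-20       # a deliberately tiny credence
promised_utility = 3.0 ** 100           # stand-in for the mugger's promise
cost_of_paying = 5                      # the five dollars demanded

print(probability_claim_is_true * promised_utility - cost_of_paying)
# ~5e27, so the naive calculation says getting mugged "wins"
```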