I only found out about the formalized version of that dilemma around a week ago. As far as I can tell, it has not been shown that giving in to a Pascal's mugging scenario would be irrational; it is merely our intuition telling us that something is wrong with it. I am currently far too uneducated to discuss this in detail. What worries me is that basically all probability/utility calculations could be put into the same category (e.g., working to mitigate low-probability existential risks). Where do you draw the line? You can be your own mugger if you assign enough expected utility to justify taking extreme risks.
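As a rough illustration of that "your own mugger" worry, here is a minimal sketch with toy numbers (the probabilities and payoffs are assumptions of mine, chosen only for scale, not taken from any real analysis): multiply a vanishingly small probability by a sufficiently astronomical payoff and the expected value swamps any realistic cost.

    # Toy numbers, assumed for scale only.
    p_success = 1e-15        # assumed probability the extreme action works
    payoff = 1e52            # assumed astronomical payoff if it does (future lives)
    cost = 1e4               # assumed cost of taking the extreme action

    expected_gain = p_success * payoff   # 1e37, dwarfs the cost
    print(expected_gain > cost)          # True: the calculation "justifies" the risk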
What worries me is that basically all probability/utility calculations could be put into the same category (e.g., working to mitigate low-probability existential risks). Where do you draw the line?
There’s a formalization I gave earlier that distinguishes Pascal’s Mugging from problems that merely involve big numbers. It’s not enough to have a really big utility; a Pascal’s Mugging arises when another agent provides a statement such that just naming a bigger number (without providing additional evidence) increases your estimate of the expected utility of some action, without bound.
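To make that distinction concrete, here is a minimal sketch (my own toy model, not the actual formalization): assume the listener's prior penalizes a claim by a factor of two per character of its description, a crude stand-in for a complexity-based prior. Each extra digit the mugger speaks multiplies the promised payoff by roughly ten but only halves its probability, so the expected utility of paying grows without bound.

    from decimal import Decimal

    def claim_probability(claim: str) -> Decimal:
        """Toy complexity prior (an assumption of mine): each character
        of the claim halves its probability."""
        return Decimal(2) ** -len(claim)

    def expected_utility(payoff_digits: int) -> Decimal:
        """Expected utility of paying a mugger who promises a payoff of
        10**payoff_digits utilons, at a fixed cost of 5 utilons."""
        claim = "pay me and I grant you " + "9" * payoff_digits + " utilons"
        payoff = Decimal(10) ** payoff_digits
        return claim_probability(claim) * payoff - Decimal(5)

    # The payoff grows ~10x per extra spoken digit while the probability
    # only halves, so expected utility grows without bound:
    for digits in (10, 50, 100, 500):
        print(digits, expected_utility(digits))

The toy prior is doing all the work here; the point is only that any probability penalty scaling with the length of the statement loses to a payoff scaling with its numeric value, which is exactly the "just saying a bigger number" failure mode.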
This question has resurfaced enough times that I’m starting to think I ought to expand that into an article.