My own response is that no Pascal’s mugging is worth worrying about.
I’m curious why you only take into consideration scenarios that someone informs you of. That is, suppose a fourth person sits in their control center and decides that every time MichealOS refuses to give money to a Pascal’s Mugger, they will simulate m^^^m people and give them fantastically happy eternal lives—but they don’t inform you of that decision.
The probability of this is vanishingly small, of course, but it’s only marginally lower than the probability of your other proposed muggings. So presumably you have to take it into account along with everything else, right?
That’s a good point. Let me see if I understand the conclusion correctly:
I should consider that for any Pascal’s Mugging there is an opposing Pascal’s Anti-Mugging, and it seems reasonable that I have no reason to consider an Unknown Anti-Mugging more likely than an Unknown Mugging before someone tells me which is occurring.
Once the mugger asserts that there is a mugging, I can ask, “What evidence can you show me that gives you reason to believe the mugging scenario is more likely than the anti-mugging scenario?” If this is a fake mugging (which seems likely), he won’t have any evidence to show me, which means there is no reason to adjust the priors between the mugging and the anti-mugging, and I can continue not worrying about the mugging.
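To check that the arithmetic works the way I think it does, here is a minimal Python sketch, with every number invented purely for illustration, of why the two scenarios cancel when the mugger offers no distinguishing evidence:

```python
# Toy model of the mugging/anti-mugging symmetry.
# Every number here is an invented placeholder, not a claim about real priors.

U = 10.0 ** 100         # utility at stake in either scenario
p_mugging = 1e-50       # prior credence that the mugging scenario is real
p_anti_mugging = 1e-50  # prior credence in the unknown anti-mugging

# A bare assertion with no supporting evidence has a likelihood ratio of 1,
# so it leaves both priors where they were. The expected-utility terms for
# paying and refusing then cancel exactly:
ev_pay = p_mugging * U          # expected gain from paying, if the mugging is real
ev_refuse = p_anti_mugging * U  # expected gain from refusing, if the anti-mugging is real

print(ev_pay - ev_refuse)  # 0.0 -- no net reason to hand over the money
```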
If I understood you correctly, that sounds like a pretty good way of thinking about it, and one I hadn’t considered. If it sounds like I haven’t gotten it, please explain in more detail.
Either way, thank you for the explanation!
So, this is correct enough, but I would recommend generalizing the principle.
The (nominally) interesting thing about Pascal’s Mugging scenarios (and about the original Pascal’s Wager, which inspired them) is that we can posit hypothetical scenarios involving utility shifts so vast that, even if the scenarios are vanishingly unlikely, the product of the probability of the scenario and the magnitude of the utility shift should it come to pass is still substantial. This allows a decision system that operates on the expected value of a scenario (that is, the value of the scenario times its likelihood) to be manipulated by presenting it with carefully tailored scenarios of this sort (e.g., Pascal’s mugging).
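As a toy illustration of that manipulation (all quantities invented for the example): a naive expected-value maximizer just compares probability times utility, so the mugger only has to name stakes large enough to swamp any fixed skepticism:

```python
# A naive expected-utility comparison, with invented numbers for illustration.
# The agent's credence is fixed; the mugger is free to name ever-larger stakes.

p_story_true = 1e-30  # agent's (tiny) credence in the mugger's claim
cost_of_paying = 5    # utility lost by handing over the money

for claimed_utility in [1e10, 1e40, 1e100]:
    ev_of_paying = p_story_true * claimed_utility - cost_of_paying
    print(claimed_utility, "pay" if ev_of_paying > 0 else "refuse")
# 1e10 -> refuse, but once the claimed stakes are large enough, "pay" wins,
# no matter how small the fixed credence p_story_true is.
```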
It’s conceivable that a well-calibrated decision system would not be subject to such manipulation, because it would assign each scenario a probability that reflected such things… e.g., it would estimate the likelihood of there actually existing an Omega capable of creating 2N units of disutility as no more than .5 the likelihood of an Omega capable of creating only N units.
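Here is a quick sketch of why that calibration would defuse the threat, under the stated assumption that each doubling of the claimed stakes at least halves the credence (starting values invented):

```python
# If P(Omega can create 2N units) <= 0.5 * P(Omega can create N units),
# credence falls at least as fast as the stakes grow, so the expected-value
# contribution of each doubling never increases.

p, stakes = 1e-6, 1.0  # invented starting credence and stakes
for _ in range(8):
    print(f"stakes={stakes:g}  credence<={p:g}  p*stakes<={p * stakes:g}")
    stakes *= 2  # the mugger doubles the claimed stakes...
    p *= 0.5     # ...and a well-calibrated credence at least halves
# p*stakes stays capped at 1e-06: naming bigger numbers buys the mugger nothing
```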
But I’ve never met any decision system that well calibrated. So, as bounded systems running on inadequate, corrupted hardware, we have to come up with other tactics that keep us from driving off cliffs.
In general, one such tactic is to maintain a broader perspective than just the specific problem I’ve been invited to think about.
So when the Mugger asserts that there is a mugging, I can ask “Why should I care? What other things do I have roughly the same reason to care about, and why is my attention being directed to this particular choice within that set?”
The same thing goes when Pascal himself argues that I ought to worship the Christian God, for example, because no matter how unlikely I consider His existence, the sheer magnitude of the stakes (Heaven and Hell) dwarfs that unlikelihood. If I find that compelling, I should find a vast number of competing Gods’ claims equally compelling.
The same thing goes (on a smaller scale) when someone tries to sell me insurance against some specific bad thing happening.