I have an idea for how this problem could be approached:
Any sufficiently powerful being with any arbitrary utility function may or may not exist. It is perfectly possible that our reality is overseen by a god that rewards or punishes us for saying an even or odd number of words in our lives, or something equally arbitrary. The likelihood of each of these possible beings existing can be approximated using Solomonoff induction.
I assume that most simulations in which we could find ourselves in such a situation would be run by beings who either (1) have no interest in us at all (in which case the Mugger would most likely be a human), (2) care about something entirely unpredictable arising from their alien culture, or (3) are interacting with us purely to run social experiments. After all, they would have nothing in common with us, and we would have nothing they could possibly want. It would therefore, in any case, be virtually impossible to guess at their possible motivations, as it would be a poorly run social experiment if we could (assuming option three is true).
I would now argue that the existence of Pascal’s Mugger does not influence the probability of the existence of a being that would react negatively (for us) to our not giving the $5 any more than it influences the probability of the existence of a being with the opposite motivation. The Mugger is just as likely to punish you for being gullible as for refusing to give money to someone who threatens you.
Of course none of this takes into consideration how likely the various possible beings are to actually carry out their threat, but that doesn’t change anything important about this argument, I think.
In essence, my argument is that such powerful hypothetical beings can be ignored, because we have no real reason to assume they have one motivation rather than its opposite. Giving the Mugger $5 is just as likely to save us as shooting the Mugger in the face is. Incidentally, adopting the latter strategy would greatly reduce the chance that anybody actually tries this.
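The cancellation this argument relies on can be sketched numerically. The function below is purely illustrative (the probabilities and payoff are made-up placeholders, not anything the post specifies): if the credence that a powerful being punishes refusal exactly equals the credence that it punishes compliance, the astronomical payoff drops out of the expected value and only the certain $5 cost remains.

```python
# Illustrative sketch: symmetric credences about a powerful being's
# motivation cancel in expected value, leaving only the sure $5 cost.
# All numbers here are arbitrary placeholders.

def expected_value_of_paying(p_punish_refusal, p_punish_payment, payoff, cost=5.0):
    """Expected value of paying the Mugger, relative to refusing.

    payoff: magnitude of the hypothetical reward/punishment (in dollars).
    """
    # Paying avoids punishment-for-refusal but risks punishment-for-gullibility.
    return p_punish_refusal * payoff - p_punish_payment * payoff - cost

# With perfectly symmetric credences, the huge payoff cancels exactly:
p = 1e-20
print(expected_value_of_paying(p, p, payoff=1e30))  # -5.0: only the $5 matters
```

The point of the sketch is only that the two hypothetical terms are identical and subtract to zero, however large the payoff is made.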
I realize that this argument may seem flawed because it assumes that it really is impossible to guess at the being’s motivation, but I can’t see how such a guess could be made. It’s always possible that the being just wants you to think that it wants x, after all. Who can tell what might motivate a mind large enough to simulate our universe?
It would therefore, in any case, be virtually impossible to guess at their possible motivations
Virtually impossible is not the same as actually impossible. It’s not a question of being 50.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000001% sure. If you’re 50+1/sqrt(3^^^^3)% sure, that’s enough to dominate your decision. You can’t be that unsure.
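The reply's arithmetic can be sketched with small stand-in numbers (3^^^^3 itself is far too large to represent): with a payoff of size N, a credence of only 1/sqrt(N) above 50% already yields an expected advantage of about 2·sqrt(N), which grows without bound as N does. This is an illustration of the reply's point, not anything computed in the original exchange.

```python
# Illustrative sketch: a tiny tilt away from 50/50, of size 1/sqrt(n),
# produces an expected advantage of 2*sqrt(n) when the payoff is n.
# Small n stands in for the unrepresentable 3^^^^3.
import math

def expected_advantage(n):
    """Expected gain from acting on a 50% + 1/sqrt(n) credence with payoff n."""
    epsilon = 1 / math.sqrt(n)   # tiny deviation from an even 50/50 split
    p = 0.5 + epsilon
    return p * n - (1 - p) * n   # = 2 * epsilon * n = 2 * sqrt(n)

for n in [10**4, 10**8, 10**12]:
    print(n, expected_advantage(n))
```

So the asymmetry needed to swamp the decision shrinks toward zero as the stakes grow, which is the reply's objection: you cannot honestly claim your credences are balanced to that precision.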