Pascal’s-wager-type arguments fail due to their symmetry (which is preserved in finite cases).
Even if our priors are symmetric for equally complex religious hypotheses, our posteriors almost certainly won’t be. There’s too much evidence in the world, and too many strong claims about these matters, for me to imagine that the posteriors would come out even. Besides, even if two religions are equally probable, there may well be non-epistemic reasons to prefer one over the other.
However, if after chugging through the math it still didn’t balance out, and the expected disutility from the existence of the threat remained greater, then perhaps allowing oneself to be vulnerable to such threats genuinely is the correct outcome, however counterintuitive and absurd it would seem to us.
I agree. If we really trust the AI doing the computations and don’t have reason to think that it’s biased, and if the AI has considered all of the points that have been raised about the future consequences of showing oneself vulnerable to Pascalian muggings, then I feel we should go along with the AI’s conclusion. 3^^^^3 people is too many to get wrong, and if the probabilities come out asymmetric, so be it.
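To make the asymmetry concrete, here is a toy version of the expected-disutility comparison (a minimal sketch only; 3^^^^3 cannot be represented by any machine number, so the figures below are hypothetical stand-ins, not the AI’s actual computation):

```python
# A minimal sketch of the expected-disutility comparison behind a Pascalian
# threat. 3^^^^3 is far too large for any float, so DISUTILITY is a stand-in.

COMPLIANCE_COST = 5.0   # hypothetical fixed cost of giving in (utility units)
DISUTILITY = 1e300      # hypothetical stand-in for the threatened harm

def expected_disutility_of_ignoring(p_threat_real: float) -> float:
    """Expected disutility of ignoring a threat that is real with this probability."""
    return p_threat_real * DISUTILITY

# Even an astronomically small posterior probability leaves the expected
# disutility of ignoring the threat far above the fixed cost of complying:
p = 1e-250
print(expected_disutility_of_ignoring(p))                     # 1e+50
print(expected_disutility_of_ignoring(p) > COMPLIANCE_COST)   # True
```

The point is only that once the threatened disutility is astronomical, the comparison is dominated by it unless the probability shrinks at least as fast; whether to act on that is exactly what is in dispute above.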
Maybe the origin of the paradox is that we are extending the principle of maximizing expected return beyond its domain of applicability.
In addition to a frequency argument, one can in some cases make a different argument for maximizing expected value even in one-time-only scenarios. For instance, if you knew you would become a randomly selected person in the universe, and if your only goal was to avoid being murdered, then minimizing the expected number of people murdered would also minimize the probability that you personally would be murdered. Unfortunately, arguments like this assume that your utility function on outcomes takes only one of two values (“good,” i.e., not murdered, and “bad,” i.e., murdered); they don’t capture the fact that being murdered in one way may be twice as bad as being murdered in another.
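To spell out the step this relies on (a short derivation, assuming a fixed population of $N$ people, writing $M$ for the random number of people murdered, and assuming your identity is selected uniformly and independently of who gets murdered):

$$\Pr[\text{you are murdered}] \;=\; \frac{1}{N}\sum_{i=1}^{N}\Pr[\text{person } i \text{ is murdered}] \;=\; \frac{\mathbb{E}[M]}{N}.$$

Since $N$ is fixed, minimizing $\mathbb{E}[M]$ minimizes your own probability of being murdered. The identity depends precisely on the binary good/bad structure of the utility function, which is why it breaks down once some murders count as worse than others.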