The problem arises only if one assumes that a model which is “not obviously wrong” shouldn’t be assigned a probability below some threshold that is independent of the model. Hence, to reconcile the two positions, one should drop this assumption. Alternatively, one may question the “not obviously wrong” part.
Remarks:
The existence of a threshold p0, a minimal probability of any statement being true, is clearly inconsistent: for any value of p0 there are more than 1/p0 mutually incompatible statements, and their probabilities would then have to sum to more than 1 (see the sketch below). That is why some qualifier such as “not obviously wrong” is added as a requirement on the statement. But our detection of obvious wrongness is far from reliable. It is far easier to generate an unlikely but good-sounding theory than to justify assigning it the low probability it deserves: the former requires making up a single theory, while the latter entails surveying all the incompatible theories from the same plausibility class. I believe that most people who accept the example argument would still find more than 1/p0 incompatible statements “not obviously wrong”, which makes such people susceptible to being Pascal-wagered.
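A minimal formalization of that counting argument, writing the threshold as p_0 and the incompatible statements as S_1, …, S_N (symbols introduced here only for illustration):

```latex
% Assume a model-independent floor: every "not obviously wrong" statement
% S_i satisfies P(S_i) >= p_0 > 0. Pick N > 1/p_0 mutually incompatible
% such statements. Since they are pairwise exclusive,
\[
  P\Bigl(\bigvee_{i=1}^{N} S_i\Bigr)
    \;=\; \sum_{i=1}^{N} P(S_i)
    \;\ge\; N p_0
    \;>\; \frac{1}{p_0}\, p_0
    \;=\; 1,
\]
% which contradicts the axiom that no event has probability greater than 1.
```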
I reject the example argument because I don’t care a bit about simulated universes, and I don’t even feel a need to pretend that I do (even on LW). But I could, in principle, be Pascal-wagered by some other, similar argument. For example, I would care about hell, if it existed.
It seems to me that the only reliable defense against Pascal wagers is either to have a bounded utility function or to let the utility influence the probability estimates. Bounded utility sounds less weird.
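A sketch of why the bound helps, with U for the promised payoff and U_max for the bound (both symbols are mine, not from the original argument): with unbounded utility the wagerer can always name a payoff large enough to swamp any small probability, whereas a bound caps the wager’s contribution to expected utility.

```latex
% Unbounded utility: for any probability p > 0 and any competing expected
% utility c, a promised payoff U > c/p makes the wager dominate:
\[
  \forall p > 0 \;\; \forall c \;\; \exists U : \quad p \cdot U > c .
\]
% Bounded utility |U| <= U_max: the wager's expected-utility contribution
% vanishes together with its probability:
\[
  p \cdot U_{\max} \;\longrightarrow\; 0
  \quad \text{as} \quad p \to 0 .
\]
```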
Replies to questions:
Yes.
Yes.