I think it’s also a case of us (or at least me) not yet being convinced that the probability is ≤ 10^-6, especially with something as uncertain as this. My credence in such a scenario has also decreased a fair bit over the course of this thread, but I remain unconvinced overall.
And even then, 1 in a million isn’t *that* unlikely; it’s massive compared to the probability that the mugger actually is God. I’m not entirely sure how low the probability would have to be before I’d dismiss it as “Pascalian”, but 1 in a million still feels far too high.
If a mugger actually came up to me and said “I am God and will torture 3^^^3 people unless you pay me $5”, and you then forced me to put a probability on that claim, I would in fact say something like 1 in a million. I still wouldn’t pay the mugger.
Like, can I actually make a million statements of the same type as that one, and be correct about all but one of them? It’s hard to get that kind of accuracy.
(Here I’m trying to be calibrated with my probabilities, as opposed to saying the thing that would reflect my decision process under expected utility maximization.)
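(To spell out the tension: under naive expected utility maximization, even a 10^-6 probability is swamped by the stakes. As a rough sketch, if we treat each averted torture as one util and write u($5) for the trivial cost of paying, then

$$\mathbb{E}[U(\text{pay})] - \mathbb{E}[U(\text{refuse})] \approx 10^{-6} \cdot 3\uparrow\uparrow\uparrow 3 - u(\$5) \gg 0,$$

since 3^^^3 dwarfs 10^6 by a margin no physical quantity approaches. So giving a calibrated answer of “1 in a million” while still refusing to pay means my actual decision procedure is something other than naive expected utility maximization.)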
The mugger scenario triggers strong game-theoretic intuitions (e.g. “it’s bad to be the sort of agent that other agents can benefit from threatening”) and the corresponding evolved decision-making processes. So when reasoning about scenarios that do not involve game-theoretic dynamics (as is the case here), it may be better to use other analogies.
(For the same reason, “Pascal’s mugging” is IMO a bad name for that concept, and “finite Pascal’s wager” would have been better.)
I’d do the same thing for the version about religion (infinite utility from heaven / infinite disutility from hell), where I’m not being exploited; I simply have different beliefs from the person making the argument.
(Note also that the non-exploitability argument isn’t sufficient on its own.)