First, I didn't read all of the above comments, though I read a large part of them.
Regarding the intuition that makes one question Pascal's mugging: I think it is likely that there was strong survival value, in the ancestral environment, in being able to detect and disregard statements that would cause you to pay money to someone else without any way to check whether those statements were true. Anyone without that ability would have been mugged to extinction long ago. This makes more sense if we regard our built-in utility function as having originated as a /very/ coarse approximation of our genes' survival fitness.
Regarding what the FAI is to do, I think the mistake being made is to assume that the prior utility of performing ritual X is exactly zero, so that a very small change in our probabilities would make the expected utility of X positive. (Here X is "give the Pascal mugger the money".)
A sufficiently smart FAI would have thought about the possibility of being Pascal-mugged long before it actually happens, and would in fact consider it likely to happen occasionally. I am not saying that an actual mugging provides no sliver of evidence in favor of the mugger telling the truth, but that sliver is very tiny. The FAI would (assuming it had enough resources) compute, for every possible Matrix scenario, the appropriate probabilities and utilities for every possible action, taking each scenario's complexity into account. There is no reason to assume the prior expected utility of any religious ritual (such as paying Pascal muggers, whose statements you can't check) is exactly zero. Maybe the FAI finds that there is a sufficiently simple scenario in which a god exists and in which worshipping that god has extremely high utility, more so than under any alternative scenario. Or one in which it should give in to (specific forms of) Pascal's mugging.
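To make that concrete, here is a toy sketch in Python of what a complexity-weighted expected-utility calculation over Matrix scenarios could look like. The scenario list, the complexity figures, and the utilities are all made up for illustration; the only point is that neither action comes out at exactly zero expected utility.

    # Toy sketch only: expected utility under a complexity-weighted prior over
    # a handful of made-up "Matrix scenarios". None of these numbers is meant
    # to resemble what a real FAI would compute.
    scenarios = [
        # (description, complexity proxy in bits, utility if we pay, utility if we don't)
        ("no outside-Matrix deity",            10,   -5.0,   0.0),
        ("deity kills many if I don't pay",    60,   -5.0,  -1e9),
        ("deity kills many if I do pay",       61,  -1e9,    0.0),
    ]

    def prior(bits):
        # Solomonoff-style prior weight: 2^-K for a scenario describable in K bits
        return 2.0 ** -bits

    norm = sum(prior(bits) for _, bits, _, _ in scenarios)

    def expected_utility(column):
        # column 2 = utility if we pay, column 3 = utility if we don't
        return sum(prior(row[1]) / norm * row[column] for row in scenarios)

    print("E[U | pay]       =", expected_utility(2))
    print("E[U | don't pay] =", expected_utility(3))

Under these assumed numbers both expected utilities are nonzero, so the question is never "does a sliver of evidence push X above zero" but "does it change which action already comes out ahead".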
However, the problem as presented in this blog post implicitly assumes that the prior probabilities the FAI holds are such that the tiny sliver of probability provided by one more instance of Pascal's mugging is enough to push the probability of the scenario 'extra-Matrix deity kills lots of people if I don't pay' over that of 'extra-Matrix deity kills lots of people if I do pay'. Since these two scenarios need not have exactly the same Kolmogorov complexity, this is unlikely.
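As a toy illustration with assumed numbers: if the two scenarios differ by even one bit of Kolmogorov complexity, their priors differ by a factor of two, and a likelihood ratio as tiny as the one an actual mugging provides cannot flip which of them is more probable.

    # Assumed numbers, for illustration only.
    prior_kill_if_no_pay = 2.0 ** -61   # 'deity kills lots of people if I don't pay'
    prior_kill_if_pay    = 2.0 ** -60   # 'deity kills lots of people if I do pay'

    # Assumed likelihood ratio: observing a real mugging is only marginally more
    # probable under the first scenario than under the second.
    likelihood_ratio = 1.0001

    posterior_odds = (prior_kill_if_no_pay / prior_kill_if_pay) * likelihood_ratio
    print(posterior_odds)   # ~0.50005: the one-bit prior gap dominates, the ordering doesn't flip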
In short, either the FAI is already religious (its religion may include rituals such as 'give money to people who speak a certain passphrase'), or it is not; either way, an actual instance of Pascal's mugging is unlikely to change its beliefs.
Now the question becomes whether we should accept the FAI doing things that are expected to favor a huge number of extra-Matrix people at a cost to a smaller number of inside-Matrix people. If we actually count every human life as equal, and we accept what Bayesian probability theory with a Solomonoff prior has to say about huge-payoff, tiny-probability events and Dutch books, then the FAI's choice of religion would be the rational thing to do. Otherwise, we could add a term to the AI's utility function to favor inside-Matrix people over outside-Matrix people, or we could make it favor certainty (of benefiting people known to actually exist) over uncertainty (about outside-Matrix people not known to actually exist). A minimal sketch of those two patches follows.
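For concreteness, here is what the two patches might look like; the weight values, the function names, and the quadratic certainty penalty are assumptions chosen for illustration, not proposals for actual parameters.

    # Patch 1 (assumed weights): an explicit term favoring inside-Matrix people.
    INSIDE_MATRIX_WEIGHT  = 1.0
    OUTSIDE_MATRIX_WEIGHT = 1e-6

    def patched_utility(inside_people_helped, outside_people_helped):
        return (INSIDE_MATRIX_WEIGHT * inside_people_helped
                + OUTSIDE_MATRIX_WEIGHT * outside_people_helped)

    # Patch 2 (assumed form): favor certainty by penalizing benefits to people
    # whose existence is uncertain more than linearly in that uncertainty.
    def certainty_weighted_utility(people_helped, p_they_exist):
        return people_helped * p_they_exist ** 2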