This is one of those cases where it helps to be a human: we’re dumb enough that we can’t possibly calculate the true probabilities involved, so by human standards the expected utilities sum to zero in any reasonable approximation of the situation.
Unfortunately, a superintelligent AI could calculate something like this far more precisely, and while a .0000000000000001 probability might round down to 0 for us lowly humans, an AI wouldn’t round it down. (After all, why should it? Unlike us, it has no reason to doubt its capacity for calculation.) And with enormous utilities like 3^^^^3, even a .0000000000000001 difference in probability is too much. The problem isn’t with us, directly, but with the actions a hypothetical AI agent might take. We certainly don’t want our newly-built FAI to suddenly decide to devote all of humanity’s resources to serving the first person who comes up with the bright idea of Pascal’s Mugging it.
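To make the arithmetic concrete, here’s a minimal sketch in Python. All the numbers are illustrative: a googol stands in for 3^^^^3 (which is far too large to ever represent), and the cost of paying the mugger is a hypothetical 5 units of utility. The point is just that an expected-utility maximizer doing exact arithmetic, with no rounding of tiny probabilities, ends up paying:

```python
from fractions import Fraction

# Illustrative stand-ins; 3^^^^3 itself cannot be computed, so a googol plays its role.
p = Fraction(1, 10**16)   # the .0000000000000001 probability from the text
U = 10**100               # stand-in for the mugger's promised 3^^^^3 utility
cost = 5                  # hypothetical cost of handing over what the mugger asks

ev_pay = p * U - cost     # exact arithmetic: the tiny probability never rounds to zero
ev_refuse = 0

print(ev_pay > ev_refuse)  # True -- the huge payoff swamps the tiny probability,
                           # so a naive unbounded-utility maximizer pays up
```

A human effectively sets p to 0 and refuses; the exact calculator cannot, and no matter how many more zeros you prepend to p, the mugger only has to name a bigger number.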