> if for some reason we’re taking absurdly low-probability hypotheses into account
Generally you weigh each hypothesis by its probability times its utility, so it seems reasonable to take absurdly low-probability hypotheses into account when the difference in utility is absurdly high. That said, refusing to consider probabilities below a given floor, regardless of utility, is a perfectly acceptable answer. I can’t assert that you take them into account any more than I can assert that you’re a utilitarian in the doctor example.
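For concreteness, here’s a minimal Python sketch of the two rules being contrasted; the hypotheses and all numbers are invented purely for illustration:

```python
# A minimal sketch of the two decision rules above (invented numbers).

def expected_utility(hypotheses):
    """Standard rule: sum of probability * utility over all hypotheses."""
    return sum(p * u for p, u in hypotheses)

def expected_utility_with_floor(hypotheses, floor=1e-10):
    """Variant rule: ignore any hypothesis below a probability floor,
    no matter how large its utility is."""
    return sum(p * u for p, u in hypotheses if p >= floor)

# A tiny-probability, huge-utility hypothesis dominates the standard rule
# but is discarded entirely by the floored rule.
hypotheses = [(1e-12, 1e20), (0.999, 1.0)]
print(expected_utility(hypotheses))             # ~1e8: the long shot dominates
print(expected_utility_with_floor(hypotheses))  # ~0.999: the long shot is ignored
```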
> the idea that religion will prevent us from using the Force to live forever seems more likely to me than any deity who could offer us eternity.
I don’t know whether the Force counts as a religion, but even if it doesn’t, there are a few things that are not religions that would work. You are still missing the point, though. Let’s say that Omega also gives an upper bound for the absolute value of the utility you will have if Catholicism isn’t true.
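To make that stipulation concrete, here is a toy calculation; every quantity is invented, and `U_MAX` simply stands for the bound Omega announces:

```python
# Sketch of the bounded-alternatives version of the wager (invented numbers).
# Assumption: Omega bounds |utility| at U_MAX in every world where
# Catholicism is false, while utility if it is true is unbounded above.

U_MAX = 1e15          # Omega's bound on |utility| if Catholicism is false
p = 1e-30             # an "absurdly low" probability that Catholicism is true
u_if_true = 1e50      # stipulated (arbitrarily large) utility if it is true

# Worst case for the wager: lose the maximum possible utility elsewhere.
eu_wager = p * u_if_true - (1 - p) * U_MAX
print(eu_wager > 0)   # True: once utility elsewhere is bounded, a large
                      # enough u_if_true dominates for any p > 0
```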
I know you’ve seen the Pascal’s Mugging problem; that’s what I meant to refer to. An upper bound on utility elsewhere doesn’t matter if P(Catholicism) gets a sufficient leverage penalty (and the same again for all stronger claims). Are you saying that, according to Omega, Hansonian leverage penalties are unsalvageable and this upper bound is the solution? (On its face, the claim “Catholicism is true” does not logically rule out the Mugger’s claim, but of course we could go further.) I’d be more skeptical about this than I would be if Omega told me that P=NP and that self-modifying AI is impossible by Gödel’s incompleteness theorem. But of course, if I accepted it, this would change the equation.
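For contrast with the upper-bound approach, here is a toy sketch of the leverage-penalty idea; this is my own simplified formulation for illustration, not a canonical statement of Hansonian penalties:

```python
# Toy leverage penalty (simplified formulation, not canonical): the prior
# on a hypothesis that lets you generate utility of size u is capped at
# K / u, so probability * utility stays bounded by K.

K = 1.0  # bound on probability * utility under the penalty

def leverage_penalized_p(prior, claimed_utility):
    """Cap the prior probability at K / claimed_utility."""
    return min(prior, K / claimed_utility)

# The bigger the mugger's claimed payoff, the harsher the cap, so the
# expected value of his offer never exceeds K however much he promises.
for claimed in (1e10, 1e100, 1e300):
    p = leverage_penalized_p(1e-6, claimed)
    print(p * claimed)  # always <= K
```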