The idea that someone could destroy a universe's worth of utils is more plausible than their destroying 3^^^^3 utils, and in that case it's not at all obvious that the low probability cancels out the enormous stakes.
Well, it may not be obvious what to do in that case! But the original Pascal's Mugging problem, as I understand it, was formulated as a challenge to formally explain why it is obvious in the case of numbers as large as 3^^^^3:
Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?
The answer proposed here is that a “friendly” utility function does not in fact allow utility to increase faster than complexity increases.
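To make the tension concrete, here is a minimal sketch (my own formalization, not from the original post), assuming a Solomonoff-style prior in which a hypothesis of description-length complexity K gets probability roughly 2^-K:

```latex
% Sketch under the stated assumptions; the constant C and the exact form of the
% bound are illustrative, not the original author's formulation.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

Let $K(h)$ be the description-length complexity of hypothesis $h$, with prior
$p(h) \approx 2^{-K(h)}$. Take $h_n$ to be ``paying the mugger saves $n$ lives,''
so $U(h_n) = n$. One term of the expected-utility sum is then
\[
  p(h_n)\,U(h_n) \;\approx\; 2^{-K(h_n)}\, n .
\]
Because $n = 3\uparrow\uparrow\uparrow\uparrow 3$ has a very short description,
$K(h_n)$ stays small while $n$ explodes, so this single term dominates the sum.
The proposed constraint caps utility by complexity,
\[
  U(h) \;\le\; C \cdot 2^{K(h)}
  \quad\Longrightarrow\quad
  2^{-K(h)}\,U(h) \;\le\; C ,
\]
so no low-complexity hypothesis can contribute unboundedly large expected utility.

\end{document}
```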
I don’t claim this tells us what to do about the LHC.