As I understand it, komponisto’s idea is that we don’t have to worry about Pascal’s Mugging because the probability of anyone being able to control 3^^^^3 utils is even lower than one would expect simply from looking at the number 3^^^^3, and is therefore low enough to cancel out even this large a number.
What I am trying to say in response is that there are formulations of Pascal’s Mugging which do not depend on the number 3^^^^3. The idea that someone could destroy a universe’s worth of utils is more plausible than the idea that they could destroy 3^^^^3 utils, and it’s not at all obvious in that case that the low probability cancels out the enormous stakes.
Well, it may not be obvious what to do in that case! But the point of the original formulation of the Pascal’s Mugging problem, as I understand it, was to explain formally why it is obvious in the case of large numbers like 3^^^^3:
Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?
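To make the quoted premise concrete: a program implementing Knuth’s up-arrow notation is only a few lines long, so the algorithmic complexity of a claim like “3^^^^3 utils” is tiny, while the magnitude it denotes is astronomical. Here is a minimal Python sketch (the function name up_arrow is mine, purely for illustration):

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow operator with n arrows.

    One arrow is ordinary exponentiation; each additional arrow
    iterates the previous level b - 1 times, so values explode
    far faster than the length of the expression describing them.
    """
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 and 3^^^^3 are already far too large to evaluate, yet
# "up_arrow(3, 4, 3)" costs only a handful of characters to write down.
```

This is exactly the asymmetry the question points at: description length, and hence an Occam prior’s complexity penalty, grows slowly while the claimed utility grows without bound.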
The answer proposed here is that a “friendly” utility function does not in fact allow utility to increase faster than complexity increases.
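Here is a hedged sketch of that asymmetry and of one crude way to enforce the proposed answer, working in log space to avoid overflow. The 20-bit overhead and the hard cap BOUND_LOG2 are made-up illustrative numbers, and a hard cap is only one blunt way to keep utility from outgrowing complexity, not necessarily the post’s exact proposal:

```python
import math

def log2_expected_utility(log2_utils: float, description_bits: float) -> float:
    # Occam/Solomonoff-style prior of 2^-K times the claimed utility,
    # everything kept in log2 to avoid overflow.
    return log2_utils - description_bits

# A claim of 10^k utils takes roughly log2(k) bits to state, plus some
# constant overhead (the 20 bits here is a made-up illustrative constant).
for k in (10, 100, 1000):
    bits = math.log2(k) + 20
    print(k, log2_expected_utility(k * math.log2(10), bits))
# The printed log-expected-utility grows without bound: the mugging.

BOUND_LOG2 = 50  # hypothetical cap on achievable utility (in log2 utils)
for k in (10, 100, 1000):
    bits = math.log2(k) + 20
    print(k, log2_expected_utility(min(k * math.log2(10), BOUND_LOG2), bits))
# Once the cap binds, expected utility falls as the claim gets grander.
```

With an unbounded utility function the first loop’s contributions dominate any decision; with a utility function that cannot outrun the complexity penalty, as in the second loop, grander claims eventually contribute less, not more.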
I don’t claim this tells us what to do about the LHC.