Yvain: I can’t, really, because it’s context-dependent. If the question were “What is the probability that a program which selects one atom at random from all those in the universe (and is guaranteed by Omega to be genuinely random) picks this particular phosphorus atom here on the tip of my finger”, then my probability would be much less than 1/3E22.
Likewise, “destroy the Earth” is a relatively simple occurrence: it just needs a big enough burst of energy or mass or something. If it’s “What is the probability that the LHC will create a hamster in a tutu on top of Big Ben at noon on Christmas Day, singing ‘Greensleeves’ while fighting a lightsaber duel with the ghost of Alexander the Great”, then my probability would again be less than 1/3E22 (at least before I formed this thought; I don’t know whether having said it aloud pushes the probability that malevolent aliens will enact it above 1/3E22 or not).
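For a sense of scale, here is a back-of-the-envelope check of why that first probability sits so far below 1/3E22. The figure of roughly 10^80 atoms in the observable universe is a commonly cited order-of-magnitude estimate, assumed here for illustration; it does not come from this exchange.

```python
# Rough order-of-magnitude check. The 1e80 atom count is a commonly
# cited estimate for the observable universe, assumed for illustration.
atoms_in_universe = 1e80
p_specific_atom = 1 / atoms_in_universe  # ~1e-80: chance of picking one named atom
threshold = 1 / 3e22                     # ~3.3e-23: the 1/3E22 figure from the thread

print(p_specific_atom < threshold)  # True, by roughly 57 orders of magnitude
```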
komponisto: Thanks for the clarification; that’s quite reasonable.
I’ll note, however, that your own arguments (the world’s greatest physicist certified sane by the world’s greatest psychiatrist...) still apply!
The point being that our “counterintuitiveness detector” shouldn’t get to automatically override calculated probabilities, especially in situations that intuition wasn’t designed to handle.
As for the LHC, it’s worth pointing out that potential benefits also have to be factored into the expected utility calculation, a fact which I don’t think I’ve seen mentioned in the current discussion.
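To make that concrete, here is a minimal expected-utility sketch. Every number below is an invented placeholder for illustration; none of these probabilities or utilities are claims about the actual LHC or figures from the thread.

```python
# Minimal expected-utility sketch. All numbers are invented
# placeholders; none are claims about the actual LHC.
p_doom = 1e-22       # assumed probability of a world-ending accident
u_doom = -1e10       # assumed disutility of losing the Earth (arbitrary units)
p_discovery = 0.5    # assumed chance of major physics discoveries
u_discovery = 1e6    # assumed utility of those discoveries (same units)

expected_utility = p_doom * u_doom + p_discovery * u_discovery
print(expected_utility)  # ~5e5: the benefit term dominates under these assumptions
```

The point of the sketch is just the structure of the calculation: the catastrophic term is a tiny probability multiplied by a huge disutility, so the result is extremely sensitive to how much you trust that tiny probability.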
Yvain: [...] “What is the probability that the LHC will create a hamster in a tutu on top of Big Ben at noon on Christmas Day, singing ‘Greensleeves’ while fighting a lightsaber duel with the ghost of Alexander the Great”, then my probability would again be less than 1/3E22 (at least before I formed this thought; I don’t know whether having said it aloud pushes the probability that malevolent aliens will enact it above 1/3E22 or not).
komponisto: Thanks for the clarification; that’s quite reasonable.
^Awesome :-)