Oops, I fail! I thought F >= S meant “F is larger than S”. But looking at the definitions of the terms, Fail >= Survival must mean “Fail is a subset of Survival”. (I do protest that this is an odd symbol to use.)
Okay, looking back at the original argument, and going back to definitions...
If you’ve got two sets of universes side-by-side, one where the LHC destroys the world, and one where it doesn’t, then indeed observing a long string of failures doesn’t help tell you which universe you’re in. However, after a while, nearly all the observers will be concentrated into the non-dangerous universe. In other words, if you’re going to start running the LHC, then, conditioning on your own survival, you are nearly certain to be in the non-dangerous universe. Then further conditioning on the long string of failures, you are equally likely to be in either universe. If you start out by conditioning on the long string of failures, then conditioning on your own survival indeed doesn’t tell you anything more.
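To make that concrete, here is a minimal branch-counting sketch in Python, assuming the same illustrative figures used in the list further down: a 50% prior on Danger and a 1% chance of the whole string of failures happening for mundane technical reasons.

```python
# A rough branch-count of the "two sets of universes" picture above.
# Illustrative numbers: 50% prior on Danger, 1% chance of the whole
# string of failures happening for mundane technical reasons.
p_danger = 0.5
p_mundane_fail = 0.01

# Weight of each kind of branch that still contains observers:
danger_and_fail = p_danger * p_mundane_fail              # dangerous world, LHC failed anyway
safe_and_fail = (1 - p_danger) * p_mundane_fail          # safe world, LHC failed
safe_and_worked = (1 - p_danger) * (1 - p_mundane_fail)  # safe world, LHC ran fine
# (Dangerous world, LHC worked) contains no observers, so it drops out.

surviving = danger_and_fail + safe_and_fail + safe_and_worked

print(danger_and_fail / surviving)                          # P(Danger | Survival) ~ 0.01
print(danger_and_fail / (danger_and_fail + safe_and_fail))  # P(Danger | Survival, Failure) = 0.5
```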
But under anthropic reasoning, the argument doesn’t play out like this; the way anthropic reasoning works, particularly under the Quantum Suicide or Quantum Immortality versions, is something along the lines of, “You are never surprised by your own survival”.
From the above, we can see that we need something like:
Initial probability of Danger: 50%
Initial probability of subjective Survival: 100%
Probability of Failure given Danger and Survival: 100%
Probability of Failure given ~Danger and Survival: 1%
Probability of Danger given Survival and Failure: ~99%
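Running those numbers through Bayes’ theorem, with everything conditioned on subjective survival from the outset, gives that posterior directly; a quick sketch:

```python
# Bayes' theorem on the anthropic numbers above, with everything
# already conditioned on subjective survival.
p_danger = 0.5             # initial probability of Danger
p_fail_given_danger = 1.0  # P(Failure | Danger, Survival)
p_fail_given_safe = 0.01   # P(Failure | ~Danger, Survival)

p_fail = p_fail_given_danger * p_danger + p_fail_given_safe * (1 - p_danger)
print(p_fail_given_danger * p_danger / p_fail)  # P(Danger | Survival, Failure) ~ 0.99
```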
So, to comment through Simon’s logic vs. anthropic logic step by step (Simon’s steps quoted, with my reply after each):
First thing to note is that since F ⇒ S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)
Still holds, technically true.
Bayes:
P(W|F) = P(F|W)P(W)/P(F)
Still technically true; but once you condition on survival, as anthropics does in effect require, then P(Fail|Danger) is very high.
Note that none of these probabilities are conditional on survival. So unless in the absence of any selection effects the probability of failure still depends on whether the LHC would destroy Earth, P(F|W) = P(F), and thus P(W|F) = P(W).
Here is where we depart from anthropic reasoning. As you might expect, quantum suicide says that P(Fail|Danger) != P(Fail). That’s the whole point of raising the possibility of, “given that the LHC might destroy the world, how unusual that it seems to have failed 50 times in a row”.
In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.
...but as stated originally, conditioning on the existence of “observers” is what anthropics is all about. It’s not that we’re substituting, but just that all our calculations were conditioned on survival in the first place.
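To pin down exactly where the two calculations part ways, here is one more sketch with the same illustrative figures: both pictures agree that P(Fail|Danger, Survival) = 1, and the whole disagreement is over whether survival itself is evidence, i.e. over P(Danger|Survival).

```python
# Danger = "the LHC would destroy the world", Fail = the string of failures,
# Survival = we are still here. Same illustrative numbers as above.
p_danger = 0.5
p_mundane_fail = 0.01
p_fail_given_danger_and_survival = 1.0  # both pictures agree: a dangerous LHC we survived must have failed

def posterior_danger(p_danger_given_survival):
    """P(Danger | Fail, Survival) by Bayes, for a given value of P(Danger | Survival)."""
    p_fail_given_survival = (p_fail_given_danger_and_survival * p_danger_given_survival
                             + p_mundane_fail * (1 - p_danger_given_survival))
    return p_fail_given_danger_and_survival * p_danger_given_survival / p_fail_given_survival

# Branch-counting: safe worlds keep their observers whether the LHC fails or works,
# dangerous worlds keep them only on failure, so P(Danger|Survival) ~ 1%.
surviving_danger = p_danger * p_mundane_fail
surviving_safe = (1 - p_danger)
p_danger_given_survival_counting = surviving_danger / (surviving_danger + surviving_safe)

print(posterior_danger(p_danger_given_survival_counting))  # ~0.5: branch-counting, agreeing with Simon's P(W|F) = P(W)
print(posterior_danger(p_danger))                          # ~0.99: anthropic, where survival itself is no surprise and no evidence
```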