> Note that if there is a 99% chance of annihilation, there is a guaranteed 1% of worlds with survivors.
This is an excellent example of the kind of thing I am complaining about—provided that by “worlds” the author means Everett branches. (Consequently, I am upvoting it and disagreeing with it.)
Briefly, the error is to assume that all our uncertainty is uncertainty over which future Everett branch we will find ourselves in, ignoring our uncertainty about the outcomes of deterministic processes that have already been set in motion.
Actually, I can say a little more: there is some chance humanity will be annihilated in every future Everett branch, some chance humanity will survive AI research in every future branch, and some chance the outcome depends on the branch.
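To make that concrete, here is the decomposition in symbols. The labels A, S, B and the numbers below are mine, purely for illustration, not estimates from the parent comment:

```latex
% A minimal sketch of the three-way decomposition; the labels and the
% 0.99 figure are illustrative assumptions, not anyone's actual credences.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $A$ be ``humanity is annihilated in every future branch,''
$S$ ``humanity survives in every future branch,'' and
$B$ ``the outcome depends on the branch.'' The cases are exhaustive:
\[
  \Pr(A) + \Pr(S) + \Pr(B) = 1 .
\]
Survivors exist in \emph{some} branch exactly when $A$ fails, so
\[
  \Pr(\text{survivors in some branch}) = \Pr(S) + \Pr(B) = 1 - \Pr(A).
\]
If the stated 99\% doom credence is mostly uncertainty about deterministic
processes already in motion, say $\Pr(A) = 0.99$ and $\Pr(B) = 0$, then
\[
  \Pr(\text{survivors in some branch}) = 0.01 ,
\]
i.e.\ with probability $0.99$ no branch contains survivors at all.
\end{document}
```

In other words, a 99% credence in doom is compatible with a 99% chance that no surviving world exists anywhere; the "guaranteed 1% of worlds" only follows if all of that 99% is quantum uncertainty over branches.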