Specifically: although it is certainly possible that we will emerge relatively unscathed from the present dangerous situation caused by AI research, that does not mean that if things go badly for us, there will be any descendant branches of our branch in which even a single human survives.
There won’t be any iff there is a 100.0000% probability of annihilation. That is higher than EY’s estimate. Note that if there is a 99% chance of annihilation, there is a guaranteed 1% of worlds with survivors.
Bayesian probability (which is the kind Yudkowsky is using when he gives the probability of AI doom) is subjective, referring to one’s degree of belief in a proposition, and cannot be 0% or 100%. If you’re using probability to refer to the objective proportion of future Everett branches in which something occurs, you are using it in a very different way from most people, and probabilities in that system cannot be compared to Yudkowsky’s probabilities.
If you’re talking about Everett branches, you are talking about objective probability. What I am talking about doesn’t come into it, because I don’t use “Everett branch” to mean “probable outcome”.
What are you talking about, then? It seems like you’re talking about probabilities as the objective proportion of worlds something happens in, under some sort of multiverse theory, even if it’s not the Everett multiverse. And when you said “There won’t be any iff there is a 100.0000% probability of annihilation”, you were replying to a comment about whether there will be any Everett branches where humans survive, so it was reasonable for me to think you were talking about Everett branches.

If I’m not talking about objective probabilities, I’m talking about subjective probabilities. Or both.
Note that if there is a 99% chance of annihilation, there is a guaranteed 1% of worlds with survivors.
This is an excellent example of the kind of thing I am complaining about—provided that by “worlds” the author means Everett branches. (Consequently, I am upvoting it and disagreeing with it.)
Briefly, the error is incorrectly assuming that all our uncertainty is uncertainty over which future Everett branch we will find ourselves in, ignoring our uncertainty over the outcome of deterministic processes that have already been set in motion.
Actually, I can say a little more: there is some chance humanity will be annihilated in every future Everett branch, some chance humanity will survive AI research in every future branch and some chance the outcome depends on the branch.
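To make the distinction drawn in the last two comments concrete, here is a toy calculation. It is my own illustration with entirely made-up numbers, not a model either commenter proposed. It separates subjective credence about a deterministic fact (which is the same in every branch) from the objective fraction of branches with a given outcome, and shows that a roughly 99% subjective probability of annihilation is compatible with only a small credence that any branch at all contains survivors.

```python
# Toy sketch (hypothetical numbers only): subjective credence over a deterministic
# fact vs. the objective fraction of future Everett branches with survivors.
import random

random.seed(0)

# Made-up credences over an already-settled deterministic fact, identical in every branch:
#   "doom_everywhere":   humanity is annihilated in essentially every branch
#   "safe_everywhere":   humanity survives in essentially every branch
#   "branch_dependent":  the outcome varies from branch to branch
CREDENCES = {
    "doom_everywhere": 0.90,
    "safe_everywhere": 0.005,
    "branch_dependent": 0.095,
}

# If the outcome is branch-dependent, assume (arbitrarily) survivors in 10% of branches.
BRANCH_SURVIVAL_FRACTION = 0.10

def sample_world():
    """Sample one epistemically possible world; return the fraction of its
    future branches that contain survivors."""
    r = random.random()
    if r < CREDENCES["doom_everywhere"]:
        return 0.0
    elif r < CREDENCES["doom_everywhere"] + CREDENCES["safe_everywhere"]:
        return 1.0
    else:
        return BRANCH_SURVIVAL_FRACTION

N = 100_000
fractions = [sample_world() for _ in range(N)]

# Credence-weighted probability of annihilation, expected fraction of surviving
# branches, and credence that at least one branch contains survivors.
subjective_p_doom = sum(1 - f for f in fractions) / N
expected_branch_survival = sum(fractions) / N
p_some_branch_survives = sum(f > 0 for f in fractions) / N

print(f"subjective P(annihilation):              ~{subjective_p_doom:.3f}")
print(f"expected fraction of surviving branches: ~{expected_branch_survival:.3f}")
print(f"credence that some branch has survivors: ~{p_some_branch_survives:.3f}")
```

With these made-up numbers the subjective probability of annihilation comes out near 0.99, yet the credence that even one branch contains survivors is only about 0.10, not a guaranteed 1% of worlds: the fraction of surviving branches is itself something we are uncertain about.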