Epistemic Status: First read. Moderately endorsed.
I appreciate this post and I think it’s generally good for this sort of clarification to be made.
One distinction is between dying (“extinction risk”) and having a bad future (“existential risk”). I think there’s a good chance of bad futures without extinction, e.g. that AI systems take over but don’t kill everyone.
This still seems ambiguous to me. Does “dying” here mean literally everyone? Does it mean “all animals,” “all mammals,” “all humans,” or just “most humans”? If it’s all humans dying, do all humans have to be killed by the AI? Or is it permissible that (for example) the AI leaves N people alive, and N is low enough that human extinction follows at the end of these people’s natural lifespan?
I think I understand your sentence to mean “literally zero humans exist X years after the deployment of the AI as a direct causal effect of the AI’s deployment.”
It’s possible that this specific distinction is just not a big deal, but I thought it’s worth noting.
I think these questions are all still ambiguous, just a little bit less ambiguous.
I gave a probability for “most” humans killed, and I intended P(>50% of humans killed). This is fairly close to my estimate for E[fraction of humans killed].
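To illustrate why those two quantities can be close (illustrative numbers only, not my actual estimates): if the outcome distribution is roughly bimodal, say a 60% chance that ~0% of humans are killed and a 40% chance that ~100% are killed, then P(>50% of humans killed) = 0.4 and E[fraction of humans killed] ≈ 0.4. They come apart when intermediate outcomes carry a lot of probability: a 40% chance that 30% of humans are killed (and ~0% otherwise) gives P(>50% killed) ≈ 0 but E[fraction killed] ≈ 0.12.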
I think if humans die it is very likely that many non-human animals die as well. I don’t have a strong view about the insects and really haven’t thought about it.
In the final bullet I implicitly assumed that the probability of most humans dying for non-takeover reasons shortly after building AI was very similar to the probability of human extinction. I was being imprecise; I think that’s roughly true, but I’m not sure exactly what my view is.