Something I noticed:
“Probability that most humans die because of an AI takeover: 11%” should actually read as “Probability that most humans die [within 10 years of building powerful AI] because of an AI takeover: 11%”, since it is defined as a subset of the 20% of scenarios in which “most humans die within 10 years of building powerful AI”.
This means that some unspecified probability mass within the remaining 11% of the 22% of AI takeover scenarios corresponds to a further outcome: “most humans die because of an AI takeover more than 10 years after building powerful AI”.
In other words, Paul’s P(most humans die because of an AI takeover | AI takeover) is not 11%/22% = 50%, as a quick reading of his post or a quick look at my visualization seems to imply. It is not pinned down by the stated numbers; all we can say is that it is at least 11%/22% = 50%, and strictly greater if the later-death scenario has nonzero probability.
For example, perhaps Paul thinks there is a 3% chance of an AI takeover that causes most humans to die more than 10 years after powerful AI is developed. In that case, his P(most humans die because of an AI takeover | AI takeover) would be (11% + 3%)/22% ≈ 64%.
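A minimal sketch of that arithmetic, for concreteness. The 11% and 22% figures are Paul’s; the 3% for the later-death scenario is purely hypothetical, since his post does not specify it:

```python
# Paul's stated figures
p_takeover = 0.22                   # P(AI takeover)
p_die_within_10y_and_takeover = 0.11  # P(most humans die within 10y & takeover)

# Hypothetical placeholder for the unspecified scenario:
# takeover in which most humans die more than 10 years after powerful AI
p_die_after_10y_and_takeover = 0.03

# Lower bound implied by the stated numbers alone
lower_bound = p_die_within_10y_and_takeover / p_takeover  # 0.50

# Conditional probability under the hypothetical 3% assumption
p_die_given_takeover = (p_die_within_10y_and_takeover
                        + p_die_after_10y_and_takeover) / p_takeover

print(f"lower bound: {lower_bound:.0%}")          # 50%
print(f"with 3% assumption: {p_die_given_takeover:.0%}")  # ~64%
```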
I don’t know whether Paul himself noticed this, but it seems worth flagging when revising these estimates later, or when meta-updating on them.