I think Eliezer phrases these things as “if we do X, then everybody dies” rather than “if we do X, then with substantial probability everyone dies” because it’s shorter, it’s more vivid, and the two phrasings don’t differ substantially in what they imply we need to do (i.e., make X not happen, or break the link between X and everyone dying).
It’s possible that he also thinks the probability is more like 99.99% than like 50% (e.g., because there are so many ways in which such a hypothetical AI might end up destroying approximately everything we value). But the practical consequences of “if we continue on our present trajectory, then some time in the next 3-100 years something will emerge that will certainly destroy everything we care about” and “if we continue on our present trajectory, then some time in the next 3-100 years something will emerge that with 50% probability will destroy everything we care about” don’t seem to me to be very different.