In general, I agree with you: we can’t prove with certainty that AI will kill everyone. We can only establish a significant probability (which we also can’t measure precisely).
My point is that some AI catastrophe scenarios don't require the AI to have any motivation of its own. For example:
- A human could use narrow AI to develop a biological virus
- An Earth-scale singleton AI could suffer from a catastrophic error
- An AI arms race could lead to a world war