Good points. I would imagine that all of these scenarios are made possible by an intelligence advantage, but I did not make that explicit here.
Your point about multitasking (if I understood it correctly) is important too. We can imagine an unfriendly AI pursuing all three paths to existential catastrophe simultaneously. The question then becomes: are there prevention strategies for combinations of existential-risk paths that work better than simply trying to prevent each path individually? I'll have to think on that more.