I see it differently: AI will create many new risks, not just one, much as nuclear weapons did.
I’m not sure how this is different from what the OP says. Do you have a link to the nearly 100 new risks you counted? Would you organize them into larger categories differently from how the OP does?
The difference is that the OP presents it as a problem: there are many arguments for the importance of AI safety. However, it could all be compressed into a single argument: there is a new technology (AI) that could create many different global catastrophic risks.
My list of such risks is presented in the LW post and in the map at its end.
A newer version is in the article “Classification of Global Catastrophic Risks Connected with Artificial Intelligence”; the main difference between the article and my post is that the article suggests a two-dimensional classification of such risks, based on AI power and type of agency, though the number of risks it mentions is smaller.