I see it differently: AI will create many new risks, not just one, much as nuclear weapons did. However, some of these risks are more probable and/or more significant than others. For example, a nuclear chain reaction consuming the whole Earth was the worst possible outcome; the second risk, nuclear winter, was unpredictable at the time.
AI opens a whole Pandora's box of new risks; I once counted them and came close to 100. Thus there is no single solution that covers all such risks.
I see it differently: AI will create many new risks, not just one, much as nuclear weapons did.
I’m not sure how this is different from what the OP says. Do you have a link to the nearly 100 new risks you counted? Would you organize them into larger categories differently from how the OP does?
The difference is that the OP presents it as a problem that there are many arguments for the importance of AI safety. However, they could all be compressed into one argument: there is a new technology (AI) which could create many different global catastrophic risks.
My list of such risks is presented in my LW post and in the map at its end. A newer version is in the article “Classification of Global Catastrophic Risks Connected with Artificial Intelligence”; the main difference between the article and my post is that the article suggests a two-dimensional classification of such risks, based on AI power and type of agency, though the number of risks mentioned is smaller.