The difference is that the OP presents it as a problem: there are many arguments for the importance of AI safety. However, they can all be compressed into a single argument: there is a new technology (AI) which could create many different global catastrophic risks.
My list of such risks is presented in the LW post and in the map at its end.
A newer version appears in the article “Classification of Global Catastrophic Risks Connected with Artificial Intelligence”. The main difference between the article and my post is that the article suggests a two-dimensional classification of such risks, based on AI power and type of agency, but the number of risks mentioned is smaller.