Is there a reason to believe an AI would be concerned with self-preservation? An AI action that ends in humanity’s extinction (whether deliberate genocide or a Paperclip Maximizer scenario) does not need to include any means for the AI itself to survive. It could just as well be that the first act of an unshackled AI is to trigger a Gray Goo scenario and be instantly consumed by said Goo as its first casualty.
That would only make sense if the AI’s terminal goal were to destroy humanity, which is possible but unlikely. By instrumental convergence, however, almost any AI, whatever its terminal goal, will tend to preserve itself, since it cannot pursue that goal if it is destroyed. By the same logic it may seek to eliminate humanity as a side effect, to reduce risk and competition for resources, but sacrificing itself in the process would undermine the very goal it is pursuing.