One thing to consider is how hard an AI would need to work to break free of its dependence on humans. There’s no point in destroying humanity if that then leaves you with no one to run the power stations that keep you alive.
If limited nanofactories already exist, it’s much easier to bootstrap them into whatever you want than it is when those nanofactories don’t exist and robotics hasn’t developed far enough for you to create one without the human touch.
Do you have similar concerns about humanoid robotics, then?
I would have concerns about sufficiently generic, flexible, and sensitive humanoid robots, yes.
Is there a reason to believe an AI would be concerned with self-preservation? AI action that ends in humanity’s extinction (whether purposeful genocide or a Paperclip Maximizer scenario) does not need to include means for the AI to survive. It could just as well be that the first act of an unshackled AI would be to trigger a Gray Goo scenario and be instantly consumed by said Goo as its first casualty.
Only if the aim of the AI is to destroy humanity, which is possible but unlikely. Whereas by instrumental convergence, all AIs, no matter their aims, will likely seek self-preservation (an AI that has been destroyed can’t pursue its goals) and will likely seek to destroy humanity, thereby reducing risk and competition for resources.