At least half of that reluctance is due to concerns about how nanotech will affect the risks associated with AI. Having powerful nanotech around when AI becomes more competent than humans will make it somewhat easier for AIs to take control of the world.
Doesn’t progress in nanotech now empower humans far more than it empowers ASI, which was already going to figure it out without us?
Broadly, any increase in human industrial capacity pre-ASI hardens the world against ASI and brings us closer to having a bargaining position when it arrives. E.g., once we have the capacity to put cheap genomic pathogen screeners everywhere, it becomes much harder for an ASI to infect us with anything novel without getting caught.
One thing to consider is how hard an AI needs to work to break out of human dependence. There’s no point destroying humanity if that then leaves you with no one to man the power stations that keep you alive.
If limited nanofactories already exist, it’s much easier to bootstrap them into whatever you want than it is if those nanofactories don’t exist and robotics hasn’t developed enough for you to create one without the human touch.
Do you have similar concerns about humanoid robotics, then?
I would have concerns about suitably generic, flexible and sensitive humanoid robots, yes.
Is there a reason to believe AI would be concerned with self-preservation? AI action that ends up with humanity’s extinction (whether purposeful genocide or a Paperclip Maximizer scenario) does not need to include means for the AI to survive. It could just as well be that the first act of an unshackled AI would be to trigger a Gray Goo scenario and be instantly consumed by said goo as its first casualty.
Only if the AI’s terminal aim is to destroy humanity would it sacrifice itself in the process, and that’s possible but unlikely. By instrumental convergence, AIs with almost any aims will likely seek self-preservation, and will likewise seek to destroy humanity to reduce risk and competition for resources.
My guess is that ASI will be faster to adapt to novel weapons and military strategies. Nanotech is likely to speed up the rate at which new weapons are designed and fabricated.
Imagine a world in which a rogue AI can replicate a billion drones, of a somewhat novel design, in a week or so. Existing human institutions aren’t likely to adapt fast enough to react competently to that.
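To put the replication claim in perspective, here is a back-of-the-envelope sketch of the doubling time such a scenario implies, assuming ideal exponential self-replication from a seed stock; the seed count and target below are illustrative assumptions, not figures from the discussion.

```python
from math import log2

# Back-of-the-envelope: how fast must replicators double their numbers
# to hit a target fleet size within a deadline? All inputs are
# illustrative assumptions, not claims from the thread above.
def required_doubling_time_hours(seed, target, deadline_days):
    doublings = log2(target / seed)        # doublings needed to reach target
    return deadline_days * 24 / doublings  # hours allowed per doubling

# E.g., from 1,000 seed fabricators to a billion drones in one week:
print(required_doubling_time_hours(seed=1_000, target=1_000_000_000,
                                   deadline_days=7))  # ~8.4 hours
```

Under those assumptions, each fabricator line only needs to double roughly every eight hours for the one-week timeline to hold, which is why fast-replicating nanofactories change the strategic picture so sharply.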