I was treating it as one of the premises of the discussion that the AI is at least indifferent to doing so: it needs only enough infrastructure left to continue to exist and to rebuild under its own total control.
How many humans does it take to keep running the infrastructure necessary to create new and better CPUs, etc.? I am highly confident that it takes more than the random patches of civilization left over after a bioweapon has been deployed on a global scale.
Surely we can imagine a science-fiction world in which the AI has access to nanoassemblers, or in which the world's infrastructure is maintained by robot drones. But then, what do we have? A completely artificial scenario designed to yield the desired conclusion: an AI with some set of vague abilities, and circumstances under which those abilities suffice to take over the world.
As I have written several times in the past: if your AI requires nanotechnology, bioweapons, or a fragile world, then superhuman AI is the least of our worries, because long before we create it, the tools necessary to create it will allow unfriendly humans to do the same.
Bioweapons: If an AI can use bioweapons to blackmail the world into submission, then some group of people will be able to do the same before that AI is created (by dispatching members to random places around the world).
Nanotechnology: It seems likely to me that narrow-AI precursors will suffice for humans to create nanotechnology, which makes it a distinct risk in its own right.
A fragile world: I suspect that plenty of devastating cyber-attacks and cyber-wars will be fought before the first general AI capable of waging them exists, and that governments will realize their most important counterstrike resources need to be kept offline. In other words, it seems very unlikely that an open confrontation with humans would be a viable strategy for a fragile high-tech product such as the first general AI. And taking over a bunch of refrigerators, mobile phones, and cars is only a catastrophic risk, not an existential one.
I really don't think we have to posit nanoassemblers for this particular scenario to work. Robot drones are needed, but I think they fall out as a consequence of currently existing robots plus the all-singing, all-dancing AI we've imagined in the first place. There are shedloads of robots around at the moment: the OP mentioned the existence of Internet-connected robot-controlled cars, but there are plenty of others, including those used in most high-tech manufacturing. Sure, those robots aren't autonomous, but they don't need to be if we've assumed an all-singing, all-dancing AI in the first place. With a bit of careful planning, that might be enough to keep the power and comms on in a few select areas.
Rebuilding/restarting enough infrastructure to be able to make new and better CPUs (and new and better robot extensions of the AI) would take an awfully long time, granted, but the AI is free of human threat at that point.