I couldn't have said it better. I'll think about it if I ever have to explain the issue to laypeople. The key point I take away is that it matters little whether the AI has limbs, as long as it can get humans to do its bidding.
By the way, your scenario sounds both vastly more probable than a fully fledged hard takeoff, and nearly as scary. To take over the world, one doesn't need superhuman intelligence, nor self-modification, nor faster thought, nor even nanotech or other sci-fi technology. No, one just needs to be around the 90th human percentile in various domains (typically those relevant to taking over the Roman Empire), and to be able to duplicate oneself.
This is as weak a “human-level” AI as one could imagine. Yet it sounds like it could probably set up a singleton before we could stop it (stopping it would mean something like shutting down the Internet, or building another AI before the first takes over the entire network). And the way I see it, it is even worse:
If an AI demonstrates “human-level” optimization power on a single computer, I have no reason to think it would not be able to think much faster when unleashed on the network. This effect could be amplified if it additionally takes over (or collaborates with) a major chip manufacturer, and Moore's law somehow still applies.
The exact same scenario can apply to a group of human uploads.
Now just one caveat: I assumed the AI (or upload) would start right away with enough processing power to demonstrate human-level abilities in “real time”. We could, on the other hand, imagine an AI for which we can demonstrate that, if it ran a couple of orders of magnitude faster, it would be as capable as a human mind. That would delay a hard takeoff, and make it more predictable (assuming no self-modification). It might also let us prevent the rise of a singleton.
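To make the “delay” intuition concrete, here is a hedged back-of-envelope calculation. The speed gap (100×) and the compute doubling time (2 years, a Moore's-law-style assumption, not a law of nature) are both illustrative numbers I am supplying, not figures from the discussion:

```python
from math import log2

# Assumption: the AI currently runs ~2 orders of magnitude slower
# than real time, i.e. it would need ~100x faster hardware to match
# a human mind at human speed.
speed_gap = 100

# Assumption: usable compute doubles roughly every 2 years.
doubling_years = 2.0

# Number of doublings needed to close the gap, times years per doubling.
years_to_parity = log2(speed_gap) * doubling_years

print(f"~{years_to_parity:.1f} years until real-time human-level speed")
# With these assumptions: ~13.3 years of warning time
```

The point of the sketch is only that a known, fixed speed gap turns a takeoff into something with a rough timetable, which is what makes this variant more predictable than the single-computer scenario above.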
This is as weak a “human-level” AI as one could imagine. Yet it sounds like it could probably set up a singleton before we could stop it (stopping it would mean something like shutting down the Internet, or building another AI before the first takes over the entire network).
I think the second is more probable. A single AI seems unlikely.