>then it just needs to find one poor schmuck to accept deliveries and help it put together its doomsday weapon.
Yes, but do I take it for granted that an AI could manipulate a human into creating a virus that kills literally everyone on Earth, or at least enough people for the AI to enact secondary plans to take over the world, all without being detected? Not with anywhere near 100% probability. I just think these arguments should be subject to Drake equation-style reasoning: chaining together many uncertain steps dilutes the likelihood of doom under most circumstances.
This isn’t an argument for being complacent. But it does allow us to push back against the idea that “we only have one shot at this.”
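To make the Drake-equation-style point concrete, here is a minimal sketch that multiplies conditional probabilities for each step of such a takeover scenario. The step names and every number are purely illustrative assumptions of mine, not estimates from the discussion:

```python
from math import prod

# Hypothetical, made-up conditional probabilities for each step the AI
# would need to pull off. None of these numbers are actual claims; the
# point is only that multiplying several sub-1 factors shrinks the total.
steps = {
    "recruits a human accomplice undetected": 0.5,
    "accomplice successfully builds the weapon": 0.3,
    "deployment reaches nearly all of humanity": 0.2,
    "humans fail to mount any effective response": 0.4,
}

p_doom = prod(steps.values())
print(f"Joint probability of the full chain: {p_doom:.3f}")
# 0.5 * 0.3 * 0.2 * 0.4 = 0.012
```

Even if each individual step looks more likely than not to a pessimist, the conjunction can still come out small, which is exactly why "one shot" framings deserve scrutiny.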
I agree that there seems to be a lot of handwaving about the nanotech argument, but I can’t say that I agree here:
>But for the sake of argument, let’s say that the AGI does manage to create a nanotech factory, retain control, and still remain undetected by the humans.
>It doesn’t stay undetected long enough to bootstrap and mass produce human replacement infrastructure.
It seems like the idea is that the AI would create nanomachines it could host itself on while grey-gooing enough of the Earth to overtake humanity. Humans would notice this at an early stage, but I could see the AI dispersing itself quickly enough that it becomes impossible to suppress completely, making humanity's loss to the grey goo wave inevitable.
The alternative story I've seen is that the AI engineers a dormant virus that spreads to most of humanity without raising alarm, then suddenly activates and kills every host. This also seems handwavey, but it does skip the "AI would need to establish its own nation" phase.