I think a non-friendly AI would only need to be 20 years or so more advanced than the rest of humanity to pose a major threat, especially if self-replicating nanomachines are possible.
Did you mean 20 human-years more advanced? An intelligence that processes information much faster than humans might reach that level in a week, or even in a minute, depending on how much faster it is. We might also underestimate its speed if it's somewhat clumsy at first and then learns to do better. And if it escapes, it can gather resources to build more copies of itself, accelerating even further.
Yes, I meant human years. I'm just imagining how long it would take us to develop nanotechnology, and defenses against it, if the AI weren't around.