Is intelligence explosion necessary for doomsday?
I searched for articles on the topic and couldn’t find any.
It seems to me that an intelligence explosion makes human annihilation much more likely, since superintelligences would certainly be able to outwit humans. But a human-level intelligence that could process information much faster than we can would still be a serious threat in its own right, without any upgrading. It could discover programmable nanomachines long before humans do, gather enough information to predict how humans will act, and so on. We already know that a human-level intelligence can “escape from the box.” Not 100% of the time, but a real AI would have the opportunity for many more trials, and its processing speed should make it far more quick-witted than we are.
I think an unfriendly AI would only need to be 20 years or so more advanced than the rest of humanity to pose a major threat, especially if self-replicating nanomachines are possible. Skeptics of intelligence explosion should still be worried about the creation of computers with unfriendly goal systems. What am I missing?
I agree; an intelligence explosion seems largely irrelevant to the case for FAI. It increases the urgency of the problem, but not dramatically, since WBE already sets a time limit.
Did you mean 20 human-years more advanced? Because an intelligence that could process information much faster than humans might reach that level in a week, or in a minute, depending on how much faster it is. We might also underestimate its speed if it starts out somewhat clumsy and then learns to do better. And if it escapes, it can gather resources to build more copies of itself, accelerating itself further.
Yes, I meant human-years. I’m just imagining how long it would take us to develop nanotechnology, and defenses against it, if the AI weren’t around.
I’m really not sure a human-level AI would have all that much of an advantage when it comes to developing technology at an accelerated rate, even on dramatically accelerated subjective time scales. Even in relatively narrow fields like nanotechnology, there are thousands of people investing a lot of time in the work, not to mention all the people in disparate disciplines that feed intellectual capital into the field. That’s likely tens or hundreds of thousands of man-hours a day invested, plus access to the materials needed to run experiments. Keep in mind that your AI is limited by the speed at which experiments can be run in the real world, and must devote a significant portion of its time to unrelated intellectual labor in order to fund both its own operation and its real-world experiments. To outpace human research under these constraints, the AI would need to operate on timescales so fast that they may be physically unrealistic. (A rough back-of-envelope comparison follows below.)
In short, I would say it’s likely that your AI would perform extremely well in intelligence tests against any single human, provided it were willing to do the grunt work of really thinking about every decision. I just don’t think it could outpace humanity.
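As a rough sanity check on the man-hours point above (every number below is a made-up assumption, chosen only to illustrate the scale of the comparison, not a figure from this thread):

```python
# Rough back-of-envelope comparison: a single fast AI's subjective hours
# versus a research field's aggregate daily effort. All numbers are
# illustrative assumptions.

researchers = 10_000        # assumed people working in (or feeding into) the field
hours_per_day = 8           # assumed productive hours per researcher per day
field_hours_per_day = researchers * hours_per_day  # aggregate man-hours per day

ai_speedup = 1_000          # assumed subjective-time multiplier for a single AI
ai_hours_per_day = 24 * ai_speedup                 # subjective hours per wall-clock day

print(f"Field output: {field_hours_per_day:,} man-hours/day")
print(f"Single AI:    {ai_hours_per_day:,} subjective hours/day")
print(f"AI / field:   {ai_hours_per_day / field_hours_per_day:.2f}")
```

Even with a generous 1,000× speedup, a single AI’s subjective hours land in the same ballpark as the field’s daily man-hours under these assumptions, and it still has to wait on real-world experiments, which is the point being made above.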
Yes. My (admittedly poor) judgment is that while I’d certainly be crushed by an unfriendly intelligence explosion, I probably wouldn’t survive long in something like Robin Hanson’s “em world” either.
But answering the intelligence explosion question becomes important when it comes to strategies for surviving the development of above-human-level AI. If unfriendly intelligence explosions are likely, then that severely limits which strategies will work. If friendly intelligence explosions are possible, then that suggests a strategy which might work.
I’m pretty sure it wouldn’t need to be nearly that advanced. A few modestly intelligent humans without ethical restrictions could already do an enormous amount of harm, and it is entirely possible that they could cause human extinction. Gwern has written some truly excellent material on the subject, if you’re interested.