For me, eradication is not an obvious prediction. A superintelligent AI would certainly disempower humans to prevent any future threat we may pose, but humans still have their uses. So an industrious future AI might be in the business of producing humans that can do specialized tasks but are harmless in the long run (meaning they won’t overpopulate and turn on their overlords).
‘Human alignment’ could become a fierce debate among future superintelligent AIs as they question whether it’s safe or ethical to build intelligent humans.
If you were an alien civilization of a billion John von Neumanns, thinking at 10,000 times human speed, and you started out connected to the internet, you would not want to be just stuck on the internet; you would want to build that physical presence. You would not be content solely with working through human hands, despite the many humans who’d be lined up, cheerful to help you, you know. Bing already has its partisans. (laughs)
You wouldn’t be content with that, because the humans are very slow, glacially slow. You would like fast infrastructure in the real world, reliable infrastructure. And how do you build that is then the question, and a whole lot of advanced analysis has been done on it. I would point people again to Eric Drexler’s Nanosystems.
And, sure, if you literally start out connected to the internet, then probably the fastest way — maybe not the only way, but it’s, you know, an easy way — is to get humans to do things. And then humans do those things. And then you have the desktop — not quite desktop, but you have the nanofactories, and then you don’t need the humans anymore. And this need not be advertised to the world at large while it is happening.
—Eliezer Yudkowsky