This was a great read, the meditative state that comes from ‘piling dirt’ is invaluable.
Sadly, I don’t expect that an AI will have much use for humans. Most objectives you could build into an AI will not care about humans directly, so optimizing for that objective will push various parameters to extreme values that kill humans, even when the AI isn’t optimizing for killing humans directly. One dumb example: the AI might not care at all about the environment, so it scales up industrial processes that pollute the air so much that humans simply can’t survive anymore.
Of course, in practice, an AI would want to eradicate all humans, or at least disempower them so thoroughly that they definitely can’t stop it anymore. I expect that killing most humans is the easiest way for the AI to achieve this (some humans might hide in a bunker for a while, and the AI might not care, because they can’t do anything anyway).
For me, eradication is not an obvious prediction. A superintelligent AI would certainly disempower humans to prevent any future threat we might pose, but humans still have their uses. So an industrious future AI might be in the business of producing humans that can do specialized tasks but are harmless in the long run (meaning they won’t overpopulate and turn on their overlords).
‘Human alignment’ could be a fierce debate among superintelligent AIs in the future as they question whether it’s safe or ethical to build intelligent humans.
If you were an alien civilization of a billion John von Neumanns, thinking at 10,000 times human speed, and you start out connected to the internet, you would want to not be just stuck on the internet, you would want to build that physical presence. You would not be content solely with working through human hands, despite the many humans who’d be lined up, cheerful to help you, you know. Bing already has its partisans. (laughs)
You wouldn’t be content with that, because the humans are very slow, glacially slow. You would like fast infrastructure in the real world, reliable infrastructure. And how do you build that, is then the question, and a whole lot of advanced analysis has been done on this question. I would point people again to Eric Drexler’s Nanosystems.
And, sure, if you literally start out connected to the internet, then probably the fastest way — maybe not the only way, but it’s, you know, an easy way — is to get humans to do things. And then humans do those things. And then you have the desktop — not quite desktop, but you have the nanofactories, and then you don’t need the humans anymore. And this need not be advertised to the world at large while it is happening.
—Eliezer Yudkowsky