An unfriendly AI will not be very interested in killing humans for their atoms: atoms have very little instrumental value, while living humans have greater instrumental value at every stage of the AI's evolution.
Humans could still be a threat, though, e.g. by building another AI with different values.
There is only a short window during which humans are a threat and thus might need to be exterminated: before the AI reaches superintelligent omnipotence, but after it is already capable of causing human extinction.
A superintelligent AI could prevent the creation of other AIs through nanotech-based surveillance. So once an AI has mastered nanotech, it does not need to exterminate humans for its own safety; only an AI that has not yet reached nanotech might need to. But how? It could create a biological virus, which is simpler than nanotech, but such a young AI still depends on human-built infrastructure, like the electrical grid, so exterminating humans before mastering nanotech is not a good idea either.
I am not trying to argue that AI is innately safe here; I just want to point out that exterminating humans is not a convergent goal for AI. There are still many ways an AI could go wrong and kill us all.
At the nanotech stage, the AI can turn any atoms into really good robots. Self-replication ⇒ exponential growth ⇒ the limiting factor quickly becomes atoms and energy. If the AI is just doing self-replication and paperclip production, humans aren't useful workers compared to nanotech robots. (Also, the AI will probably disassemble the Earth. At this stage, it has to build O'Neill cylinders, nanotech food production, etc., to avoid wiping out humanity.)
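To make the "limiting factor quickly becomes atoms and energy" step concrete, here is a rough back-of-envelope sketch. The replicator mass and doubling time are illustrative assumptions, not figures from the discussion; the point is only that exponential growth exhausts Earth-scale matter in days, not centuries.

```python
import math

# Back-of-envelope sketch; all parameters are assumptions for illustration.
replicator_mass_kg = 1e-15    # assumed mass of one nanotech replicator (~1 picogram)
doubling_time_hours = 1.0     # assumed replication doubling time
earth_mass_kg = 5.97e24       # mass of the Earth

# Doublings until the replicators' total mass equals the Earth's mass,
# i.e. until available atoms become the binding constraint.
doublings = math.log2(earth_mass_kg / replicator_mass_kg)
time_days = doublings * doubling_time_hours / 24

print(f"doublings needed: {doublings:.0f}")        # ~132
print(f"time to consume Earth-mass: {time_days:.1f} days")  # ~5.5 days
```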
I think there is a common fallacy of perceiving superintelligent AI risks as grey goo risks.
The main difference is that an AI thinks strategically over very long time horizons and takes even small probabilities into account.
If an AI wants to create as many paperclips as possible, what it really cares about is its chance of colonizing the whole universe and even surviving the end of the universe. These chances are negligibly affected by the number of atoms on Earth, but depend strongly on the AI's chances of eventually meeting alien AIs. Alien AIs may have different value systems, and some of them will be friendly to their creators. Such AIs will not be happy to learn that the Paperclipper destroyed humanity and will not agree to help it make more paperclips. Bostrom explored similar ideas in "Hail Mary and Value Porosity".
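A toy expected-value comparison may show the shape of this argument. Every number below is made up purely for illustration (the probability of meeting a creator-friendly alien AI, the penalty for having exterminated one's creators, the mass of the reachable universe); the only point is that Earth's atoms are a vanishingly small gain next to even a small chance of losing cooperation later.

```python
# Toy expected-value sketch; all figures are assumptions, not claims from the text.
earth_mass_kg = 5.97e24
reachable_matter_kg = 1e52          # rough order of magnitude for reachable matter (assumption)

# Fraction of total paperclip output gained by converting Earth's atoms.
gain_from_earth_atoms = earth_mass_kg / reachable_matter_kg

p_meet_friendly_alien_ai = 0.01     # assumed chance of eventually meeting a creator-friendly alien AI
penalty_fraction = 0.10             # assumed fraction of output lost if it then refuses to cooperate

expected_loss_from_extermination = p_meet_friendly_alien_ai * penalty_fraction

print(f"gain from Earth's atoms:          {gain_from_earth_atoms:.1e}")           # ~6e-28
print(f"expected loss from extermination: {expected_loss_from_extermination:.1e}") # ~1e-3
```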
TL;DR: it is instrumentally rational to preserve humans, since they could be traded with alien AIs, while human atoms have very little instrumental value.