At the nanotech stage, the AI can turn any atoms into really good robots. Self-replication ⇒ exponential growth ⇒ the limiting factor quickly becomes atoms and energy. If the AI is just doing self-replication and paperclip production, humans aren’t useful workers compared to nanotech robots. (Also, the AI will probably disassemble the Earth. At this stage, it has to build O’Neill cylinders, nanotech food production, etc., to avoid wiping out humanity.)
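To make the step from self-replication to “the limiting factor is atoms” concrete, here is a toy doubling-time calculation. The seed mass and doubling time are made-up illustrative assumptions, not claims about real nanotech; only Earth’s mass is a real figure.

```python
import math

# Toy arithmetic: how fast exponential self-replication hits the atom limit.
# Seed mass and doubling time are illustrative assumptions.
EARTH_MASS_KG = 5.97e24      # approximate mass of the Earth
seed_mass_kg = 1e-3          # assumed initial replicator stock: one gram
doubling_time_hours = 12.0   # assumed doubling time

doublings_needed = math.log2(EARTH_MASS_KG / seed_mass_kg)   # ~92 doublings
time_days = doublings_needed * doubling_time_hours / 24.0    # ~46 days

print(f"Doublings to convert Earth's mass: {doublings_needed:.1f}")
print(f"Time at a {doubling_time_hours:.0f}-hour doubling time: {time_days:.0f} days")
```

Under these assumptions the whole planet’s mass is exhausted in weeks, which is why atoms and energy, not labour, become the binding constraint almost immediately.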
I think there is a common fallacy here: superintelligent AI risks are often perceived as grey goo risks.
The main difference is that an AI thinks strategically over very long time horizons and takes even small probabilities into account.
If the AI is going to create as many paperclips as possible, then what it cares about is only its chances of colonising the whole universe and even surviving the end of the universe. These chances are negligibly affected by the amount of atoms on Earth, but depend strongly on the AI’s chances of eventually meeting alien AIs. Other aliens may have different value systems, and some of them will be friendly to their creators. Such future AIs will not be happy to learn that the Paperclipper destroyed humanity, and will not agree to make more paperclips for it. Bostrom explored similar ideas in “Hail Mary and Value Porosity”.
TL;DR: it is instrumentally reasonable to preserve humans because they could matter in trade with alien AIs, while human atoms have very small instrumental value.
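A minimal expected-value sketch of this trade argument, in the same toy style as above. Every number here is a placeholder chosen only to show the shape of the comparison, not an estimate.

```python
# Toy expected-value comparison for a paperclip maximizer deciding whether
# to disassemble humanity for its atoms. All numbers are illustrative.

universe_clips = 1e50              # paperclips obtainable from the reachable universe
extra_clips_from_humans = 1e30     # extra clips from human/biosphere atoms: a tiny fraction

# Assumed probability of eventually meeting an alien AI friendly to its own
# creators, and the assumed fraction of resources it would withhold in trade
# on learning that humanity was destroyed.
p_meet_friendly_alien_ai = 0.01
trade_penalty_fraction = 0.001

expected_loss_if_destroy = (p_meet_friendly_alien_ai
                            * trade_penalty_fraction
                            * universe_clips)           # 1e45 clips forgone
expected_gain_if_destroy = extra_clips_from_humans      # 1e30 clips gained

print(f"Expected gain from human atoms:   {expected_gain_if_destroy:.1e} clips")
print(f"Expected loss from spoiled trade: {expected_loss_if_destroy:.1e} clips")
print("Preserve humans" if expected_loss_if_destroy > expected_gain_if_destroy
      else "Disassemble")
```

The point is not the particular numbers but the asymmetry: the gain from human atoms is bounded by Earth’s mass, while even a small probability of a trade penalty scales with the whole future light cone.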