I think there is a common fallacy: risks from superintelligent AI are perceived as if they were grey goo risks.
The main difference is that an AI thinks strategically over very long time horizons and takes even small probabilities into account.
If an AI's goal is to create as many paperclips as possible, then what it ultimately cares about is its chance of colonising the whole universe and even surviving the end of the universe. That chance is negligibly affected by the atoms of Earth, but it depends strongly on whether the AI eventually meets alien AIs. Those aliens may have different value systems, and some of them will be friendly toward their own creators. Such future AIs will not be happy to learn that the Paperclipper destroyed humanity, and they will refuse to help it make more paperclips. Bostrom explored similar ideas in “Hail Mary and Value Porosity”.
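To make the asymmetry concrete, here is a toy expected-value sketch in Python. The probability of meeting a creator-friendly alien AI and the share of resources at stake in such a trade are made-up illustrative assumptions, not estimates from this post; only the mass figures are rough physical values.

```python
# Toy back-of-envelope comparison: the cost of sparing Earth vs. the expected
# cost of having killed humanity if a creator-friendly alien AI shows up.
# All trade-related numbers below are hypothetical assumptions.

earth_mass_kg = 6e24                  # approximate mass of Earth
universe_baryonic_mass_kg = 1.5e53    # rough baryonic mass of the observable universe

# Resource cost of leaving Earth alone, as a fraction of everything available
cost_of_sparing_earth = earth_mass_kg / universe_baryonic_mass_kg

# Hypothetical trade scenario: some chance of meeting an alien AI loyal to its
# own creators that refuses to cooperate with an AI that exterminated humanity.
p_meet_creator_friendly_ai = 1e-3     # assumed probability of such an encounter
fraction_of_resources_at_stake = 0.5  # assumed share of resources gated on the trade

expected_loss_from_killing_humans = (
    p_meet_creator_friendly_ai * fraction_of_resources_at_stake
)

print(f"cost of sparing Earth:             {cost_of_sparing_earth:.1e}")
print(f"expected loss from killing humans: {expected_loss_from_killing_humans:.1e}")
# Under these assumptions the cost (~4e-29) is dozens of orders of magnitude
# smaller than the expected loss (~5e-4), so sparing humans is the better bet.
```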
TL;DR: it is instrumentally rational to preserve humans, since they could be valuable in trades with alien AIs, whereas human atoms have only negligible instrumental value.