You are making a number of assumptions here.
(1) The AI will value or want the resources used by humans. Perhaps. Or, perhaps the AI will conclude that being on a relatively hot planet in a high-oxygen atmosphere with lots of water isn’t optimal and leave the planet entirely.
(2) The AI will view humans as a threat. The superhuman AI that those on Less Wrong usually posit (one so powerful that it can cause human extinction with ease, cannot be turned off or reprogrammed, and can manipulate humans as easily as I can type) cannot effectively be threatened by human beings.
(3) An AI which just somewhat cares about humans is insufficient for human survival. Why? Marginal utility is a thing: because the returns on every other use of resources diminish, even a tiny weight on human welfare makes keeping humanity alive a cheap purchase (a minimal sketch below).
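To make the marginal-utility point concrete, here is a minimal sketch (the log utilities, the weight $w_h$, and the fixed resource budget $R$ are illustrative assumptions, not a claim about what any actual AI would optimize). Suppose the AI splits $R$ between humans ($r_h$) and everything else, with diminishing returns on both:

$$\max_{0 < r_h < R} \; w_h \log r_h + (1 - w_h) \log (R - r_h) \;\;\Longrightarrow\;\; \frac{w_h}{r_h^*} = \frac{1 - w_h}{R - r_h^*} \;\;\Longrightarrow\;\; r_h^* = w_h R.$$

For any $w_h > 0$, however small, the optimum reserves a nonzero share for humans: “somewhat cares” plus diminishing marginal utility can be enough.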
This alone isn’t enough, and in the past I didn’t believe the conclusion. The additional argument that leads to it is path-dependence of preferred outcomes. That human civilization already exists is a strong argument for the value of letting it continue in some form, well above the motivation to bring it into existence if it didn’t already exist. Bringing it into existence might fail to make the cut, since there are many other things a strongly optimized outcome could contain if its choice weren’t influenced by the past.
“relatively hot planet in a high-oxygen atmosphere with lots of water”

But atoms? More seriously, the greatest cost is probably starting expansion a tiny bit later, not making the most effective use of what’s immediately at hand.
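To put rough numbers on that claim, a back-of-the-envelope sketch (the cubic expansion model, the horizon $T$, and the round masses are illustrative assumptions, not anything established in this thread). If resources reachable by expanding at speed $v$ for time $T$ scale with the volume swept out, the fractional cost of a launch delay $\delta$ is

$$R(T) \propto \tfrac{4}{3}\pi (vT)^3 \;\;\Longrightarrow\;\; \frac{R(T) - R(T - \delta)}{R(T)} \approx \frac{3\delta}{T}.$$

With $T \sim 10^9$ years, a one-year delay costs $\sim 3 \times 10^{-9}$ of everything ever reachable. Earth’s mass ($\sim 6 \times 10^{24}$ kg) is only $\sim 3 \times 10^{-6}$ of one star’s ($\sim 2 \times 10^{30}$ kg), and a vastly smaller fraction of the $\sim 10^{52}$ kg of ordinary matter plausibly reachable, so squeezing Earth for atoms gains many orders of magnitude less than launching a year earlier loses.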
“The greatest cost is probably starting expansion a tiny bit later, not making the most effective use of what’s immediately at hand.”
Possible, but not definitely so. We don’t really know all the relevant variables.