The fact that P(humans will make another AI) > 0 does not justify paying arbitrary costs up front, no matter how long our view is. If humans did create this second AI (presumably built out of twigs), would that even be a problem for our maximizer?
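To put that in concrete terms, here is a minimal sketch (in Python, with figures that are purely hypothetical placeholders): a preemptive cost is only worth paying if it is smaller than the probability-weighted loss it would prevent, so P > 0 by itself buys nothing.

```python
# Toy expected-cost check; every figure below is a made-up placeholder,
# chosen only to illustrate the shape of the argument.

p_rival_ai = 1e-6        # P(surviving humans eventually build a second AI)
loss_if_rival = 1e12     # paperclips lost if that rival actually appears
preemption_cost = 1e9    # upfront cost of exterminating everyone right now

# Preemption only pays if its cost is below the expected loss it avoids.
expected_loss_avoided = p_rival_ai * loss_if_rival   # = 1e6 here

if preemption_cost < expected_loss_avoided:
    print("preemptive extermination pays off")
else:
    print("P > 0 alone does not justify this upfront cost")
```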
> It’s still more efficient to kill all humans than to think about which ones need killing.
That is not a trivial claim, and it depends on many things. And that’s all assuming that some people actually need to be killed in the first place.
If destroying all (macroscopic) life on earth is easy, e.g. if pumping some gas into the atmosphere were enough, then you’re right: the AI would just do that.
If disassembling human infrastructure is not an efficient way to extract iron, then you’re mostly right: the AI might well find it worthwhile to nuke the major population centers, killing most, though not all, people.
But if the AI does disassemble infrastructure, then it is going to be visiting and surveying the population centers in detail anyway, so identifying the important humans should be a minor cost on top of that, and I should be right.
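A similar toy comparison for the selectivity question (again with purely hypothetical costs and a hypothetical `cost_selective` helper): the survey of population centers is already paid for if disassembly happens anyway, so only the marginal identification cost counts, and the conclusion flips with the parameters, exactly as in the cases above.

```python
# Two strategies, with made-up costs:
#   A: kill everyone indiscriminately (e.g. nuke every population center)
#   B: identify the humans who actually matter and deal only with them

cost_kill_all = 3e8            # strategy A
cost_survey = 4e8              # visiting and reviewing the population centers
cost_identify_marginal = 1e7   # extra cost of spotting the relevant humans,
                               # given that the survey is being done anyway

def cost_selective(disassembling_infrastructure: bool) -> float:
    """Cost of strategy B; the survey is already paid for if the AI is
    disassembling infrastructure for iron in any case."""
    survey = 0.0 if disassembling_infrastructure else cost_survey
    return survey + cost_identify_marginal

print(cost_selective(True) < cost_kill_all)    # True:  1e7   < 3e8, selectivity is cheap
print(cost_selective(False) < cost_kill_all)   # False: 4.1e8 > 3e8, nuking wins
```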
Then again, if the AI finds it efficient to go through every square meter of the planet’s surface and dig it up looking for every iron-rich rock, it would destroy a great deal in the process, possibly fatally damaging earth’s ecosystems, although humans could move to the oceans, which might remain relatively undisturbed.
Note also that this is all a short-term discussion. In the long term, of course, all the reasonable sources of paperclip material will be exhausted, and silly things, like extracting paperclips from people, will become the most efficient ways to use the available energy.