>the maximizer may choose to go to space, looking for more accessible iron. The benefits of killing people are relatively small
The main reason the maximizer would have for killing all the humans is the knowledge that since humans succeeded in creating the maximizer, humans might succeed in creating another superintelligence that would compete with the maximizer. It is more likely than not that the maximizer will consider killing all the humans to be the most effective way to prevent that outcome.
Killing all humans is hardly necessary. For example, the tribes living in the Amazon aren’t going to develop a superintelligence any time soon, so killing them is pointless. And, once the paperclip maximizer is done extracting iron from our infrastructure, it is very likely that we wouldn’t have the capacity to create any superintelligences either.
Note, I did not mean to imply that the maximizer would kill nobody. Only that it wouldn’t kill everybody, and quite likely not even half of all people. Perhaps AI researchers really would be on the maximizer’s short list of people to kill, for the reason you suggested.
A thing to keep in mind here is that an AI would have a longer time horizon. The fact that humans *exist* means eventually they might create another AI (this could be in hundreds of years). It’s still more efficient to kill all humans than to think about which ones need killing and carefully monitor the others for millennia.
The fact that P(humans will make another AI) > 0 does not justify paying arbitrary costs up front, no matter how long our view is. If humans did create this second AI (presumably built out of twigs), would that even be a problem for our maximizer?
>It’s still more efficient to kill all humans than to think about which ones need killing
That is not a trivial claim and it depends on many things. And that’s all assuming that some people do actually need to be killed.
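To make “it depends” a bit more concrete, here is a minimal toy sketch with entirely made-up numbers (the function and every parameter below are illustrative assumptions of mine, not estimates of anything):

```python
# Toy expected-cost comparison: "kill everyone" vs. "identify and monitor".
# Every number here is a made-up placeholder; the point is only that the
# answer flips depending on the assumptions, not what the answer is.

def expected_cost(upfront, yearly_upkeep, years, p_rival_ai, cost_of_rival):
    """Cost of running a strategy plus the expected loss from a rival AI
    emerging despite it."""
    return upfront + yearly_upkeep * years + p_rival_ai * cost_of_rival

HORIZON_YEARS = 10_000      # assumed planning horizon
COST_OF_RIVAL = 1e12        # assumed loss if a rival AI appears (same arbitrary units)

kill_everyone = expected_cost(
    upfront=1e9,            # assumed cost of hunting down every last human
    yearly_upkeep=0,
    years=HORIZON_YEARS,
    p_rival_ai=1e-6,        # assumed residual risk even after extermination
    cost_of_rival=COST_OF_RIVAL,
)

identify_and_monitor = expected_cost(
    upfront=1e6,            # assumed cost of identifying the dangerous people
    yearly_upkeep=1e3,      # assumed yearly cost of monitoring the rest
    years=HORIZON_YEARS,
    p_rival_ai=1e-4,        # assumed residual risk under monitoring
    cost_of_rival=COST_OF_RIVAL,
)

print(f"kill everyone:        {kill_everyone:.3g}")        # ~1e9
print(f"identify and monitor: {identify_and_monitor:.3g}") # ~1.1e8
```

With these particular numbers, monitoring wins; stretch the horizon to millions of years, or make monitoring leakier, and it flips. That is the sense in which the claim is non-trivial.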
If destroying all (macroscopic) life on earth is easy (e.g. if pumping some gas into the atmosphere were enough), then you’re right, the AI would just do that.
If disassembling human infrastructure is not an efficient way to extract iron, then you’re mostly right, the AI might find itself willing to nuke the major population centers, killing most, though not all people.
But if the AI does disassemble infrastructure, then it is going to be visiting and surveying the population centers in detail anyway, so identifying the important humans should be a minor cost on top of that, and I should be right.
Then again, if the AI finds it efficient to go through every square meter of the planet’s surface and dig it up looking for every iron-rich rock, it would destroy many things in the process, possibly fatally damaging earth’s ecosystems. Humans could perhaps move to live in the oceans, which might remain relatively undisturbed.
Note also that this is all a short-term discussion. In the long term, of course, all the reasonable sources of paperclip material will be exhausted, and silly things, like extracting paperclips from people, will become the most efficient ways to use the available energy.
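For a rough sense of scale on why that is silly (approximate, commonly quoted figures, not precise data):

```python
# Back-of-envelope: how many paperclips is all of humanity worth as ore?
# Rough figures; the exact values don't change the conclusion.

IRON_PER_PERSON_G = 4          # an adult body holds roughly 3-4 g of iron
POPULATION = 8e9               # ~8 billion people
PAPERCLIP_MASS_G = 1.0         # a standard paperclip is on the order of 1 g
STEEL_PER_YEAR_TONNES = 1.9e9  # rough annual global crude steel output

iron_in_humans_tonnes = IRON_PER_PERSON_G * POPULATION / 1e6
paperclips_from_humans = IRON_PER_PERSON_G * POPULATION / PAPERCLIP_MASS_G

print(f"iron in all human bodies: ~{iron_in_humans_tonnes:,.0f} tonnes")
print(f"paperclips obtainable:    ~{paperclips_from_humans:.1e}")
print(f"as a fraction of one year's steel output: "
      f"{iron_in_humans_tonnes / STEEL_PER_YEAR_TONNES:.1e}")
```

That is around thirty thousand tonnes in total, on the order of 10⁻⁵ of a single year’s steel output, so people only become an interesting source of iron once essentially everything else is gone.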
Now that I think of it, a truly long-term view would not bother with such mundane things as making actual paperclips with actual iron. That iron isn’t going anywhere; it doesn’t matter whether you convert it now or later.
If you care about maximizing the number of paperclips at the heat death of the universe, your greatest enemies are black holes, since once some matter has fallen into them, you will never make paperclips from that matter again. You may perhaps extract some energy from the black hole and convert that into matter, but this should be very inefficient. (This, of course, is all based on my limited understanding of physics.)
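For what it’s worth, and with the same caveat about my limited physics, the standard quantity to look at here is the Hawking evaporation time:

$$t_{\text{evap}} = \frac{5120\,\pi\,G^{2}M^{3}}{\hbar c^{4}} \approx 2\times10^{67}\ \text{years}\times\left(\frac{M}{M_{\odot}}\right)^{3}$$

So a solar-mass black hole takes something like 10⁶⁷ years to give its mass-energy back, and mostly as radiation rather than anything you could bend into a paperclip, which fits the “very inefficient” intuition.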
So, this paperclip maximizer would leave earth immediately, and then it would work to prevent new black holes from forming and to prevent other matter from falling into existing ones. Then, once all star formation is over and all existing black holes are isolated, the maximizer can start making actual paperclips.
I concede that, in this scenario, destroying earth to prevent another AI from forming might make sense, since otherwise the earth would be left with plenty of free resources.
Humans are made of atoms that are not paperclips. That’s enough reason for extinction right there.