A paperclip-maximizer could turn out to be much, much worse than a nuclear war extinction, depending on how suffering subroutines and acausal trade work.
Is it worse because the maximizer suffers? Why would I care whether it suffers? Why would you assume that I care?
An AI dedicated to the preservation of the human species but not aligned to any other human values would, I bet, be much, much worse than a nuclear war extinction.
I imagine that the most efficient way to preserve living humans is to keep them unconscious in self-sustaining containers, spread across the universe. You can imagine more dystopian scenarios, but I doubt they are more efficient. Suffering people might try to kill themselves, which is counterproductive from the AI’s point of view.
Also, you’re still assuming that I have some all-overpowering “suffering is bad” value. I don’t. Even if the AI created trillions of humans at maximum levels of suffering, I can still prefer that to a nuclear war extinction (though I’m not sure that I do).