A paperclip-maximizer could turn out to be much, much worse than a nuclear war extinction, depending on how suffering subroutines and acausal trade work.
Is it worse because the maximizer suffers? Why would I care whether it suffers? Why would you assume that I care?
An AI dedicated to the preservation of the human species but not aligned to any other human values would, I bet, be much, much worse than a nuclear war extinction. At least please throw in some sort of “…in good health and happiness” condition! (And that would not be nearly enough, in my opinion.)
I imagine that the most efficient way to preserve living humans is to keep them unconscious in self-sustaining containers, spread across the universe. You can imagine more dystopian scenarios, but I doubt they are more efficient. Suffering people might try to kill themselves, which is counterproductive from the AI’s point of view.
Also, you’re still assuming that I have some all-overpowering “suffering is bad” value. I don’t. Even if the AI created trillions of humans at maximum levels of suffering, I could still prefer that to a nuclear war extinction (though I’m not sure that I do).