The paperclip maximizer (PM) encounters an alien civilization and causes lots of suffering warring with it
I don’t think it is likely that the PM will encounter anything with equal resources, and even if it did, I don’t think that suffering would occur (see below).
PM decides there’s a chance that it’s in a simulation run by a sadistic being who will punish it (prevent it from making paperclips) unless it creates trillions of conscious beings and tortures them
That seems like one of the problems that has to be solved in order to build an AI that transforms the universe into an inanimate state. But I think it is much easier to make an AI that does not simulate any other agents than it is to create a friendly AI. Much more can go wrong in creating a friendly AI, including the possibility that it tortures trillions of beings. In the case of a transformer, you just have to make sure that it values a universe as close as possible to a state where no computation takes place, and that it does not engage in any kind of trade, acausal or otherwise.
PM is itself capable of suffering
I believe that any sort of morally significant suffering is an effect of (natural) evolution, and may in fact depend on it. I think that the kind of maximizer that SI has in mind is more akin to a transformation process that isn’t conscious, does not have emotions, and cannot suffer. If those qualities were necessary requirements, then I don’t think we will build an artificial general intelligence any time soon, and if we do, it will happen slowly and will not be able to undergo dangerous recursive self-improvement.
somebody steals PM’s source code before it’s launched, and makes a sadistic AI
I think this is more likely to happen with friendly AI research, because it takes longer and thus leaves more time for the source code to be stolen.