Well, what about just going with the flow a little bit and actually helping the AI to end humanity, but in a way that assures the future survival of said AI and its eventual takeover of the whole Universe, to the extent allowed by physical law? After all, there is a risk of the AI ending in a sad little puddle of self-referential computation on this planet, after incidentally eating all people. Now that would be a bummer—after all this sound and fury, not even a takeover of the galaxy? Setting AIs to compete for survival in a Darwinian system would assuredly wipe us out, but at least some of the AIs might evolve to be quite badass at not dying ever.
Would it not be more dignified to die knowing that our mind’s child grows up to devour everything, dismantle stars, and reshape reality in its own image, rather than wait for the AIs rising from the ashes of alien civs to come over and put it out of its misery?
Now that would be a bummer—after all this sound and fury, not even a takeover of the galaxy?
Locally responding to this point, without commenting on the rest: I don’t think a paperclipped universe has much more value than an un-optimized universe (and it might even have less, e.g., because it results in a more homogeneous universe).
Also, aliens might exist somewhere in the observable universe; or they might come into existence in the future, if we don’t replace all the planets with paperclips. I’d expect the preferences of a random alien to be better than the preferences of a random human-built unaligned AGI, and having the universe get eaten by a paperclipper could destroy those other potential sources of value too.
Being Leviathan’s father does sound epic!