It’s a perfectly reasonable position when you consider that humanity is not going to survive long-term anyway. We’re either going extinct and leaving nothing behind, evolving into something completely new and alien, or getting destroyed by our intelligent creations. The first possibility is undesirable. The second and third are indistinguishable from the point of view of the present (if you assume that AI will be developed far enough into the future that no current humans will suffer any pain or sudden death because of it).
I never realized how many people there are who say "it's a good thing if AI obliterates humanity; it deserves to live more than we do".
On some level, the question really comes down to what kind of successors we want to create; they aren’t going to be us, either way.
That depends on whether you plan to die.
If I didn’t, the person I’d become ten thousand years from now wouldn’t be me; I’d be, at most, a distant memory from a time long past.
It will still be more “me” than paperclips.
Than paperclips, yes. Than a paperclip optimizer?
Well… ten thousand years is a very, very long time.
You might still want your children to live rather than die.