The problem is that human extinction is a convergent outcome of billions of possible goal systems in superintelligent AI, whereas getting the first superintelligent AI to learn human values and maximize for them is like figuring out how to make the very first atomic bomb explode in the shape of an elephant.
Your argument there seems to be about the preservation of humans, and the consensus at your link appears to be that the total number of humans who would be preserved in any form would be fairly small.
The resulting human population size is anyone’s guess, IMO. With the expansion of civilization, the lab-humans could easily number in the trillions. Indeed, “they” could be “us”. That would explain why we find ourselves on the verge of a major transition.
That’s the opposite of my argument—that preserving humans is likely to be a universal instrumental value.