I don’t really care about preserving status-quo humanity
By non-extinction I don’t mean freezing the status quo of necessarily biological Homo sapiens on Earth; though with ASI I do expect that non-extinction in particular means most individual people stay alive indefinitely, in whatever way they individually choose. I think this is a more natural reading of non-extinction (as one sense of non-doom) than a perpetual human nature preserve.
By doom I mean the universe gets populated by AI with no moral worth
So losing cosmic wealth is sufficient to qualify an outcome as doom, as in Bostrom’s existential risk. What if Earth literally remains untouched by machines, protected from extinction-level events but otherwise left alone: is that still doom in this sense? What if aliens that hold moral worth but currently labor under an unkind regime exterminate humanity, but then the aliens themselves spread to the stars (taking them for themselves) and live happily ever after: is that still not a central example of doom?
My point is that the term is highly ambiguous, and resolution criteria for predictions that involve it are all over the place, so it’s no good for use in predictions or communication. There is an illusion of transparency where people keep expecting it to be understood, and others then incorrectly think they’ve understood the intended meaning. Splitting doom into extinction and loss of cosmic wealth seems less ambiguous.
So losing cosmic wealth is sufficient to qualify an outcome as doom
My utility function roughly looks like:
1. my survival
2. the survival of the people I know and care about
3. the distant future is populated by beings that are in some way “descended” from humanity and share at least some of the values (love, joy, curiosity, creativity) that I currently hold
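A minimal formal sketch of that list (the indicator terms and positive weights here are illustrative assumptions, not something the comment commits to):

$$U \;\approx\; w_1\,\mathbf{1}[\text{I survive}] \;+\; w_2\,\mathbf{1}[\text{the people I care about survive}] \;+\; w_3\,\mathbf{1}[\text{value-sharing descendants of humanity populate the far future}], \qquad w_1, w_2, w_3 > 0.$$

On this reading, a scenario like “AGI takes over the universe but leaves Earth intact” only scores well to the extent it keeps all three terms nonzero.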
Basically, if I sat down with a human from 10,000 years ago, I think there’s a lot we would disagree about, but at the end of the day I think they would get the feeling that I’m an “okay person”. I would like to imagine the same sort of thing holding for whatever follows us.
I don’t find the hair-splitting arguments like “what if the AGI takes over the universe but leaves Earth intact” particularly interesting except insofar as such a scenario allows for all 3 of the above. I also don’t think most people place a huge fraction of their P(~doom) on such weird technicalities.