My answer to “If AI wipes out humanity and colonizes the universe itself, the future will go about as well as if humanity had survived (or better)” is pretty much defined by how the question is interpreted. It could swing pretty wildly, but the obvious interpretation seems ~tautologically bad.
So there’s an argument here, one I don’t subscribe to, but I have seen prominent AI experts make it implicitly.
If you think about it: if you have children, and they have children, and so on through a series of mortal generations, then with each successive generation more and more of your genetic distinctiveness is lost. Language and culture will drift as well.
This is the ‘value drift’ argument: whatever you value now (yourself, the humans you know, your culture and language and various forms of identity) loses a percentage of its value with each year that passes. Value is being discounted over time.
It will eventually diminish to 0 as long as humans are dying from aging.
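One way to make the discounting picture concrete (my own framing, assuming for illustration that a constant fraction $r$ of current value is lost each year): if $V_0$ is the value of everything you care about today, then

$$V(t) = V_0 (1 - r)^t \to 0 \quad \text{as } t \to \infty \text{ for any } r > 0,$$

and the claim above amounts to saying that mortal generations keep $r$ above zero.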
You might argue that the people 300+ years from now will at least share genetics with the people alive now, but that is not necessarily true once genetic editing and bespoke biology are available and all the prior rules of what’s possible are thrown out.
So you are comparing outcome A, where hundreds of years from now there exist alien cyborgs descended from the people alive today, with outcome B, where hundreds of years from now the descendants of some AI are all that exist.
Value-wise you could argue that A == B: both have negligible value compared to what we value today.
I’m not sure this argument is correct, but it does discount away the future, and it is a strong argument against longtermism.
Value drift only potentially stops once immortal beings exist, and AIs are immortal from the very first version. Theoretically, an AI system trained on all of human knowledge, even if it goes on to kill its creators and consume the universe, need not forget any of that knowledge. As an individual, it would also know more human skills, knowledge, and culture than any human ever could, so in a way such a being is a human++.
The AI expert who expressed this is near the end of his expected lifespan, and from the perspective of an individual who is about to die there is no difference between “cyborg” distant descendants and pure robots.
My answer to “If AI wipes out humanity and colonizes the universe itself, the future will go about as well as if humanity had survived (or better)” is pretty much defined by how the question is interpreted. It could swing pretty wildly, but the obvious interpretation seems ~tautologically bad.
Agreed, I can imagine very different ways of getting a number for that, even given probability distributions for how good the future will be conditional on each of the two scenarios.
A stylized example: say that the AI-only future has a 99% chance of being mediocre and a 1% chance of being great, and the human future has a 60% chance of being mediocre and a 40% chance of being great. Does that give an answer of 1% or 60% or something else?
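A sketch of where answers like 1% and roughly 60% could come from, assuming (my assumptions, not part of the original comment) that both futures are scored on the same mediocre/great scale and that the two outcomes are drawn independently:

```python
# Stylized numbers from the example above. The 0/1 scoring of
# "mediocre"/"great" and the independence of the two draws are
# illustrative assumptions.

ai_future    = {"mediocre": 0.99, "great": 0.01}   # AI wipes out humanity
human_future = {"mediocre": 0.60, "great": 0.40}   # humanity survives

rank = {"mediocre": 0, "great": 1}

# Reading 1: the chance the AI-only future is simply "great".
p_ai_great = ai_future["great"]                    # 0.01

# Reading 2: the chance an AI-future outcome is at least as good as an
# independently drawn human-future outcome.
p_ai_at_least_as_good = sum(
    p_a * p_h
    for a, p_a in ai_future.items()
    for h, p_h in human_future.items()
    if rank[a] >= rank[h]
)                                                  # 0.01 + 0.99 * 0.60 = 0.604

print(p_ai_great, p_ai_at_least_as_good)           # 0.01 0.604
```

Reading 1 gives 1%, reading 2 gives about 60%, which is why the interpretation matters so much.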
I’m also not entirely clear on what scenario I should be imagining for the “humanity had survived (or better)” case.
I think that one is supposed to be parsed as “If AI wipes out humanity and colonizes the universe itself, the future will go about as well as, or go better than, if humanity had survived” rather than “If AI wipes out humanity and colonizes the universe itself, the future will go about as well as if humanity had survived or done better than survival”.