I thought it through further from a Singularitarian perspective and realized that probably only a relative handful of humans will ever deliberately choose to upload themselves into computers, at least initially. If you freed billions from labor, at least half of them will probably choose to live a comfortable but mundane life in physical reality at an earlier stage of technological development (anywhere from Amish levels all the way to “living perpetually in the Y2K epoch”).
Because let’s think about this in terms of demographics. Generally, the older you get, the more conservative and technophobic you become. This is not a hard-set rule, but a general trend. Millennials are growing more liberal with age, but they’re not growing any less technophobic— as it tends to be Millennials, for example, leading the charge against AI art and the idea of automating “human” professions. Generation Z is the most technophilic generation yet, at least in the Anglosphere, but is only roughly 1⁄5 of the American population. If any generation is going to upload en masse, it will likely be the Zoomers (unless, for whatever reason, mind-uploading turns out to be the literal only way to stave off death— then miraculously, many members of the elderly generations will “come around” to the possibility in the years and months preceding their exit).
If we create an aligned AGI in the next five years (again, by some miracle), I can’t see this number dropping off to anywhere below 0.10%. This generation is likely the single most conservative of any still living, and almost without question, 99% of this generation would be radically opposed to any sort of cybernetic augmentation or mind uploading if given the choice. The demographics don’t become that much more conducive towards willing mind-uploading the closer to the present you get, especially as even Generation X becomes more conservative and technophobic.
Assuming that even with AGI, it takes 20+ years to achieve mind-uploading technology, all you’ve accomplished is killing off the Greatest Generation and most of the Silent Generation. It would take extensive convincing and social engineering for the AGI to convince the still-living humans that a certain lifestyle and perhaps mind-uploading is more desirable than continuing to live in physical reality. Perhaps far from the hardest thing an AGI would have to do, but again, this all comes back to the fact that we’re not dealing with a generic superintelligence as commonly imagined, but an aligned superintelligence, one which values our lives, autonomy, and opportunity to live. If it does not value any one of those things, it cannot be considered to be truly “aligned.” If it does not value our lives, we’re dead. If it does not value our autonomy, it won’t care if we are turned into computronium or outright exterminated for petty reasons. If it does not value our opportunity to live, we could easily be stuck into a Torment Nexus by a basilisk.
Hence why I predict that an aligned superintelligence will, almost certainly, allow for hundreds of millions, perhaps even billions, of “Antemillennialists.” Indeed, the best way to describe it would be “humans who live their lives, but better.” I personally would love to live in full-dive VR indefinitely, but I know for a fact this is not a sentiment shared by 90% of people around me in real life; my own parents are horrified by the prospect, my grandparents actively consider the prospect Satanic, and others who do consider it possible simply don’t like the way it feels. Perhaps when presented with the technology, they’ll change their minds, but there’s no reason to deny their autonomy because I believe I know better than they do. Physical reality is good enough for most people; a slightly improved physical reality is optimal.
I think of this in similar terms to how we humans now treat animals. Generally, we’re misaligned to most creatures on Earth, but to animals we actively care about and try to assist, we tended to put them in zoos until we realized this caused needless behavioral frustrations due to them being so distantly out of their element. Animals in zoos technically live much “better” lives, and yet we’ve decided that said animals would be more satisfied according to their natures if they lived freely in their natural environments. We now realize that, even if it might lead to greater “real” suffering due to the laws of nature, animals are better left in the wild or in preserves, where we actively contribute to their preservation and survival. Only those who absolutely cannot handle life in the wild are kept in zoos or in homes.
If we humans wanted, we absolutely could collect and put every chimpanzee into a zoo right now. But we don’t, because we respect their autonomy and right to life and natural living.
I see little reason for a Pink Shoggoth-type AGI to not feel similarly for humans. Most humans are predisposed towards lifestyles of a pre-Singularity sort. It is generally not our desire to be dragged into the future; as we age, most of us tend to find a local maximum of nostalgic comfort and remain there as long as we can. I myself am torn, in fact, between wanting to live in FIVR and wanting to live a more comfortable, “forever-2000s/2010s” sort of life. I could conceivably live the latter in the former, but if I wanted to live the latter in physical reality, a Pink Shoggoth surely would not stop me from doing so.
In fact, that could be a good alignment test: in a world where FIVR exists, request to the Pink Shoggoth to live a full life in physical reality. If it’s aligned, it should say “Okay!”
Edit: In fact, there’s another bit of evidence for this— uncontacted tribes. There’s zero reason to leave the people of North Sentinel Island where they live, for example. But the only people arguing that we should forcibly integrate them into society tend to be seen as “colonialist altruists” who feel that welfare is more important than autonomy. Our current value system says that we should respect the Sentinelese’s right to autonomy, even if they live in conditions we’d describe as “Neolithic.”
Otherwise, the Sentinelese offer little to nothing useful to ourselves when the government of India could realistically use North Sentinel Island for many purposes. The Sentinelese suffer an enormous power imbalance with outside society. The Sentinelese are even hostile towards the outside world, actively killing those who get close, and yet still we do not attempt to wipe them out or forcibly integrate them into our world. Even when the Sentinelese are put into a state of peril, we do not intervene unless they make active requests for help.
By all metrics, our general society’s response to the Sentinelese is what “alignment to the values of a less-capable group” looks like in practice. An aligned superintelligence might respond very similarly to our species.
I suspect that 1. post-singularity reality would be so starkingly different to the current ones that it would be alien to about the same degree to all people regardless of generation 2. people mostly see “uploading” as “being the same, but reasonably better” too. I.e. they believe that their uploaded version would still be them in nearly all aspects. I don’t quite understand how that could be possible. Would machine have to accurately emulate each atom of my body? Or it will be some supersentience that has only some similarities to the original?
Also, I believe that meat people would have the intrinsic objective value as the irreplaceable source of the data about the “original” people. Just like Sentinelese are the irreplaceable source of data about uncontacted tribes.
I thought it through further from a Singularitarian perspective and realized that probably only a relative handful of humans will ever deliberately choose to upload themselves into computers, at least initially. If you freed billions from labor, at least half of them will probably choose to live a comfortable but mundane life in physical reality at an earlier stage of technological development (anywhere from Amish levels all the way to “living perpetually in the Y2K epoch”).
Because let’s think about this in terms of demographics. Generally, the older you get, the more conservative and technophobic you become. This is not a hard-set rule, but a general trend. Millennials are growing more liberal with age, but they’re not growing any less technophobic— as it tends to be Millennials, for example, leading the charge against AI art and the idea of automating “human” professions. Generation Z is the most technophilic generation yet, at least in the Anglosphere, but is only roughly 1⁄5 of the American population. If any generation is going to upload en masse, it will likely be the Zoomers (unless, for whatever reason, mind-uploading turns out to be the literal only way to stave off death— then miraculously, many members of the elderly generations will “come around” to the possibility in the years and months preceding their exit).
Currently, there are still a few million living members of the Greatest Generation kicking around on Earth, and even in the USA, they’re something around 0.25% of our population:
https://www.statista.com/statistics/296974/us-population-share-by-generation/
If we create an aligned AGI in the next five years (again, by some miracle), I can’t see this number dropping off to anywhere below 0.10%. This generation is likely the single most conservative of any still living, and almost without question, 99% of this generation would be radically opposed to any sort of cybernetic augmentation or mind uploading if given the choice. The demographics don’t become that much more conducive towards willing mind-uploading the closer to the present you get, especially as even Generation X becomes more conservative and technophobic.
Assuming that even with AGI, it takes 20+ years to achieve mind-uploading technology, all you’ve accomplished is killing off the Greatest Generation and most of the Silent Generation. It would take extensive convincing and social engineering for the AGI to convince the still-living humans that a certain lifestyle and perhaps mind-uploading is more desirable than continuing to live in physical reality. Perhaps far from the hardest thing an AGI would have to do, but again, this all comes back to the fact that we’re not dealing with a generic superintelligence as commonly imagined, but an aligned superintelligence, one which values our lives, autonomy, and opportunity to live. If it does not value any one of those things, it cannot be considered to be truly “aligned.” If it does not value our lives, we’re dead. If it does not value our autonomy, it won’t care if we are turned into computronium or outright exterminated for petty reasons. If it does not value our opportunity to live, we could easily be stuck into a Torment Nexus by a basilisk.
Hence why I predict that an aligned superintelligence will, almost certainly, allow for hundreds of millions, perhaps even billions, of “Antemillennialists.” Indeed, the best way to describe it would be “humans who live their lives, but better.” I personally would love to live in full-dive VR indefinitely, but I know for a fact this is not a sentiment shared by 90% of people around me in real life; my own parents are horrified by the prospect, my grandparents actively consider the prospect Satanic, and others who do consider it possible simply don’t like the way it feels. Perhaps when presented with the technology, they’ll change their minds, but there’s no reason to deny their autonomy because I believe I know better than they do. Physical reality is good enough for most people; a slightly improved physical reality is optimal.
I think of this in similar terms to how we humans now treat animals. Generally, we’re misaligned to most creatures on Earth, but to animals we actively care about and try to assist, we tended to put them in zoos until we realized this caused needless behavioral frustrations due to them being so distantly out of their element. Animals in zoos technically live much “better” lives, and yet we’ve decided that said animals would be more satisfied according to their natures if they lived freely in their natural environments. We now realize that, even if it might lead to greater “real” suffering due to the laws of nature, animals are better left in the wild or in preserves, where we actively contribute to their preservation and survival. Only those who absolutely cannot handle life in the wild are kept in zoos or in homes.
If we humans wanted, we absolutely could collect and put every chimpanzee into a zoo right now. But we don’t, because we respect their autonomy and right to life and natural living.
I see little reason for a Pink Shoggoth-type AGI to not feel similarly for humans. Most humans are predisposed towards lifestyles of a pre-Singularity sort. It is generally not our desire to be dragged into the future; as we age, most of us tend to find a local maximum of nostalgic comfort and remain there as long as we can. I myself am torn, in fact, between wanting to live in FIVR and wanting to live a more comfortable, “forever-2000s/2010s” sort of life. I could conceivably live the latter in the former, but if I wanted to live the latter in physical reality, a Pink Shoggoth surely would not stop me from doing so.
In fact, that could be a good alignment test: in a world where FIVR exists, request to the Pink Shoggoth to live a full life in physical reality. If it’s aligned, it should say “Okay!”
Edit: In fact, there’s another bit of evidence for this— uncontacted tribes. There’s zero reason to leave the people of North Sentinel Island where they live, for example. But the only people arguing that we should forcibly integrate them into society tend to be seen as “colonialist altruists” who feel that welfare is more important than autonomy. Our current value system says that we should respect the Sentinelese’s right to autonomy, even if they live in conditions we’d describe as “Neolithic.”
Otherwise, the Sentinelese offer little to nothing useful to ourselves when the government of India could realistically use North Sentinel Island for many purposes. The Sentinelese suffer an enormous power imbalance with outside society. The Sentinelese are even hostile towards the outside world, actively killing those who get close, and yet still we do not attempt to wipe them out or forcibly integrate them into our world. Even when the Sentinelese are put into a state of peril, we do not intervene unless they make active requests for help.
By all metrics, our general society’s response to the Sentinelese is what “alignment to the values of a less-capable group” looks like in practice. An aligned superintelligence might respond very similarly to our species.
I suspect that
1. post-singularity reality would be so starkingly different to the current ones that it would be alien to about the same degree to all people regardless of generation
2. people mostly see “uploading” as “being the same, but reasonably better” too. I.e. they believe that their uploaded version would still be them in nearly all aspects. I don’t quite understand how that could be possible. Would machine have to accurately emulate each atom of my body? Or it will be some supersentience that has only some similarities to the original?
Also, I believe that meat people would have the intrinsic objective value as the irreplaceable source of the data about the “original” people. Just like Sentinelese are the irreplaceable source of data about uncontacted tribes.