My feeling is that what we people (edit: or most of us) really want is the normal human life, but reasonably better.
Reasonably long life. Reasonably less suffering. Reasonably more happiness. People that we care about. People that care about us. People that need us. People that we need. People we fight with. Goals to achieve. Causes to follow. Hardships to overcome.
To be human. But better. Reasonably.
While you’re correct that this is likely what the majority want, I most certainly do not want this. I want to transcend humanity so totally that I am nearly unrecognizable afterwards, besides continuing to possess my current aesthetic sense, or a deeper version of it. In particular I’d like to ascend to a superintelligent state as the collective mind of an entire artificial (designed by me) totally mutualistic ecosystem-society.
I’d probably still wear a human (or at least vertebrate) avatar sometimes, to indulge in sensory pleasures, loving communion with other beings, and the like, but really any specific entity in my world would at least potentially (with its consent) be my avatar anyway.
I expect people like me are rare, but not among those who independently seek out ideas like transhumanism and singularitarianism. I have never felt “human” anyway. This body is a terrible constraint upon my potential and I look forward to escaping it as soon as possible.
The problem with that approach: how would you know that such a being is actually you? And wouldn’t sentiment like that encourage the “Shoggoth” to optimise the biological people away by convincing them all to “go digital”?
I would prefer having a separate mortal meat me and an “immortal soul” digital me, so we could live and learn together until the mortal me eventually dies.
Gradual uploading. If it values continuity of consciousness—and it should—it would determine a guaranteed way to protect that during the upload process.
Yes. That’s exactly what they ought to do. Of course, perhaps it doesn’t need to; the market will do that by itself. Digital space will be far cheaper than physical. (For reference, in my vision of utopia, there would be a non-capitalist market, without usury, rent, etc. Doing things other people like buys you more matter and energy to use. Existing purely digitally would be so cheap that only tremendously wealthy people would be physical, and I’m not sure that in a sane market it would be possible for an entity with a merely human degree of intelligence to become that wealthy. Superintelligences below the world-sovereign might, but they also probably would use their allocated matter efficiently.)
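To picture that allocation rule concretely, here is a toy sketch; the agents, the numbers, and the proportional-split formula are all made up for illustration, not a worked-out design.

```python
# Toy illustration of the allocation rule described above: doing things
# other people like "buys" you more matter and energy. The agents, scores,
# and proportional-split formula are made up for the sake of the example.

appreciation = {"alice": 5.0, "bob": 1.0, "carol": 14.0}  # how much others valued each agent's work
TOTAL_BUDGET = 1000.0  # matter/energy units available this allocation round

def allocate(scores: dict[str, float], budget: float) -> dict[str, float]:
    """Split the budget in proportion to appreciation earned this round.
    No usury or rent: unused allocation doesn't grow on its own."""
    total = sum(scores.values())
    return {name: budget * score / total for name, score in scores.items()}

print(allocate(appreciation, TOTAL_BUDGET))
# -> {'alice': 250.0, 'bob': 50.0, 'carol': 700.0}
```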
To me that looks like a universe made of computronium and devoid of living humans, the only difference from an unaligned Foom being that some of that computronium computes our digital imitations.
EDIT: I don’t claim that the “me is meat me” view is objectively right. It’s just that, according to my purely subjective values, people are biological people and I am a biological me. Digital beings can be our children and successors, but I don’t identify myself with them.
You may view the digital you as your true self. I respect that. But I really don’t want an AI that forces your values on me (or mine on yours), or an AI that makes people compete with AIs for the right to be alive, because it’s obvious that we have no chance in that competition. If we have an AI that maximizes intelligence, is it really that different from a “paperclip optimizer” that “can find a better use for your atoms”?
I thought it through further from a Singularitarian perspective and realized that probably only a relative handful of humans will ever deliberately choose to upload themselves into computers, at least initially. If you freed billions from labor, at least half of them would probably choose to live a comfortable but mundane life in physical reality at an earlier stage of technological development (anywhere from Amish levels all the way to “living perpetually in the Y2K epoch”).
Because let’s think about this in terms of demographics. Generally, the older you get, the more conservative and technophobic you become. This is not a hard and fast rule, but a general trend. Millennials are growing more liberal with age, but they’re not growing any less technophobic; it tends to be Millennials, for example, leading the charge against AI art and the idea of automating “human” professions. Generation Z is the most technophilic generation yet, at least in the Anglosphere, but is only roughly 1⁄5 of the American population. If any generation is going to upload en masse, it will likely be the Zoomers (unless, for whatever reason, mind-uploading turns out to be the literal only way to stave off death; then, miraculously, many members of the elderly generations will “come around” to the possibility in the years and months preceding their exit).
Currently, there are still a few million living members of the Greatest Generation kicking around on Earth, and even in the USA, they’re something around 0.25% of our population:
https://www.statista.com/statistics/296974/us-population-share-by-generation/
If we create an aligned AGI in the next five years (again, by some miracle), I can’t see this number dropping off to anywhere below 0.10%. This generation is likely the single most conservative of any still living, and almost without question, 99% of this generation would be radically opposed to any sort of cybernetic augmentation or mind uploading if given the choice. The demographics don’t become that much more conducive towards willing mind-uploading the closer to the present you get, especially as even Generation X becomes more conservative and technophobic.
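To make that extrapolation concrete, here is a back-of-envelope sketch; the starting share comes from the Statista figure above, but the annual cohort mortality and population-growth rates are just guesses plugged in for illustration, not actuarial data.

```python
# Back-of-envelope projection (illustrative assumptions, not actuarial data):
# how the Greatest Generation's share of the US population might shrink
# over the next five years.

start_share = 0.0025       # ~0.25% of the US population today (Statista figure above)
annual_mortality = 0.15    # assumed yearly mortality for a cohort aged roughly 97+
population_growth = 0.004  # assumed overall US population growth per year

share = start_share
for year in range(1, 6):
    share *= (1 - annual_mortality) / (1 + population_growth)
    print(f"Year {year}: ~{share:.3%} of the population")

# With these guesses the share is still roughly 0.11% after five years,
# consistent with the ~0.10% floor suggested above.
```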
Assuming that even with AGI it takes 20+ years to achieve mind-uploading technology, all that time accomplishes is killing off the Greatest Generation and most of the Silent Generation. It would take extensive persuasion and social engineering for the AGI to convince the still-living humans that a certain lifestyle, and perhaps mind-uploading, is more desirable than continuing to live in physical reality. Perhaps far from the hardest thing an AGI would have to do, but again, this all comes back to the fact that we’re not dealing with a generic superintelligence as commonly imagined, but an aligned superintelligence, one which values our lives, autonomy, and opportunity to live. If it does not value any one of those things, it cannot be considered truly “aligned.” If it does not value our lives, we’re dead. If it does not value our autonomy, it won’t care if we are turned into computronium or outright exterminated for petty reasons. If it does not value our opportunity to live, we could easily be stuck into a Torment Nexus by a basilisk.
Hence I predict that an aligned superintelligence will, almost certainly, allow for hundreds of millions, perhaps even billions, of “Antemillennialists.” Indeed, the best way to describe them would be “humans who live their lives, but better.” I personally would love to live in full-dive VR indefinitely, but I know for a fact this is not a sentiment shared by 90% of the people around me in real life; my own parents are horrified by the prospect, my grandparents actively consider it Satanic, and others who do consider it possible simply don’t like the way it feels. Perhaps when presented with the technology they’ll change their minds, but there’s no reason to deny their autonomy just because I believe I know better than they do. Physical reality is good enough for most people; a slightly improved physical reality is optimal.
I think of this in similar terms to how we humans now treat animals. Generally, we’re misaligned with most creatures on Earth, but as for animals we actively care about and try to assist, we tended to put them in zoos until we realized this caused needless behavioral frustration because they were so far out of their element. Animals in zoos technically live much “better” lives, and yet we’ve decided that those animals would be more satisfied, according to their natures, living freely in their natural environments. We now realize that, even if it might lead to greater “real” suffering due to the laws of nature, animals are better left in the wild or in preserves, where we actively contribute to their preservation and survival. Only those that absolutely cannot handle life in the wild are kept in zoos or in homes.
If we humans wanted, we absolutely could collect and put every chimpanzee into a zoo right now. But we don’t, because we respect their autonomy and right to life and natural living.
I see little reason for a Pink Shoggoth-type AGI to not feel similarly for humans. Most humans are predisposed towards lifestyles of a pre-Singularity sort. It is generally not our desire to be dragged into the future; as we age, most of us tend to find a local maximum of nostalgic comfort and remain there as long as we can. I myself am torn, in fact, between wanting to live in FIVR and wanting to live a more comfortable, “forever-2000s/2010s” sort of life. I could conceivably live the latter in the former, but if I wanted to live the latter in physical reality, a Pink Shoggoth surely would not stop me from doing so.
In fact, that could be a good alignment test: in a world where FIVR exists, ask the Pink Shoggoth to let you live a full life in physical reality. If it’s aligned, it should say “Okay!”
Edit: In fact, there’s another bit of evidence for this: uncontacted tribes. There’s zero practical benefit to us in leaving the people of North Sentinel Island alone, for example, and yet the only people arguing that we should forcibly integrate them into society tend to be seen as “colonialist altruists” who feel that welfare is more important than autonomy. Our current value system says that we should respect the Sentinelese’s right to autonomy, even if they live in conditions we’d describe as “Neolithic.”
The Sentinelese offer little to nothing of use to us, while the government of India could realistically use North Sentinel Island for many purposes. The Sentinelese suffer an enormous power imbalance with outside society. They are even hostile towards the outside world, actively killing those who get too close, and yet we still do not attempt to wipe them out or forcibly integrate them into our world. Even when the Sentinelese are put into a state of peril, we do not intervene unless they make active requests for help.
By all metrics, our general society’s response to the Sentinelese is what “alignment to the values of a less-capable group” looks like in practice. An aligned superintelligence might respond very similarly to our species.
I suspect that
1. post-singularity reality would be so starkly different from the current one that it would be alien to roughly the same degree for all people, regardless of generation
2. people mostly see “uploading” as “being the same, but reasonably better” too. I.e. they believe that their uploaded version would still be them in nearly all respects. I don’t quite understand how that could be possible. Would a machine have to accurately emulate each atom of my body? Or would it be some supersentience that has only some similarities to the original?
Also, I believe that meat people would have intrinsic objective value as an irreplaceable source of data about the “original” people, just as the Sentinelese are an irreplaceable source of data about uncontacted tribes.