I am puzzled by Eliezer’s confidence in the rationality of signing up for cryonics, given that he thinks it would be characteristic of a “GODDAMNED SANE CIVILIZATION”. I am even more puzzled by the commenters’ overwhelming agreement with Eliezer. I am personally uncomfortable with cryonics for the following two reasons, and am surprised that no one seems to bring them up.
I can see it being very plausible that somewhere along the line I would be subject to immense suffering, to which death would be far preferable, but that I would be either unable to take my own life due to physical constraints or would lack the courage to do so (it takes quite some courage, and persistent suffering, to be driven to suicide, IMO). I see this as analogous to a case where I am very near death and am faced with the following two options:
(a) Have my life support system turned off and die peacefully.
(b) Keep the life support system going, but give up all autonomy over my life and body and place them entirely in the hands of others who are likely not even my immediate kin. I could be made to endure immense suffering, whether through technical glitches (very likely, since this is such a nascent field) or through willful malevolence. In this case I would very likely choose (a).
Note that in addition to prolonged suffering during which I am effectively incapable of pulling the plug on myself, there is also the chance that I would be an oddity to future generations. Perhaps I would be made a circus or museum exhibit for their entertainment. Our race is highly speciesist, and I would not trust future generations, with their bionic implants and so on, to even consider me a member of the same species and grant me the same rights and moral consideration.
Last but not least is a point I made in a comment on Robin Hanson’s post. Robin Hanson expressed a preference for a world filled with more people living on scarce per-capita resources over a world with fewer people and significantly better living conditions. His point was that this gives many people who would otherwise never have existed the opportunity to “be born”, and that this was for some reason a good thing. I suspect that Eliezer has a similar opinion, and this is probably another place where we differ widely.
I couldn’t care less if I had never been born. As the saying goes, I was dead/nonexistent for billions of years and suffered not the slightest inconvenience. I see cryonics followed by a successful recovery as no different from dying and being reborn. Thus I assign virtually zero positive value to being reborn, while I assign huge negative value to the two concerns above.
We are evolutionarily driven to dislike dying and to postpone it for as long as possible. However, I don’t think we are particularly hardwired to prefer this weird form of cryonic rebirth over never waking up at all. Given that our general preference not to die has nothing fundamental about it, but is rather a case of following our evolutionary leanings, what makes it so obvious that cryonic rebirth is a good thing? Some form of longevity research that extends our lives to, say, 200 years, without the cryonic route and all the above risks (especially for the first few generations of cryonic guinea pigs), seems much harder to argue against.
Unfortunately, all the discussion on this forum, including Eliezer’s writings, seems to draw absolutely no distinction between two scenarios:
A. Signing up for cryonics now, with all the associated risks/benefits that I just discussed.
B. Paying upfront, at age 30, to sponsor some experimental longevity research. If the research succeeds and is tested safe, you can use the drugs for free and live to be 200. If not, you live out your regular lifespan and merely forfeit the money you paid to sponsor the research.
I can readily see myself choosing (B) if the rates were affordable and the probability of success seemed high enough to justify them. I find it astounding that repeated shallow arguments on this blog address scenario (A) as though it were identical to scenario (B).
If you were hit by a car tomorrow, would you be lying there thinking, ‘well, I’ve had a good life, and being dead’s not so bad, so I’ll call the funeral service’ or would you be calling an ambulance?
Ambulances are expensive, doctors are not guaranteed to be able to fix you, there is a chance you might be in for some suffering, and you may be out of society for a while until you recover—but you call them anyway. You do this because you know that being alive is better than being dead.
Cryonics is just taking this one step further, and booking your ambulance ahead of time.
Nope, ongoing disagreement with Robin. http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/
Could you supply a (rough) probability derivation for your concerns about dystopian futures?
I suspect the reason people aren’t bringing those possibilities up is that, through a variety of factors, in particular the standard Less Wrong understanding of FAI derived from the Sequences, LWers assign a fairly high conditional probability Pr(Life after cryo will be fun | anybody can and bothers to nanotechnologically reconstruct my brain), along with at least a modest probability of that condition actually occurring.
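To make the implied calculation explicit (a minimal sketch; the numbers below are purely illustrative assumptions of mine, not anyone’s stated estimates):

Pr(fun revival) = Pr(revival) × Pr(Life after cryo will be fun | revival)

If, say, Pr(revival) = 0.1 and Pr(fun | revival) = 0.9, then Pr(fun revival) = 0.1 × 0.9 = 0.09 and Pr(miserable revival) = 0.1 × 0.1 = 0.01, with the remaining 0.9 being ordinary death. Whether signing up is rational then depends on the relative utilities attached to those three outcomes, which is what a “(rough) probability derivation” for the dystopian worry would have to engage with.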
Thank God. I’ve been lurking on this forum for years now, and never have I felt like such an outsider as on this post, especially with the very STRONG language Eliezer uses throughout both this post and the other one. It felt as if I was being called not just a bit irrational but stupid for thinking there was a more than negligible chance that upon waking I would be in a physical or mental state in which death was preferable, yet I would be unable to bring it about.
I can see it being very plausible to be awoken in extreme and constant agony, or perhaps in some sort of permanent vegetative state, or in some not-yet-imagined, unbreakable, torturous servitude lasting 1,000+ years. I just do not see the benefits of simply being alive as outweighing those risks.
It is not cryonics which carries this risk, it is the future in general.
Consider: what guarantees that you will not wake up tomorrow morning to a horrible situation, with nothing familiar to cling to? Nothing; you might be kidnapped during the night and sequestered somewhere by terrorists. That is perhaps a far-out supposition, but no more fanciful than whatever your imagination is currently conjuring about your hypothetical revival from cryonics.
The future can be scary, I’ll grant you that. But the future isn’t “200 years from now”. The future is the next breath you take.
It is not cryonics which carries this risk, it is the future in general.

Not entirely. People who are cryonically preserved are legally deceased. There are possible futures that are dystopic only from the point of view of the frozen penniless refugees of the 21st century.
I think the chances of this are small—most people would recognize that someone revived is as human as anyone else and must be afforded the same respect and civil rights.
You don’t have to die to become a penniless refugee. All it takes is for the earth to move sideways, back and forth, for a few seconds.
I wasn’t going to bring this up, because it’s too convenient and I was afraid of sounding ghoulish. But think of the people in Haiti who were among the few with a secure future, one bright afternoon, and who became “penniless refugees” in the space of a few minutes. You don’t even have to postulate anything outlandish.
You are wealthy and well-connected now, compared to the rest of the population, and more likely than not to still be wealthy and well-connected tomorrow; the risk of losing these advantages looms large because you feel you would not be in control while frozen. The same perception takes over when you decide between flying and driving somewhere: to many people, driving feels safer.
Yes, there are possible futures where your life is miserable, and the likelihoods do not seem to depend significantly on the manner in which the future becomes the present—live or paused, as it were—or on the length of the pauses.
The likelihoods do strongly depend on what actions we undertake in the present to reduce what we might call “ambient risk”: reduce the more extreme inequalities, attend to things like pollution and biodiversity, improve life-enhancing technologies, foster a political climate maximally protective of individual rights, and so on, up to and including global existential risks and the possibility of a Singularity.
Eh. At least when you’re alive, you can see nasty political things coming, at least from a couple of meters off, if not kilometers. Things can change a lot more while you’re vitrified in a canister for 75-300 years than they can while you’re asleep. I prefer Technologos’ reply, plus the point that economic considerations make it likely that reviving someone would be a pretty altruistic act.
Most of what you’re worried about should be UnFriendly AI or insane transcending uploads; lesser forces probably lack the technology to revive you, and the technology to revive you bleeds swiftly into AGI or uploads.
If you’re worried that the average AI which preserves your conscious existence will torture that existence, then you should also worry about scenarios where an extremely fast mind strikes so fast that you don’t have the warning required to commit suicide—in fact, any UFAI that cares enough to preserve and torture you has a motive to deliberately avoid giving such warning. This can happen at any time, including tomorrow; no one knows the space of self-modifying programs well enough to predict when the aggregate of meddling dabblers will hit something that effectively self-improves. Without benefit of hindsight, it could have been Eurisko.
You might expect more warning about uploads, but, given that you’re worried enough about negative outcomes to forgo cryonic preservation out of fear, it seems clear that you should commit suicide immediately upon learning about the existence of whole-brain emulation or technology that seems like it might enable some party to run WBE in an underground lab.
In short: As usual, arguments against cryonics, if applied evenhandedly, tend to also show that we should commit suicide immediately in the present day.
Morendil put it very well: “The future isn’t 200 years from now. The future is the next breath you take.”