It is not cryonics which carries this risk, it is the future in general.
Consider: what guarantees that you will not wake up tomorrow morning to a horrible situation, with nothing familiar to cling to? Nothing; you might be kidnapped during the night and sequestered somewhere by terrorists. That is perhaps a far-out supposition, but no more fanciful than whatever your imagination is currently conjuring about your hypothetical revival from cryonics.
The future can be scary, I’ll grant you that. But the future isn’t “200 years from now”. The future is the next breath you take.
Not entirely. People who are cryonically preserved are legally deceased. There are possible futures which are only dystopic from the point of view of the frozen penniless refugees of the 21st century.
I think the chances of this are small—most people would recognize that someone revived is as human as anyone else and must be afforded the same respect and civil rights.
You don’t have to die to become a penniless refugee. All it takes is for the earth to move sideways, back and forth, for a few seconds.
I wasn’t going to bring this up, because it’s too convenient and I was afraid of sounding ghoulish. But think of the people in Haiti who were among the few with a secure future, one bright afternoon, and who became “penniless refugees” in the space of a few minutes. You don’t even have to postulate anything outlandish.
You are wealthy and well-connected now, compared to the rest of the population, and more likely than not to still be wealthy and well-connected tomorrow; the risk of losing these advantages looms large because you feel like you would not be in control while frozen. The same perception takes over when you decide between flying and driving somewhere: it feels safer to drive, to many people.
Yes, there are possible futures where your life is miserable, and the likelihoods do not seem to depend significantly on the manner in which the future becomes the present—live or paused, as it were—or on the length of the pauses.
The likelihoods do strongly depend on what actions we undertake in the present to reduce what we might call “ambient risk”: reduce the more extreme inequalities, attend to things like pollution and biodiversity, improve life-enhancing technologies, foster a political climate maximally protective of individual rights, and so on, up to and including global existential risks and the possibility of a Singularity.
Eh. At least when you’re alive, you can see nasty political things coming—from a couple of meters off, if not kilometers. Things can change a lot more while you’re vitrified in a canister for 75–300 years than they can while you’re asleep. I prefer Technologos’ reply, plus the economic consideration that reviving someone would likely be a fairly altruistic act.
Most of what you’re worried about should be UnFriendly AI or insane transcending uploads; lesser forces probably lack the technology to revive you, and the technology to revive you bleeds swiftly into AGI or uploads.
If you’re worried that the average AI which preserves your conscious existence will torture that existence, then you should also worry about scenarios where an extremely fast mind strikes so fast that you don’t have the warning required to commit suicide—in fact, any UFAI that cares enough to preserve and torture you has a motive to deliberately avoid giving such warning. This can happen at any time, including tomorrow; no one knows the space of self-modifying programs well enough to predict when the aggregate of meddling dabblers will hit something that effectively self-improves. Without benefit of hindsight, it could have been Eurisko.
You might expect more warning about uploads, but, given that you’re worried enough about negative outcomes to forgo cryonic preservation out of fear, it seems clear that you should commit suicide immediately upon learning of the existence of whole-brain emulation, or of any technology that seems like it might enable some party to run WBE in an underground lab.
In short: As usual, arguments against cryonics, if applied evenhandedly, tend to also show that we should commit suicide immediately in the present day.
Morendil put it very well: “The future isn’t 200 years from now. The future is the next breath you take.”