Eh. At least when you’re alive, you can see nasty political things coming, at least from a couple meters off, if not kilometers. Things can change a lot more when you’re vitrified in a canister for 75-300 years than they can while you’re asleep. I prefer Technologos’ reply, plus the point that economic considerations make it likely that reviving someone would be a pretty altruistic act.
Most of what you’re worried about should be UnFriendly AI or insane transcending uploads; lesser forces probably lack the technology to revive you, and the technology to revive you bleeds swiftly into AGI or uploads.
If you’re worried that the average AI which preserves your conscious existence will torture that existence, then you should also worry about scenarios where an extremely fast mind strikes before you have the warning required to commit suicide. In fact, any UFAI that cares enough to preserve and torture you has a motive to deliberately avoid giving such warning. This can happen at any time, including tomorrow; no one knows the space of self-modifying programs well enough to predict when the aggregate of meddling dabblers will hit something that effectively self-improves. Without benefit of hindsight, it could have been Eurisko.
You might expect more warning about uploads, but given that you’re worried enough about negative outcomes to forgo cryonic preservation out of fear, it seems clear that you should commit suicide immediately upon learning of the existence of whole-brain emulation, or of technology that seems like it might enable some party to run WBE in an underground lab.
In short: As usual, arguments against cryonics, if applied evenhandedly, tend to also show that we should commit suicide immediately in the present day.
Morendil put it very well: “The future isn’t 200 years from now. The future is the next breath you take.”