Interesting. The referenced discussions often assume a post-singularity AI (which, for the record, I think very unlikely). The development of that technology is likely to be, if not exactly independent, only loosely correlated with the technology for cryonic revival, isn’t it?
Certainly you have to allow for the possibility of cryonic revival without the post-singularity AI, and I think we can make better guesses about the possible configurations of those worlds than post-AI worlds.
I see the basic pro-cryonics argument as having the form of Pascal’s wager. Although the probability of success might be on the low side (for the record, I think it is very low), the potential benefits are so great that it is worth it. The cost is paid in mere money. But is it?
In my main post I used the “torture by theocracy” example as an extreme, but I think there are many other cases to worry about.
Suppose that among a population of billions, there are a few hundred people who can be revived. The sort of society we all hope for might just revive them so they can go on to achieve their inherent potential as they see fit. But in societies that are just a bit more broken than our own, those with the power to cause revival may have self-interest very much in mind. You can imagine that the autonomy of those who are revived would be seriously constrained, and this by itself could make a post-revival life far from what people hope. The suicide option might be closed off to them entirely; if they came to regret their continued existence they might well be unable to end it.
Perhaps the resurrected will have to deal with the strange and upsetting limitations that today’s brain-damage patients face. Perhaps a future society will be unable to find a way for revived people to overcome such problems, and yet will keep them alive for hundreds of years because they are just too valuable as experimental subjects.
Brain damage aside, what value will they have in a future society? They will have unique, first-hand knowledge of life in a bygone century, including its speech patterns and thought patterns. I think modern historians would be ecstatic at the prospect of being able to observe or interview pockets of people from various epochs in history, including ancient ones (ethical considerations aside).
Perhaps they will be valued as scientific subjects and carefully insulated from any contaminating knowledge of the future world as it has developed. That might be profoundly boring and frustrating.
Perhaps the revived will be confined in “living museums” where they face a thousand years of re-enacting what life was like in 21st-century America, perhaps coerced to do it in a way that pleases their masters.
If the revived people are set free, what then? Older people in every age typically shake their heads in dismay at changes in the world; this effect, magnified manyfold, could be profoundly unsettling, even downright depressing.
One can reasonably object that all of these scenarios are low-probability. But are they less probable than the positive high-payoff scenarios (in just, happy societies that value freedom, comfort, and the pursuit of knowledge)? Evidence? Are you keeping in mind optimism bias?
In deciding in favor of cryonic preservation, I don’t think the decision can be framed as near costs traded off against scenarios of far happiness. There’s far misery to consider as well.
In reply to Adele_L’s comment in this thread:
While I admit that a theocratic torturing society seems less likely to develop the technology to revive people, I’m not at all sure that an enlightened one is more likely to do so than the one I assumed as the basis of my other examples. A society could be enlightened in various ways and still not think it a priority to revive frozen people for their own sake. But a society could be much more strongly motivated if it was reviving a precious commodity for the selfish ends of an elite. This might also imply that it would be less concerned about risks, such as brain damage, that would interfere with the revivee’s happiness while still leaving them useful for the reviver’s purposes.