If everything else we believe about the universe stays true, and humanity survives the next century, cryonics should work by default. Are there a number of things that could go wrong? Yes. Is the disjunction of all those possibilities a large probability? Quite. But by default, it should simply work.
Hmm. The “if humanity survives the next century” covers the uFAI possibility (where I suspect the bulk of the probability is). I’m taking it as a given that successful cryonics is possible in principle (no vitalism etc.). Still, even conditional on no uFAI, there is a substantial probability that cryonics, as a practical matter of actually reviving patients, will fail:
Technology may simply not be applied in that direction. The amount of specific research needed to
actually revive patients may exceed the funding available.
Technology as a whole may stop progressing. We’ve had a lot of success in the last few decades in computing, less in energy, little in transportation, and what looks much like saturation in pharmaceuticals. The lithography advances which have been driving computing look like they have maybe another factor of two to go (unless we get atomically precise nanotechnology, which mostly hasn’t been funded).
Perhaps there is a version of “coming to terms with one’s mortality” which isn’t deathist, isn’t theological, and isn’t some vague displacement of one’s hopes onto later generations, but is simply the recognition that hope of increasing one’s lifespan by additional effort isn’t plausibly supported by the evidence, given the tradeoff against what one could instead do with that effort.
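The “disjunction of all those possibilities” point from the quoted claim can be made concrete: even when each failure mode above is individually modest, independent risks compound into a large total. A toy sketch, where every probability is a made-up placeholder for illustration (none of these numbers come from the discussion itself):

```python
# Toy illustration of how independent failure modes compound.
# All probabilities are illustrative placeholders, not estimates.
failure_modes = {
    "revival research never funded": 0.3,
    "technological progress stalls": 0.2,
    "organizational/social failure": 0.2,
}

# P(no failure mode occurs), assuming independence.
p_all_avoided = 1.0
for p in failure_modes.values():
    p_all_avoided *= 1.0 - p

# P(at least one failure mode occurs) = 1 - product of survivals.
p_at_least_one = 1.0 - p_all_avoided
print(f"P(at least one failure) = {p_at_least_one:.3f}")  # 0.552 here
```

With these placeholder numbers, no single risk exceeds 0.3, yet the disjunction is already over one half, which is the shape of the argument against “it should simply work by default.”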
’scuse the self-follow-up...

One other thing that makes me skeptical about “cryonics should work by default”:
A large chunk of what makes powerful parts of our society value (at least some) human life is their current inability to manufacture plug-compatible replacements for humans. Neither governments nor corporations can currently build taxpayers or employees. If these structures gained the ability to build human equivalents for the functions that they value, I’d expect policies like requiring emergency rooms to admit people regardless of ability to pay to be dropped.
Successful revival of cryonics patients requires the ability to either repair or upload a frozen,
rather damaged, brain. Either of these capabilities strongly suggests the ability to construct
a healthy but blank brain or uploaded equivalent from scratch—but this is most of what is needed
to create a plug-compatible replacement for a person (albeit requiring training—one time anyway,
and then copying can be used...).
To put it another way: corporations and governments have capabilities beyond what individuals
have, and they aren’t known for using them humanely. They already are uFAIs, in a sense.
Fortunately, for now, they are built of humans as component parts, so they currently can’t
dispense with us. If technology progresses to the point of being able to manufacture human
equivalents, these structures will be free to evolve into full-blown uFAIs, presumably with lethal
consequences.
If “by default” includes keeping something like our current social structure, with structures like
corporations and governments present, I’d expect that for cryonics patients to be revived, our
society would have to hit a very narrow window of technological capability. It would have to be
capable of repairing or uploading frozen brains, but not capable of building plug-in human
equivalents. This looks inherently improbable, rather than what I’d consider a default scenario.