Clarke’s quote is apt, but the rest of the article does not hold all that well together. All you can say about cryonics is that it arrests the decay at the cost of destroying some structures in the process. Whether what is left is enough for eventual reversal, whether biological or technological, is a huge unknown whose probability you cannot reasonably estimate at this time. All we know is that the alternative (natural decomposition) is strictly worse. If someone gives you a concrete point estimate probability of revival, their estimate is automatically untrustworthy. We do not have anywhere close to the amount of data we need to make a reasonable guess.
If someone says “I believe that the probability of cryonic revival is 7%”, what useful information can you extract from it, beyond “this person has certain beliefs”? Of course, if you consider them an authority on the topic, you can decide whether 7% is enough for you to sign up for cryonics. Or maybe you know them to be well calibrated on a variety of subjects they have expressed probabilistic views on, including topics with so many unknowns that being well calibrated on them would require some special ineffable insight. I am skeptical that there is a reference class like this, one that includes cryonic revival, on which anyone can be considered well calibrated.
It is, at the very least, interesting that people signed up for cryonics tend to give lower estimates of the probability of future revival than the general population does. This may offer useful insight into the state of the field (“If you haven’t looked into it, the odds are probably worse than you think.”), into variance in human decision making (“How much do you value increased personal longevity, really?”), and into how the field should strive to educate, market, and grow.
It could also be interesting and potentially insightful to see how those numbers have changed over time. Even if the numbers themselves are roughly meaningless, any trends in them may reflect advancement of the field, better marketing, or a change in the population signing up or considering doing so. If I had strong reason to think there were encouraging trends in odds of revival, as well as in cost and public acceptance, that would increase my odds of signing up. After all, under most non-catastrophic-future scenarios, and barring personal disasters likely to prevent preservation anyway, I’m much more likely to die in the 2050s-2080s than before then, and to be preserved with those decades’ technologies, which means compounding positive trends vs. static odds can make a massive difference to me. OTOH, if we’re not seeing such improvement yet but there’s reason to think we will, then waiting a few years could greatly reduce my costs (relative to early adopters) without dramatically increasing my odds of dying before signing up.
(If we’re really lucky and sane in the coming decades there’s a small chance preservation of some sort will be considered standard healthcare practice by the time I die, but I don’t put much weight on that.)
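The compounding-trends point can be illustrated with a toy calculation. Every number below is invented purely for illustration (as argued above, nobody can actually estimate revival probabilities today); the point is only the shape of the arithmetic: a steady relative improvement in odds for newly preserved patients compounds dramatically over the decades between now and when I am likely to be preserved.

```python
# Toy illustration of compounding trends vs. static odds.
# All numbers are invented for illustration only; they are not
# estimates of anything.

def compounded_odds(base_odds, annual_growth, years):
    """Odds after `years` of steady relative improvement, capped at 1."""
    return min(base_odds * (1 + annual_growth) ** years, 1.0)

base = 0.02    # hypothetical odds for someone preserved today
growth = 0.04  # hypothetical 4% relative improvement per year

for years in (0, 20, 40, 60):
    print(years, round(compounded_odds(base, growth, years), 3))
```

Under these made-up parameters, odds at preservation in 60 years are roughly ten times the odds today, which is why a positive trend matters far more to someone expecting to be preserved decades from now than the current point-in-time number does.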
Your comment creates a misleading impression of my article. Nowhere do I say experts can give a point probability of success. On the contrary, I frequently reject that idea. I also find it silly when people say the probability of AI destroying humans is 20%, or 45%, or whatever.
You don’t provide any support for the claim that “the rest of the article doesn’t hold all that well together”, so I’m unable to respond usefully.
“If someone gives you a concrete point estimate probability of revival, their estimate is automatically untrustworthy. We do not have anywhere close to the amount of data we need to make a reasonable guess.”

This goes strongly against probabilistic forecasting. It seems like a wrong principle to me.