HA: “Trying cryonics requires a leap of faith straight into the unknown for a benefit with an unestimable likelihood.”
That’s what probability is for, isn’t it? If you don’t know and have no good prior hints, you pick a prior more or less arbitrarily, making sure only that mutually exclusive outcomes sum to 1, and then update on what little evidence you’ve got. In reality you usually do have some prior predispositions. You can’t throw up your hands and declare the probability too shaky to estimate or even think about, because either way you still make decisions, and those actions, given your goals, implicitly assume some assignment of probability.
In other words, if you decide not to take a bet, you implicitly assign a low probability to the outcome. That conflicts with saying “there are too many unknowns to make an estimate”: you just made one. If you don’t back it up, it’s as good as any other.
I assign a high probability to the success of cryonics (about 50%), conditional on a benevolent singularity (which is a different issue entirely, and not necessarily a high-probability outcome, so it can shift the resulting absolute probability significantly). In other words, if information-theoretic death doesn’t occur during cryopreservation (and I presently see no notable reason to believe it does), singularity-grade AI should provide enough technological surplus to revive patients “for free”. Of course, for the decision it’s the absolute probability that matters, but I have my own reasons to believe a benevolent singularity is technically plausible (relative to other outcomes), and I assign about 10% to it.
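To make that explicit (a rough sketch, taking my figures at face value and assuming revival happens only via the benevolent-singularity route):

P(revival) ≈ P(revival | benevolent singularity) × P(benevolent singularity) ≈ 0.5 × 0.1 = 0.05,

i.e. on the order of a few percent in absolute terms, which is still far from “unestimable”.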