Probabilities of basic cryonics tech working are questions of neuroscience, full stop
I’d say full speed ahead, Cap’n. Basic cryonics tech working—while being a sine qua non—isn’t the ultimate question for people signing up for cryonics. It’s just one factor in the probability calculation for the actual goal: “Will I be revived (in some form that would be recognizable to my current self as myself)?” (You’ve mentioned that in the parent comment, but it deserves more than a passing remark.)
And that most decidedly requires a host of complex assumptions, such as “an agent / a group of agents will have an interest in expending resources on reviving a group of frozen old-version Homo sapiens, without any enhancements, me among them”, “the future agents’ goals cannot be served merely by reading my memory engrams, then using them as a database, without granting personhood”, “there won’t be so many cryo-patients at some future point (once it catches on with better tech) that thawing all of them would be infeasible, or disallowed”, not to mention my favorite, “I won’t be instantly integrated into some hivemind in which I lose all traces of my individuality”.
What we’re all hoping for, of course, is for a benevolent super-current-human agent—e.g. an FAI—to care enough about us to solve all the technical issues and grant us back our agenthood. By construction, at least in your case, the advent of such an FAI would come after your passing (you wouldn’t be frozen otherwise). That means that you (of all people) would also need to qualify the most promising scenario, “there will be a friendly AI to do it”, with “and it will have been successfully implemented by someone other than me”.
Also, with current tech, not only would true x-risks preclude you from ever being revived; even non-x-risk catastrophic events (partial civilizational collapse due to Malthusian dynamics, etc.) could easily destroy the facility you’re held in, or take away anyone’s incentive to maintain it. (TW: That’s not even taking into account Siam the Star Shredder.)
I’m trying to avoid motivated cognition here, but there are a lot of factors going into the actual calculation, and while that in itself doesn’t mean the probability will be vanishingly small, there seem to be many more scenarios (and, given human nature, unfortunately likelier ones, contributing more probability mass) in which your goal wouldn’t be achieved—or would be achieved in some undesirable fashion—than the “here you go, welcome back to a society you’d like to live in” variety.
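To make the “lot of factors” point concrete, here’s a minimal back-of-envelope sketch of the conjunctive calculation. Every number in it is an arbitrary placeholder I made up for illustration (plug in your own), and the factors are treated as independent, which they surely aren’t:

```python
# Toy conjunctive estimate of P(revived as a recognizable self).
# ALL numbers below are made-up placeholders, not claims.
p_tech_works     = 0.5  # basic cryonics tech preserves the relevant information
p_no_catastrophe = 0.7  # facility survives; no collapse removes the incentive to maintain it
p_revival_wanted = 0.3  # some agent expends resources on reviving frozen baseline humans
p_personhood     = 0.6  # restored as an agent, not merely read out as a database
p_not_disallowed = 0.8  # thawing isn't infeasible or forbidden by sheer patient numbers
p_individuality  = 0.9  # no forced hivemind integration erasing individuality

p_goal = (p_tech_works * p_no_catastrophe * p_revival_wanted
          * p_personhood * p_not_disallowed * p_individuality)
print(f"P(goal) ~ {p_goal:.3f}")  # ~ 0.045 with these placeholders
```

Even with each placeholder set fairly generously, the product lands under 5%; the point isn’t the specific output, just that a conjunction of many requirements shrinks quickly.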
That being said, I’ll take the small chance over nothing. Hopefully some decent options will be established near my place of residence soon.