If I could be confident of 5%, it would be attractive right now. The problem isn’t really any single point of failure; it’s that there are far too many points of failure, each with a pretty good chance of happening, any one of which dooms all or most of the clients. That said, if I had substantially more assets it would be attractive even at 0-2%.
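To make the conjunction worry concrete, here is a minimal sketch of how many small, independent failure chances compound; the count and per-point probability are illustrative assumptions, not figures from this thread:

```python
# Illustrative only: n independent failure points, each with a small
# chance of occurring, where any single one dooms the patient.
n_points = 20        # assumed number of failure points
p_fail_each = 0.10   # assumed per-point failure probability

p_all_averted = (1 - p_fail_each) ** n_points
print(f"P(all {n_points} points averted) = {p_all_averted:.3f}")  # ~0.122
# With 40 such points, the same arithmetic gives ~0.015.
```

Under assumptions like these, the naive conjunction is brutal, which is the shape of the worry here.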
Most of the points of failure seem like things that could be averted by a small group of people who care about the patients not thawing. Usually, when people care about something, it is at least feasible, if not inevitable, that all the avertable failures actually get averted. Rocket launches, for example, have at least hundreds of potential points of failure; your reasoning would seem to imply that rocket launches could never happen. In other words, the points of failure look pretty correlated with each other, with the shared factors being “civilization doesn’t fall apart” and “there are people with resources who care about the patients”.
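A sketch of the correlation point: if the failure points mostly hang off a couple of shared factors, the conjunction collapses to a few terms. All numbers here are assumptions for illustration only:

```python
# Illustrative comparison of a naive independent-failures model against
# one where most risk is concentrated in two shared factors.
p_civilization_holds = 0.8       # assumed shared factor
p_caring_stewards = 0.7          # assumed shared factor
p_residual_risks_averted = 0.9   # assumed leftover independent risk

p_naive = 0.9 ** 20  # 20 "independent" points at 90% each, as above
p_correlated = (p_civilization_holds
                * p_caring_stewards
                * p_residual_risks_averted)
print(f"naive: {p_naive:.3f}, correlated: {p_correlated:.3f}")  # 0.122 vs 0.504
```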
See https://www.lesswrong.com/posts/ebiCeBHr7At8Yyq9R/being-half-rational-about-pascal-s-wager-is-even-worse?commentId=4ZrmawPKwNqMbzMXy
and
https://www.lesswrong.com/posts/ebiCeBHr7At8Yyq9R/being-half-rational-about-pascal-s-wager-is-even-worse?commentId=XxANusJcNkiRcq7kF
Is your 5% here an actual probability estimate (e.g., you have experience making probability estimates that you get feedback on, and you followed a similar process in this case; or you’d take a bet against someone who said “20% probability”, or on the other side “1% probability”, if the bet were easy to resolve), or is it more an impressionistic, non-quantitative statement of your sense of the plausibility of cryonics working? I ask because maybe the meta-question here is less “what specifically is JBlack’s expected utility calculation, and where might there be important flaws, if any” and more “how is JBlack dealing with decision-making for nebulous questions like cryonics”.
I have in fact looked into cryonics as a possible life-extension mechanism and examined a bunch of the possible failure modes, and many of these cannot be reliably averted by a group of well-meaning people. If you’re actually trying to model “people who are not currently investing in cryonic preservation”, then it does little good to post hypotheses such as “they are too scared of false hope”. Maybe some are, but certainly not all.
Also, yes: my threshold of around 5% is where I have calculated that it would be “worth it to me”, and my upper bound (not estimate) of 2% is based on some weeks of investigating cryonic technology, the social context in which it occurs, and expectations for the next 50 years. If there have been any exciting revolutions in the past ten years that significantly alter these numbers, I haven’t seen them mentioned anywhere (including company websites and sites like this one).
As far as bets go, I am literally staking my life on this estimate, am I not?
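For concreteness, here is a minimal break-even sketch of the kind of “worth it to me” threshold calculation described above. Both figures are invented placeholders, chosen only to illustrate how a ~5% threshold could arise, not JBlack’s actual numbers:

```python
# Hypothetical break-even sketch for a "worth it to me" threshold:
# the probability at which expected value covers the cost.
cost = 100_000                 # assumed total cost of signing up, in dollars
value_if_revived = 2_000_000   # assumed subjective value of revival

p_break_even = cost / value_if_revived
print(f"break-even probability = {p_break_even:.1%}")  # 5.0%
```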
>then it does little good to post hypotheses such as “they are too scared of false hope”
What would you recommend? I spend time talking to such people, but I’m just one person and I wish there were more theory about this. I’m surprised LW isn’t interested.
>Maybe some are, but certainly not all.
What do you view as the main factors?
>many of these cannot be reliably averted by a group of well-meaning people
Which ones are you thinking of? Some that I’m worried about (a rough sketch of how these might combine follows the list):
-- forceful intervention (e.g. by a state or invader) that prevents people from putting liquid nitrogen (LN) in the dewars
-- collapse of civilization, such that it’s too expensive to produce LN and even people who care a lot can’t rig something together (have you looked into this? do you know under what circumstances this might happen?)
-- X-risk, e.g. AGI risk, killing everyone (in particular, preventing humanity from developing nanotech etc.)
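As flagged above, a rough sketch of how these disjunctive risks might combine, treating them as independent; all three probabilities are placeholders, not estimates from this thread:

```python
# Chance that at least one of the three listed catastrophes occurs,
# assuming independence. All probabilities are placeholders.
p_forceful_intervention = 0.05
p_collapse = 0.10
p_xrisk = 0.15

p_any = 1 - (1 - p_forceful_intervention) * (1 - p_collapse) * (1 - p_xrisk)
print(f"P(at least one) = {p_any:.3f}")  # ~0.273
```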
> investigation of cryonic technology, the social context in which it occurs, and expectations for the next 50 years
Which seem like key points of failure? You said there are lots of points of failure, but that by itself doesn’t give an estimate; if we expand the conjunctions, we also have to expand the disjunctions. For example, I could be worried that I might die in a way that makes my cryo-preservation worse: I’m left dead for a few days, there’s physical trauma to my brain, etc. This does decrease the probability of success, but we also have to factor in uncertainty about how good preservation needs to be; it’s not just P(perfect preservation) × P(humanity makes it to nanotech), it’s P(perfect preservation) × P(nanotech) + P(bad preservation) × P(nanotech) × P(souls can be well reconstructed from bad preservations).
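A minimal numeric version of that expansion; every probability here is a placeholder, not anyone’s actual estimate:

```python
# Numeric version of the conjunction/disjunction expansion above:
# success can come via a perfect preservation or via a bad-but-usable one.
p_nanotech = 0.3           # assumed: revival tech is developed
p_perfect = 0.2            # assumed: preservation goes perfectly
p_bad = 0.5                # assumed: preservation is degraded but extant
p_reconstruct_bad = 0.25   # assumed: recoverable from a bad preservation

p_success = p_perfect * p_nanotech + p_bad * p_nanotech * p_reconstruct_bad
print(f"P(success) = {p_success:.4f}")  # 0.0600 + 0.0375 = 0.0975
```

The point of the expansion is that adding failure scenarios to the conjunction also adds success paths to the disjunction, so the estimate doesn’t just fall monotonically as you enumerate more detail.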