The idea that when people disagree over complex topics they should break their disagreement down is one I’ve learned in part from Robin Hanson, and in fact he applies it to cryonics.

While Robin has fewer categories, if you look at the detailed probabilities people gave, we could throw out most of their answers without changing their final numbers: people were good about saying “that seems very unlikely” and giving near-zero probabilities. Most of the effect on the total comes from the few questions where people said “oh, that seems potentially serious”. If I do this again I’ll fold many of the less likely questions into more likely ones (mostly so I get a shorter survey), but I don’t think that will change the outcome much.

I would expect unpacking to work for two reasons: it helps avoid the planning fallacy, and it lets us see (and focus on) the individual steps people most disagree on.
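As a toy illustration of the folding point above, here’s a minimal Python sketch with made-up numbers (nothing from the actual survey), assuming the bottom line is computed as a product of independent per-step probabilities:

```python
# Made-up per-step failure probabilities, combined multiplicatively
# (assuming independent steps, as in a survival-chain model).
near_zero = [0.01, 0.005, 0.02, 0.01]  # steps respondents called "very unlikely" to fail
serious = [0.30, 0.50, 0.40]           # steps respondents called "potentially serious"

def p_success(failure_probs):
    """Probability that every step succeeds."""
    p = 1.0
    for f in failure_probs:
        p *= 1 - f
    return p

print(p_success(near_zero + serious))  # ~0.201 with every question included
print(p_success(serious))              # ~0.210 with the near-zero questions folded away
```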
unpacking all the disjunctive paths to success into finer and finer subcategories
As far as I can tell there’s really only one path to success, and it’s the one I put here. In my reply to torekp I talked about why I thought in-the-flesh revival was sufficiently less likely that it doesn’t matter. What disjunctive paths would you put forward where “you sign up to get frozen and start paying for it” makes the difference in whether you’re revived?

If any disjunctive paths are serious enough, I’m willing to go back and add them to my model.
EDIT retracted: “looking up the Subadditivity effect I think your claim is just wrong. If anything breaking larger probabilities down makes for larger combined probabilities”. [This was wrong because I was confusing the negative and positive formulations. Robin Hanson’s formulation is positive, estimating the probability that cryonics works, which subadditivity should push in the ‘cryonics-likely’ direction; mine was negative, estimating the probability of failure, which subadditivity should push in the ‘cryonics-unlikely’ direction.]
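To spell out the direction of the bias in each formulation, here’s a small sketch with hypothetical elicited numbers (not anything anyone actually gave), assuming the standard unpacking result that the pieces tend to sum to more than the one-shot packed judgment:

```python
# Hypothetical elicited numbers, chosen only to show the direction of the bias.

packed_success = 0.10                  # one-shot answer to "P(cryonics works)?"
unpacked_success = [0.06, 0.05, 0.04]  # per-path answers; sum = 0.15 > 0.10
# Positive formulation (Robin Hanson's): unpacking inflates the success
# estimate, pushing in the 'cryonics-likely' direction.

packed_failure = 0.90                  # one-shot answer to "P(cryonics fails)?"
unpacked_failure = [0.50, 0.30, 0.15]  # per-mode answers; sum = 0.95 > 0.90
# Negative formulation (mine): unpacking inflates the failure estimate,
# pushing in the 'cryonics-unlikely' direction.

print(sum(unpacked_success), sum(unpacked_failure))  # 0.15 0.95
```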
As far as I can tell there’s really only one path to success, and it’s the one I put here.
I raised an alternative path to success when we discussed this Sunday, at the end, when you asked for the probability of “other failure” and I argued that it should go both ways. Specifically, I suggested that we could be in a multiverse such that being cryopreserved, even if poorly, would increase the probability of other universes copying you into them. I don’t remember the probability I gave this at the time, but I believe it was on the order of 10^-2: small, but still bigger than your bottom-line probability of 1/1500 (which I disagree with) for cryonics working the obvious way.
Some other low-probability paths-to-win that you neglected:
My cryopreservation subscription fees are the marginal research dollars that prevent me from dying in the first place, via a cryonics-related discovery with non-cryonics implications.
I am unsuccessfully preserved, but my helping cryonics reach scale saves others.
A future AI keeps me alive longer because having signed up for cryonics signaled that I value my life more.
While my cryopreserved brain is not adequate to resurrect me by itself, it will be combined with electronic records of my life and others’ memories of me to build an approximation of me.
There are also some less-traditional paths-to-lose:
Your cryopreservation subscription fees prevent you from buying something else that ends up saving your life (or someone else’s)
You would never die anyway, so your cryopreservation fees only cost pre-singularity utilons from you (or others you would have given the money to).
Simulation is possible, but it is for some reason much “thinner” than reality; that is, a given simulation, even as it runs on a computer existing in a quantum MWI, follows only a very limited number of quantum branches, and so has a tiny impact on the measure of the set of future happy versions of you (smaller even than that of the plain old non-technological-quantum-immortality versions who simply didn’t happen to die).
You are resurrected by a future UFAI in a hell-world. For instance, in order to get one working version of you, the UFAI must create many almost-versions which are painfully insane; and its ethics say that’s OK. And it does this to all corpsicles it finds but not to any other dead people.
I have strong opinions about the likelihood of these (I’d put one at p>99% and another at p<1%), but in any case they’re worth mentioning.
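If any of these paths-to-win did get added to the model, they would combine as disjuncts roughly like this; a minimal sketch assuming independence, with illustrative path probabilities (the 10^-2 figure is the multiverse-copy guess from above):

```python
# Independent disjunctive paths-to-win combine as
# P(some path works) = 1 - product over paths of (1 - p_i).
main_path = 1 / 1500        # the survey's bottom line for the obvious path
extra_paths = [1e-2, 1e-4]  # e.g. multiverse copying, reconstruction from records

p_all_fail = 1 - main_path
for p in extra_paths:
    p_all_fail *= 1 - p

print(1 - p_all_fail)  # ~0.0108: one 10^-2 disjunct dominates the 1/1500 main path
```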
Hmm, regarding quantum immortality: I did think about it. Taken to its extreme, I could perform quantum suicide while tying the result of the quantum draw to the lottery. Then it occurred to me that the vast majority of worlds (the ones in which I did not win the lottery) would contain one more sad mother. Such a situation scores far lower in my utility function than the status quo does.
I feel I should treat quantum suicide by cryostination the same way. The only problem is that the status quo bias works against me this time.
Sorry: I edited the comment you were responding to to clarify my intended meaning, and now perhaps the (unintended?) idea you were responding to is no longer there.
Whoops; this totally slipped my mind. Thanks for including them here.
Yes, that was the claim.