Now, how does this bear on Pascal’s wager? Well, I just don’t register long-term life outcomes that happen with a probability of less than one in a thousand. End of story!
I think you just admitted to being outright irrational about this. That’s fair enough if what you’re trying to explain is why Pascal’s wager doesn’t move you, but if the question is why it shouldn’t (or, less tendentiously, whether it should and why) I don’t think it’ll do.
As for that cryonics probability stackup: I say 75% for longish-term human civilization at a decent level; 25% for getting frozen quickly enough and well enough by current standards; 50% for enough brain structure being preserved; 25% for the cryonics provider surviving (and not screwing up) for long enough; 25% for getting revived into a decent society. There are some factors missing: let’s say 50% for enough technological advances to make reanimation feasible, conditional on civilization surviving at a decent level, and then 50% conditional on that for technological and/or social improvements providing a really good life for a long enough time to make any of this worth while. I think some of those are a bit optimistic and some a bit pessimistic; perhaps the errors cancel out. My guess is that there are more ways for the chain to fail unexpectedly than to succeed unexpectedly, so most likely my estimate is too optimistic. Anyway, it ends up at 3/2048 if I’ve counted my powers of 2 correctly. Four sixes. Enough to consider, for sure, but—for me—not enough to justify the expenditure when it could instead provide somewhat better quality of life for me and my family in the more clearly foreseeable future.
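For concreteness, here is a minimal sketch of that multiplication in Python (the stage labels are my own rough paraphrases of the estimates above, not anything canonical):

```python
from fractions import Fraction

# Stage labels are rough paraphrases of the estimates above.
estimates = {
    "civilization survives at a decent level":    Fraction(3, 4),
    "frozen quickly and well enough":             Fraction(1, 4),
    "enough brain structure preserved":           Fraction(1, 2),
    "provider survives and doesn't screw up":     Fraction(1, 4),
    "revived into a decent society":              Fraction(1, 4),
    "reanimation becomes feasible (conditional)": Fraction(1, 2),
    "long, good post-revival life (conditional)": Fraction(1, 2),
}

p_win = Fraction(1)
for p in estimates.values():
    p_win *= p

print(p_win)          # 3/2048
print(float(p_win))   # ~0.00146
```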
Spelling nitpick: -naut, not -naught. (Related to “nautical”, not “naughty”.)
Here’s the thing: let’s say there are some “objective probabilities” out there, and that your estimate is indeed “most likely too optimistic” compared to those objective probabilities, but that there’s some significant (e.g., 10%) chance that it’s too pessimistic compared with those same probabilities. If your estimate is over-optimistic, it’s over-optimistic by at most 3/2048. If your estimate is over-pessimistic, it could easily be over-pessimistic by more than ten times that much (i.e., by more than 30/2048; Robin Hanson estimates the odds as “>5%”, i.e. more than 100/2048). And if you’re trying to do decision theory on whether or not to sign up for cryonics, you’re basically trying to take an average over the different values these “objective probabilities” could have, weighted by how likely they are to have those values—which means that the scenarios in which your estimate is “too pessimistic” actually have a lot of impact, even if they’re only 10% likely.
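A hedged back-of-the-envelope version of this point, with an assumed 90/10 split between "estimate roughly right or too optimistic" and "estimate badly too pessimistic" (both weights are purely illustrative, not anyone's actual numbers):

```python
# Purely illustrative weights: 90% chance the "objective" probability is about
# the 3/2048 estimate (or lower), 10% chance it is nearer Hanson's ">5%" figure.
p_low,  w_low  = 3 / 2048, 0.9
p_high, w_high = 0.05,     0.1

expected_p = w_low * p_low + w_high * p_high
print(expected_p)   # ~0.0063: more than four times the 3/2048 point estimate
```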
Or in other words: one’s analysis has to be unusually careful if it is to justify a resulting probability as low as 3/2048. Absent a terribly careful analysis, if one is trying to estimate some quantity that kinda sounds plausible or about which experts disagree (e.g., not “the chance we’ll have a major earthquake during such-and-such a particular millisecond”), one should probably just remember the overconfidence results and be wary of assigning a probability that’s very near one or zero.
Great comment—people make this mistake a lot. This should be promoted to a top level post.
It’s quite true that my estimate of 3/2048 is (to say the least) more likely to be too low by 1/512 than to be too high by 1/512 :-). The error is probably something-like-lognormally distributed, being the result of multiplying a bunch of hopefully-kinda-independent errors.
But:
Suppose (for simplicity) that the probability we seek is the product of several independent probabilities, each of which I have independently estimated. Then Pr_actual(win) = Pr_actual(win_1) Pr_actual(win_2) …, and likewise Pr_est(win) = Pr_est(win_1) Pr_est(win_2) …. If I haven’t goofed in estimating the individual probabilities, then Pr_est(win_1) = E_subjective(Pr_actual(win_1)), etc. Hence:

E_subjective(Pr_actual(win))
= {by (objective) independence} E_subjective(product_j Pr_actual(win_j))
= {by (subjective) independence} product_j E_subjective(Pr_actual(win_j))
= {my individual estimates are OK, by assumption} product_j Pr_est(win_j)
= {by (subjective) independence} Pr_est(win)
In other words, even taking my unreliability in probability-estimating into account, and despite the asymmetry you noted, once I’ve estimated the individual probabilities my best estimate of the overall probability is what I get by using those individual estimates. I should not increase my probability estimate merely because there are uncertainties in the individual probability estimates.
For sure, my estimates could be wrong. For sure, they can be too high by much more than they can be too low. But they are very unlikely to be too high, and it turns out that (subject to the assumptions above) my overall estimate is unbiased provided my individual estimates are.
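A minimal simulation of this unbiasedness claim, under the stated simplifying assumptions (the lognormal noise model and its parameters are made-up stand-ins for the subjective uncertainty, chosen only so that each per-stage estimate is unbiased):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" per-stage probabilities, unknown to the estimator.
true_p = np.array([0.75, 0.25, 0.5, 0.25, 0.25, 0.5, 0.5])

# Model each estimate as the true value times independent multiplicative noise
# with mean 1 (lognormal with mu = -sigma^2/2), so E[est_j] = true_j.
# (Some draws exceed 1; this toy model ignores that, since the point is only
# about unbiasedness of the product.)
sigma = 0.5
n_trials = 200_000
noise = rng.lognormal(mean=-sigma**2 / 2, sigma=sigma, size=(n_trials, len(true_p)))
estimates = true_p * noise

product_est = estimates.prod(axis=1)
print(true_p.prod())           # 3/2048 ~ 0.00146, the "actual" win probability
print(product_est.mean())      # ~0.00146: the product of unbiased estimates stays unbiased
print(np.median(product_est))  # noticeably lower: the lognormal-ish asymmetry noted above
```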
There’s something wrong with an analysis that biases the outcome in a particular direction as you add more details. In this case, the more different kinds of things that might go wrong or have to go right, the more fractions you have to multiply your result by. I don’t know how to get out of this trap, but it seems a failure mode with any attempt to predict the future by multiplying lists of probabilities, each generated by a handwave.
The only one of your numbers that I think can be estimated based on current experience is #2. Alcor publishes details regularly about how their cryopreservations go, and numbers for how many members quit or die in circumstances that make their preservation hopeless. Your 80% number sounds like it’s in the right ballpark. (*)
My other complaint is that your numbers are connected by a more complicated web of conditional likelihoods and interactions than simple multiplication shows. If civilization survives, the likelihood of particular technologies being developed increases, and if organizations like Alcor persist, that ought to raise those odds as well. In many of the bad societal outcomes, you won’t get revived. This reduces your downside almost as much as it reduces your upside. You’ve still paid for suspension, but it doesn’t count as a hell scenario.
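A toy illustration of that complaint, assuming (purely for the sake of example) that civilization surviving makes reanimation technology much more likely:

```python
# Hypothetical numbers: when "civilization survives" and "reanimation tech arrives"
# are positively correlated, P(both) exceeds the naive product of the marginals.
p_surv = 0.75
p_tech_given_surv, p_tech_given_collapse = 0.6, 0.1

p_tech = p_surv * p_tech_given_surv + (1 - p_surv) * p_tech_given_collapse
p_both = p_surv * p_tech_given_surv

print(p_tech)            # 0.475  (marginal probability of the technology)
print(p_surv * p_tech)   # ~0.356 (naive product, treating the two as independent)
print(p_both)            # 0.45   (actual joint probability, higher)
```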
I don’t think “Shut up and Multiply” should be taken literally here.
(*) ETA: simpleton’s reference to Alcor’s numbers says I’m wrong about this.
It’s perfectly correct that your estimate of P(cryonics will work for me) should go down as you think of more things that all have to happen for it to work. When something depends on many things all working right, it’s less likely to work out than intuition suggests; that’s one reason why project time estimates are almost always much too short, and why many projects fail.
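A two-line illustration of how fast a conjunction erodes (the 90% figure is an arbitrary example, not anyone's estimate):

```python
# Seven steps that each feel "pretty likely" at 90% still give less than a coin flip overall.
print(0.9 ** 7)   # ~0.478
```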
Of course my probability estimates are only rough guesses. I don’t trust estimates derived in this way very much; but I trust an estimate derived by breaking the problem down into smallish bits and handwaving all the bits better than I trust one derived by handwaving the whole thing. (And there’s nothing about the breaking-it-down approach that necessitates a pessimistic answer; Robin Hanson did a similar handwavy calculation over on OB a little while ago and came up with a very different result.)
The estimate of 80% for #2 was not mine but Roko’s. My estimate for that one is 25%. I’m not in the US, which I gather makes a substantial difference, but in any case, as your later edit points out, it looks like my number may be better than Roko’s anyway.
Yes, the relationship between all those factors is not as simple as a bunch of independent events. That would be why, in the comment you’re replying to, I said “Suppose (for simplicity) that the probability we seek is the product of several independent probabilities, each of which I have independently estimated.” And also why some of my original estimates were explicitly made conditional on their predecessors.
“Shut up and multiply” was never meant to be taken literally, and as it happens I am not so stupid as to think that because someone once said “Shut up and multiply” I should therefore treat all probability calculations as chains of independent events. “Shut up and calculate” would be more accurate, but in the particular cases for which SUAM was (I think) coined the key calculations were very simple.
Hmm… I have an idea regarding this, and also regarding Roko’s suggestion to disregard low probabilities.
If you are generally unable to estimate probabilities of events lower than, say, 1/1000, then you must calibrate your estimates for these events way down, below 1/1000.

There are very many things that you’ll only be able to estimate as “probability below 1/1000”, some of them mutually exclusive. Normalization requires keeping the sum of their probabilities below unity, so each such estimate must actually be tuned down. As a result, you can’t insist that the parts of the distribution arising from the uncertainty of your estimate are high enough to matter, and you should generally treat things falling in this class as much less probable than the class boundary suggests.
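A hedged sketch of the normalization point (the figure of 5,000 mutually exclusive outcomes is an arbitrary illustration):

```python
# If 5,000 mutually exclusive outcomes each got pinned at the "just below 1/1000"
# ceiling, the implied total would far exceed certainty:
n_outcomes = 5_000
ceiling = 1 / 1_000
print(n_outcomes * ceiling)   # 5.0 -- impossible, since the probabilities must sum to at most 1

# So on average each such outcome can get at most 1/5,000: the calibrated estimate
# for a typical member of the class sits well below the 1/1000 resolution limit.
print(1 / n_outcomes)         # 0.0002
```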