I think surely the following has to be wrong:

if you never experience a timeline in which you’ve permanently died, then the only timelines you experience are the ones in which you have sufficient resources to survive; thus implying that whatever resources you have are going to be sufficient to survive.
because you can’t get that kind of information about the future (“are going to be sufficient”) just from the fact that you haven’t died in the past.
As for the more central issue:
If you buy a lottery ticket, and /win/, then via Bayesian inference from the previous paragraphs, you have just collected evidence which suggests an increased likelihood that you are about to face a disaster which requires a great deal of resources to survive.
this also seems terribly wrong to me, at least if the situation I’m supposed to imagine is that I bought a lottery ticket just for fun, or out of habit, or something like that. Because surely the possible worlds that get more likely according to your quantum-immortality argument are ones in which I bought a lottery ticket in the expectation of a disaster. Further, I don’t see how winning makes this situation any more likely, at least until the disaster has actually occurred and been surmounted with the help of your winnings.
Imagine 10^12 equal-probability versions of you. 10^6 of them anticipate situations that desperately require wealth and buy lottery tickets. Another 10^9 versions of you buy lottery tickets just for fun. Then one of the 10^6, and 10^3 of the 10^9, win the lottery. OK, so now your odds (conditional on having just bought a lottery ticket) of being about to face wealth-requiring danger are only 10^3:1 instead of 10^6:1 as they were before—but you need to conditionalize on all the relevant evidence. Let’s suppose that you can predict those terrible dangers half the time when they occur; so there are another 10^6 of you facing that situation without knowing it; 10^3 of them bought lottery tickets, and 10^-3 of them won. So conditional on having just bought a lottery ticket for fun, your odds of being in danger are still 10^6:1 (10^9 out of danger, 10^3 in); conditional on having just bought a lottery ticket for fun and won, they’re still 10^6:1 (10^3 out of danger, 10^-3 in).
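The branch-counting above can be checked with exact arithmetic. A minimal sketch, using the thought experiment’s numbers (the variable names are mine):

```python
from fractions import Fraction

# Branch counts from the thought experiment; the variable names are mine.
# 10^12 equal-measure versions of you; 10^9 buy tickets for fun; the win
# rate is 10^-6; another 10^6 versions face the danger without foreseeing
# it and buy "fun" tickets at the background rate 10^9 / 10^12 = 10^-3.
WIN_RATE = Fraction(1, 10**6)
FUN_RATE = Fraction(10**9, 10**12)

fun_safe      = Fraction(10**9)             # fun buyers not in danger
fun_in_danger = Fraction(10**6) * FUN_RATE  # unforeseen danger, bought for fun: 10^3

# Odds against danger, conditional on having bought a ticket for fun:
odds_bought = fun_safe / fun_in_danger
print(odds_bought)  # 1000000, i.e. 10^6 : 1

# Conditioning further on winning multiplies both branches by the same
# 10^-6 win rate, so the odds are unchanged:
odds_won = (fun_safe * WIN_RATE) / (fun_in_danger * WIN_RATE)
print(odds_won)  # still 10^6 : 1
```

Winning cancels out of the odds precisely because both the in-danger and out-of-danger fun buyers win at the same rate.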
Perhaps I’m missing something important; I’ve never found the idea of “quantum immortality” compelling, and I think the modes of thought that make it compelling involve wrongheadedness about probability and QM, but maybe I’m the one who’s wrongheaded...
So, suppose I rig up a machine with the following behaviour. It “flips a coin” (actually, in case it matters, exploiting some source of quantum randomness so that heads and tails have more or less exactly equal quantum measure). If it comes up heads, it arranges that in ten years’ time you will be very decisively killed.
If we take “Pr(L)=1” (in that comment’s notation) seriously then it follows that Pr(tails)=1 too. But if there are 100 of you using these machines, then about 50 are going to see heads; and if you are confident of getting tails—in fact, if your estimate of Pr(tails) is substantially bigger than 1/2—you’re liable to get money-pumped.
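A minimal sketch of the money-pump point, under the assumption that a bookie prices bets at your stated probability while the true quantum measure of tails is 1/2 (the function names and payout are hypothetical):

```python
# Sketch: why Pr(tails) substantially above 1/2 gets money-pumped.
# A bookie sells a ticket paying `payout` on tails; you will pay up to
# believed_p * payout for it, but its true expected value is only
# true_p * payout.
def fair_stake_for(believed_p_tails, payout=100.0):
    # The most you'd rationally pay given your stated probability.
    return believed_p_tails * payout

def expected_profit(believed_p_tails, true_p_tails=0.5, payout=100.0):
    stake = fair_stake_for(believed_p_tails, payout)
    return true_p_tails * payout - stake

print(expected_profit(1.0))  # -50.0: at Pr(tails)=1 you pay 100 for an expected 50
print(expected_profit(0.5))  # 0.0: at the true measure the bet is fair
```

Across 100 copies of you, the ~50 who see heads have each overpaid, which is the money pump.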
One possible conclusion: Pr(L)=1 is the wrong way to think about quantum immortality if you believe in it.
Another: the situation I described isn’t really possible, because the machine can’t make it certain that you will die in 10 years, and the correct conclusion is simply that if it comes up heads then the universe will find some way to keep you alive despite whatever it does.
But note that that objection applies just as well to the original scenario. Any disaster that you can survive with the help of an extra $10M, you can probably survive without the $10M but with a lot of luck. Or without the $10M from the lottery but with $10M that unexpectedly reaches you by other means.
Just out of curiosity: How (if at all) is this related to your LW post about a year ago?
Same general assumptions, taken in a somewhat different direction.
(I’m just browsing messages in the middle of the night, so will have to wait to respond to the rest of your post for some hours. In the meantime, the response to my question at https://www.reddit.com/r/rational/comments/2g09xh/bstqrsthsf_factchecking_some_quantum_math/ckex8ul seems worth reading.)
Your last paragraph is leading me to consider an alternative scenario: There are two ways to survive the disaster, either pleasantly by having enough money (via winning the lottery) or unpleasantly (such as by having to amputate most of your limbs to reduce your body mass to have enough delta-vee). I’m currently trying to use Venn-like overlapping categories to see if I can figure out any “If X then Y” conclusions. The basic parameters of the setting seem to rule out all but five combinations (using ! to mean ‘not’):
WinLotto, !Disaster, !Amputee, Live: All good
WinLotto, Disaster, !Amputee, Live: Buy survival
!WinLotto, !Disaster, !Amputee, Live: Nothing happens
!WinLotto, Disaster, Amputee, Live: Unpleasant survival
!WinLotto, Disaster, !Amputee, !Live: Dead
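That elimination can be sketched by brute force, assuming the constraints I read off the scenario: amputation happens only during a disaster and only without lottery money, and you live exactly when there is no disaster or one of the two survival routes applies (all names mine):

```python
from itertools import product

# Constraints as I read them off the scenario (all names mine):
#   - amputation happens only as a survival measure during a disaster;
#   - with lottery money, the pleasant route is taken, so no amputation;
#   - you live iff there is no disaster, or you can buy survival, or
#     you amputate.
def consistent(win_lotto, disaster, amputee, live):
    if amputee and not disaster:
        return False
    if amputee and win_lotto:
        return False
    survives = (not disaster) or win_lotto or amputee
    return live == survives

# Enumerate all 16 assignments and keep the consistent ones.
rows = [combo for combo in product([True, False], repeat=4)
        if consistent(*combo)]
for row in rows:
    print(row)
print(len(rows))  # 5 -- the five combinations listed above
```

Dropping the ‘dead’ line then amounts to adding `live` as a hard constraint, which forces the universe into one of the two survival routes whenever a disaster occurs.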
At this very moment, I’m trying to figure out what happens if quantum immortality means the ‘dead’ line doesn’t exist...
… But I’m as likely as not to miss some consequence of this. Anyone care to take a shot at how to set things up so that any Bayesian calculations on the matter have at least a shot at reflecting reality?