Can Bayesian inference be applied to quantum immortality?
I’m writing an odd science fiction story, in which I’d like to express an idea, and I’d like to get the details correct. Another redditor suggested that I might find someone here with enough of an understanding of Bayesian theory, the Many-Worlds interpretation of quantum mechanics, and quantum suicide that I might be able to get some feedback in time:
Assuming the Many-Worlds Interpretation of quantum theory is true, buying lottery tickets can be looked at in an interesting way: as an individual funneling money from the timelines where the buyer loses to the timelines where the buyer wins. While there is a great deal of ‘friction’ in this funneling (if a lottery has an average 45% payout, then 55% of the money is lost to the “friction”), it is perhaps the method with the lowest barrier to entry: it costs only as much as a lottery ticket, and doesn’t require significant education in abstruse financial instruments.
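To make the “friction” concrete, here’s a toy calculation of the money flow per ticket; the ticket price and win probability are invented for illustration, with the jackpot sized to match the 45% payout rate:

```python
# Toy model of lottery "friction" under a many-worlds framing.
# All numbers are invented for illustration.
ticket_price = 2.00      # paid in every branch
payout_rate = 0.45       # fraction of revenue returned as prizes
p_win = 1e-8             # measure of the winning branches (assumed)
jackpot = payout_rate * ticket_price / p_win  # prize consistent with 45% payout

# Per-ticket flow, weighted by branch measure:
funneled = p_win * jackpot            # 0.90: reaches the winning branches
friction = ticket_price - funneled    # 1.10: lost to the house

print(f"${funneled:.2f} of each ${ticket_price:.2f} ticket reaches winning "
      f"branches; ${friction:.2f} is friction.")
```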
While, on the whole, buying a lottery ticket may have negative expected utility (due to that “friction”), there is at least one set of circumstances where the purchase is warranted: when a disaster is forthcoming that requires a certain minimum amount of wealth to survive. As a simplification, if the only future timelines in which you continue to live are ones in which you’ve won the lottery, then buying tickets increases the proportion of timelines in which you live. (Another redditor phrased it thus: hypothetically, say you have special knowledge that at 5pm next Wednesday the evil future government is going to deactivate the cortical implants of the poorest 80% of the population, killing them all swiftly and painlessly. In that circumstance, buying a ticket would have positive expected utility, because you wouldn’t be alive if you lost.)
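One way to make that concrete is to give death a utility floor and compare the two choices. A minimal sketch, with invented numbers, where the coming disaster kills everyone who hasn’t won:

```python
# Toy expected-utility comparison when survival requires a lottery win.
# Invented numbers: utility of death is 0, of survival is 1.
p_win = 1e-8               # measure of branches where the ticket wins
u_alive, u_dead = 1.0, 0.0

eu_no_ticket = u_dead                               # no surviving branches at all
eu_ticket = p_win * u_alive + (1 - p_win) * u_dead  # a sliver of surviving measure

print(eu_ticket > eu_no_ticket)  # True: any nonzero surviving measure beats none
```

(On ordinary expected-dollar accounting the ticket still loses; the sign only flips once the no-win branches contribute nothing you care about.)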
Which brings us to the final bit: If you buy a lottery ticket, and /win/, then via Bayesian inference from the previous paragraphs, you have just collected evidence which suggests an increased likelihood that you are about to face a disaster which requires a great deal of resources to survive. That is, according to the idea of quantum immortality, if you never experience a timeline in which you’ve permanently died, then the only timelines you experience are the ones in which you have sufficient resources to survive; thus implying that whatever resources you have are going to be sufficient to survive.
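That last step can be written as an explicit Bayes update. Whether winning is evidence of a looming disaster depends entirely on whether winning is more likely when a disaster looms; in this sketch the 1000x likelihood boost (standing in for the quantum-immortality effect) and both priors are invented:

```python
# Bayes update on "disaster looming" after a lottery win. Invented numbers.
p_disaster = 1e-6       # prior: a wealth-requiring disaster is coming
p_win_plain = 1e-6      # ordinary chance of this ticket winning

# Assumption: quantum immortality over-weights surviving branches, making a
# win 1000x likelier when the disaster is real (the factor is made up).
p_win_given_disaster = 1000 * p_win_plain

posterior = (p_disaster * p_win_given_disaster) / (
    p_disaster * p_win_given_disaster + (1 - p_disaster) * p_win_plain)
print(f"P(disaster | win) = {posterior:.2e}")  # ~1e-3: a large update, still small
```

If the likelihood ratio is 1 (winning no likelier either way), the posterior equals the prior and the win tells you nothing; that is the crux the replies below press on.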
However, I’m not /quite/ sure that I’ve got all my inferential ducks in a row there. So if anyone reading this could tell me whether the idea I’m trying to describe is reasonably accurate, I’d appreciate the heads-up. (I’m reasonably confident that it would be trivial to point out some error in the above paragraphs; you could say that I’m trying to work out the details of the steelmanned version.)
(My original formulation of the question was posted to https://www.reddit.com/r/rational/comments/2g09xh/bstqrsthsf_factchecking_some_quantum_math/ .)
Just out of curiosity: How (if at all) is this related to your LW post about a year ago?
I think surely the following has to be wrong:

“if you never experience a timeline in which you’ve permanently died, then the only timelines you experience are the ones in which you have sufficient resources to survive; thus implying that whatever resources you have are going to be sufficient to survive.”

You can’t get that kind of information about the future (“are going to be sufficient”) just from the fact that you haven’t died in the past.
As for the more central issue:

“If you buy a lottery ticket, and /win/, then via Bayesian inference from the previous paragraphs, you have just collected evidence which suggests an increased likelihood that you are about to face a disaster which requires a great deal of resources to survive.”

This also seems terribly wrong to me, at least if the situation I’m supposed to imagine is that I bought a lottery ticket just for fun, or out of habit, or something like that. Surely the possible worlds that become more likely according to your quantum-immortality argument are ones in which I bought a lottery ticket in the expectation of a disaster. Further, I don’t see how winning makes this situation any more likely, at least until the disaster has actually occurred and been surmounted with the help of your winnings.
Imagine 10^12 equal-probability versions of you. 10^6 of them anticipate situations that desperately require wealth and buy lottery tickets. Another 10^9 versions of you buy lottery tickets just for fun. Then one of the 10^6, and 10^3 of the 10^9, win the lottery. OK, so now your odds (conditional on having just bought a lottery ticket) of being about to face wealth-requiring danger are only 10^3:1 instead of 10^6:1 as they were before—but you need to conditionalize on all the relevant evidence. Let’s suppose that you can predict those terrible dangers half the time when they occur; so there are another 10^6 of you facing that situation without knowing it; 10^3 of them bought lottery tickets, and 10^-3 of them won. So conditional on having just bought a lottery ticket for fun, your odds of being in danger are still 10^6:1 (10^9 out of danger, 10^3 in); conditional on having just bought a lottery ticket for fun and won, they’re still 10^6:1 (10^3 out of danger, 10^-3 in).
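That bookkeeping is easy to check mechanically. Here’s a sketch reproducing the counts above; the fractional “10^-3 versions” are fine if you read the counts as measures rather than whole people:

```python
# Reproduce the branch-counting argument; all numbers come from the
# paragraph above, read as measures (so fractional counts are allowed).
p_win = 1e-6                            # implied win rate: 1 of 1e6, 1e3 of 1e9

foreseen_danger = 1e6                   # anticipate danger, buy tickets
fun_buyers      = 1e9                   # buy tickets just for fun, no danger
hidden_danger   = 1e6                   # in danger without knowing it
hidden_fun      = hidden_danger * 1e-3  # the 1e3 of them who bought for fun

# Odds against danger, conditional on having bought a ticket at all:
print(fun_buyers / (foreseen_danger + hidden_fun))        # ~1e3, as stated

# Conditional on "bought for fun": no-danger vs in-danger measure
print(fun_buyers / hidden_fun)                            # 1e6

# Conditional on "bought for fun AND won": both sides scale by p_win,
# so the odds are unchanged; winning carries no information here.
print((fun_buyers * p_win) / (hidden_fun * p_win))        # still 1e6
```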
Perhaps I’m missing something important; I’ve never found the idea of “quantum immortality” compelling, and I think the modes of thought that make it compelling involve wrongheadedness about probability and QM, but maybe I’m the one who’s wrongheaded...
Same general assumptions, taken in a somewhat different direction.
(I’m just browsing messages in the middle of the night, so will have to wait to respond to the rest of your post for some hours. In the meantime, the response to my question at https://www.reddit.com/r/rational/comments/2g09xh/bstqrsthsf_factchecking_some_quantum_math/ckex8ul seems worth reading.)
So, suppose I rig up a machine with the following behaviour. It “flips a coin” (actually, in case it matters, exploiting some source of quantum randomness so that heads and tails have more or less exactly equal quantum measure). If it comes up heads, it arranges that in ten years’ time you will be very decisively killed.
If we take “Pr(L)=1” (in that comment’s notation) seriously, then it follows that Pr(tails)=1 too. But if there are 100 of you using these machines, then about 50 are going to see heads; and if you are confident of getting tails (in fact, if your estimate of Pr(tails) is substantially bigger than 1/2), you’re liable to get money-pumped.
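The money-pump is easy to exhibit. In this sketch the stakes are invented, and the bet is priced to be exactly fair for someone whose Pr(tails) is 0.9:

```python
# A bookie exploits a bettor whose Pr(tails) is inflated to 0.9 by
# quantum-immortality reasoning. Stakes are invented for illustration.
pr_tails_believed = 0.9
stake = 10.0                          # each copy pays this to enter
payout = stake / pr_tails_believed    # ~11.11: zero EV at the bettor's credence

copies = 100
tails_copies = copies // 2            # the actual quantum measure is 1/2
total_paid = copies * stake           # 1000.00 across all copies
total_won = tails_copies * payout     # ~555.56 paid back on tails

print(f"Net to the bettors: {total_won - total_paid:.2f}")  # about -444.44
```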
One possible conclusion: Pr(L)=1 is the wrong way to think about quantum immortality if you believe in it.
Another: the situation I described isn’t really possible, because the machine can’t make it certain that you will die in 10 years, and the correct conclusion is simply that if it comes up heads then the universe will find some way to keep you alive despite whatever it does.
But note that that objection applies just as well to the original scenario. Any disaster that you can survive with the help of an extra $10M, you can probably survive without the $10M but with a lot of luck. Or without the $10M from the lottery but with $10M that unexpectedly reaches you by other means.
Your last paragraph is leading me to consider an alternative scenario: there are two ways to survive the disaster, either pleasantly, by having enough money (via winning the lottery), or unpleasantly (such as by having to amputate most of your limbs to reduce your body mass enough to have sufficient delta-v). I’m currently trying to use Venn-like overlapping categories to see if I can figure out any “If X then Y” conclusions. The basic parameters of the setting seem to rule out all but five combinations (using ! to mean ‘not’):
WinLotto, !Disaster, !Amputee, Live: All Good
WinLotto, Disaster, !Amputee, Live: Buy survival
!WinLotto, !Disaster, !Amputee, Live: Nothing happens
!WinLotto, Disaster, Amputee, Live: Unpleasant survival
!WinLotto, Disaster, !Amputee, !Live: Dead
At this very moment, I’m trying to figure out what happens if quantum immortality means the ‘dead’ line doesn’t exist...
… But I’m as likely as not to miss some consequence of this. Anyone care to take a crack at how to set things up so that any Bayesian calculations on the matter have at least a shot at reflecting reality?
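As a starting point, here’s how I’m modeling it: a sketch with completely made-up branch measures, where quantum immortality just means deleting the dead line and renormalizing over what’s left:

```python
# Invented measures for the five allowed combinations; the point is the
# bookkeeping, not the particular numbers.
# Key: (WinLotto, Disaster, Amputee, Live) -> branch measure
branches = {
    (True,  False, False, True):  1e-7,   # All Good
    (True,  True,  False, True):  1e-9,   # Buy survival
    (False, False, False, True):  0.90,   # Nothing happens
    (False, True,  True,  True):  0.02,   # Unpleasant survival
    (False, True,  False, False): 0.08,   # Dead
}
WIN, DISASTER, AMPUTEE, LIVE = range(4)

def prob(event, table):
    """P(event) under the renormalized measure in `table`."""
    total = sum(table.values())
    return sum(m for key, m in table.items() if event(key)) / total

# With quantum immortality, the dead line doesn't exist:
immortal = {k: m for k, m in branches.items() if k[LIVE]}

for label, table in [("mortal", branches), ("immortal", immortal)]:
    p_disaster = prob(lambda k: k[DISASTER], table)
    p_amp_given_dis = prob(lambda k: k[DISASTER] and k[AMPUTEE], table) / p_disaster
    print(f"{label}: P(Disaster) = {p_disaster:.4f}, "
          f"P(Amputee | Disaster) = {p_amp_given_dis:.4f}")
```

With these made-up weights, deleting the dead line pushes P(Amputee | Disaster) to nearly 1: if a disaster happens and I didn’t win, the only surviving branches are the unpleasant ones. Whether that kind of “If X then Y” conclusion survives more realistic weights is exactly what I’d like checked.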
I think you’re leaving out that disasters which require a lot of money to survive are fairly rare and hard to predict.
The character has come uncomfortably close to dying several times in a relatively short period, having had to use one or another rare or unusual skill or piece of equipment just to survive each time. (In other words, she’s a Protagonist.)