Quantum immortality is not observable. You surviving a quantum suicide is not evidence for MWI—no more than it is for external observers.
What about me surviving a thousand quantum suicides (with negligible odds of survival) in a row?
It convinces you that MWI is true. Due to the nature of quantum suicide, though, you will struggle to share this revelation with anyone else.
That’s the problem—it shouldn’t really convince him. If he shares all the data and priors with external observers, his posterior probability of MWI being true should end up the same as theirs.
It’s not very different from surviving thousand classical Russian roulettes in a row.
ETA: If the chance of survival is p, then in both cases P(I survive) = p, P(I survive | I’m there to observe it) = 1. I think you should use the second one in appraising the MWI...
ETA2: Ok maybe not.
No; I think you’re using the Aumann agreement theorem, which can’t be used in real life. It has many exceedingly unrealistic assumptions, including that all Bayesians agree completely on all definitions and all category judgements, and all their knowledge about the world (their partition functions) is mutual knowledge.
In particular, to deal with the quantum suicide problem, the reasoner has to use an indexical representation, meaning this is knowledge expressed by a proposition containing the term “me”, where me is defined as “the agent doing the reasoning”. A proposition that contains an indexical can’t be mutual knowledge. You can transform it into a different form in someone else’s brain that will have the same extensional meaning, but that person will not be able to derive the same conclusions from it, because some of their knowledge is also in indexical form.
(There’s a more basic problem with the Aumann agreement theorem—when it says, “To say that 1 knows that 2 knows E means that E includes all P2 in N2 that intersect P1,” that’s an incorrect usage of the word “knows”. 1 knows that E includes P1(w), and that E includes P2(w). 1 concludes that E includes P1 union P2, for some P2 that intersects P1. Not for all P2 that intersect P1. In other words, the theorem is mathematically correct, but semantically incorrect; because the things it’s talking about aren’t the things that the English gloss says it’s talking about.)
There are indeed many cases where Aumann’s agreement theorem seems to apply semantically, but in fact doesn’t apply mathematically. Would there be interest in a top-level post about how Aumann’s agreement theorem can be used in real life, centering mostly around learning from disagreements rather than forcing agreements?
I’d be interested, but I’ll probably disagree. I don’t think Aumann’s agreement theorem can ever be used in real life. There are several reasons, but the simplest is that it requires the people involved share the same partition function over possible worlds. If I recall correctly, this means that they have the same function describing how different observations would restrict the possible worlds they are in. This means that the proof assumes that these two rational agents would agree on the implications of any shared observation—which is almost equivalent to what it is trying to prove!
I will include this in the post, if and when I can produce one I think is up to scratch.
What if you represented those disagreements over implications as coming from agents having different logical information?
I don’t really see what is the problem with Aumann’s in that situation. If X commits suicide and Y watches, are there any factors (like P(MWI), or P(X dies|MWI)) that X and Y necessarily disagree on (or them agreeing would be completely unrealistic)?
If joe tries and fails to commit suicide, joe will have the propositions (in SNActor-like syntax)

action(agent(me), act(suicide))
survives(me, suicide)
while jack will have the propositions
action(agent(joe), act(suicide))
survives(joe, suicide)
They both have a rule something like
MWI ⇒ for every X, act(X) ⇒ P(survives(me, X)) = 1
but only joe can apply this rule. For jack, the rule doesn’t match the data. This means that joe and jack have different partition functions regarding the extensional observation survives(joe, X), which joe represents as survives(me, X).
If joe and jack both use an extensional representation, as the theorem would require, then neither joe nor jack can understand quantum immortality.
So you’re saying that the knowledge “I survive X with probability 1” can in no way be translated into objective rule without losing some information?
I assume the rules speak about subjective experience, not about “some Everett branch existing” (so if I flip a coin, P(I observe heads) = 0.5, not 1). (What do the probabilities of the possible, mutually exclusive outcomes of a given action sum to in your system?)
Isn’t the translation a matter of applying conditional probability? i.e. P(survives(me, X)) = 1 ⇔ P(survives(joe, X) | joe’s experience continues) = 1
I was actually going off the idea that the vast majority of worlds (100% minus Pr(survive all suicides)) would have the subject dead at some point, so none of those worlds would be convinced. Sure, people in your branch might believe you, but in all but a ~9.3x10^-302 fraction of the branches, you aren’t there to prove that quantum suicide works. This means, I think, that the chance of you surviving to prove to the rest of the world that quantum suicide confirms MWI is equal to the chance of you surviving in a non-MWI universe.
I was going to say: well, if you had a test with a 1% chance of confirming X and a 99% chance of disconfirming X, ran it a thousand times, and presented only the confirmations, you would be laughed at for suggesting that X is confirmed. But it is MWI that predicts that every quantum event comes out every way, so only under MWI could you survive to run the test a thousand times; that would indeed be pretty convincing evidence that MWI is true.
Also: I only have a passing familiarity with Robin’s mangled worlds, but at a measure of 10^-300 it feels like a small enough ‘world’ to get absorbed into the mass of worlds where the trick works a few times and then they actually do die.
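The branch-counting arithmetic above is easy to check; a minimal sketch in Python, assuming a survival chance of 1/2 per attempt (0.5^1000 is where the 9.3x10^-302 figure comes from):

```python
# Survival chance per attempt (assumed 1/2 here) and number of attempts.
p_survive_once = 0.5
n_attempts = 1000

# Fraction of Everett branches in which the experimenter survives them all:
p_survive_all = p_survive_once ** n_attempts
print(p_survive_all)        # ~9.33e-302
# In virtually every branch the experimenter is dead and nobody is convinced:
print(1.0 - p_survive_all)  # 1.0 to within floating-point precision
```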
The problem I have with that is that from my perspective as an external observer it looks no different than someone flipping a coin (appropriately weighted) a thousand times and getting a thousand heads. It’s quite improbable, but the fact that someone’s life depends on the coin shouldn’t make any difference for me—the universe doesn’t care.
Of course it also doesn’t convince me that the coin will fall heads for the 1001-st time.
(That’s only if I consider MWI and Copenhagen here. In reality after 1000 coin flips/suicides I would start to strongly suspect some alternative hypotheses. But even then it shouldn’t change my confidence of MWI relative to my confidence of Copenhagen).
Indeed, the anthropic principle explains the result of quantum suicide, whether or not you subscribe to the MWI. The real question is whether you ought to commit quantum suicide (and harness its anthropic superpowers for good). It’s a question of morality.
I would say quantum suiciding is not “harnessing its anthropic superpowers for good”, it’s just conveniently excluding yourself from the branches where your superpowers don’t work. So it has no more positive impact on the universe than you dying has.
I think you are correct.
Related (somewhat): The Hero With A Thousand Chances.
That only provides evidence that you are determinedly suicidal and that you will eventually succeed.
But I’d have fun with my reality-steering anthropic superpowers in the meantime.
One of you would. The huge number of other diverging yous would run into an unpleasant surprise sooner or later.
No. You’re comparing the likelihood of two hypotheses. The observation that you survived 1000 good suicide attempts is much more likely under MWI than under Copenhagen. Then you flip it around using Bayes’ rule, and believe in MWI.
But other Bayesians around you should not agree with you. This is a case where Bayesians should agree to disagree.
First, not to be nit-picky but MWI != QI. Second, if your suicide attempts are well documented the branch in which you survived would be populated by Bayesians who agreed with you, no?
The Bayesians wouldn’t agree with you, because the observation that you survived all those suicide attempts is, to them, equally likely under MWI or Copenhagen.
What is QI?
Flip a quantum coin.
Isn’t that like saying “Under MWI, the observation that the coin came up heads, and the observation that it came up tails, both have probability of 1”?
The observation that I survive 1000 good suicide attempts has a probability of 1, but only if I condition on my being capable of making any observation at all (i.e. alive). In which case it’s the same under Copenhagen.
The observation is that you’re alive. If the Quantum Immortality hypothesis is true you will continue making that observation after an arbitrary number of good suicide attempts. The probability that you will continue making that observation if Quantum Immortality is false is much smaller than one.
The probability that there exists an Everett branch in which I continue making that observation is 1. I’m not sure if jumping straight to subjective experience from that is justified:
If P(I survive|MWI) = 1, and P(I survive|Copenhagen) = p, then what is the rest of that probability mass in Copenhagen interpretation? Why is P(~(I survive)|Copenhagen) = 1-p and what does it really describe? It seems to me that calling it “I don’t make any observation” is jumping from subjective experiences back to objective. This looks like a confusion of levels.
ETA: And, of course, the problem with “anthropic probabilities” gets even harder when you consider copies and merging, simulations, Tegmark level 4, and Boltzmann brains (The Anthropic Trilemma). I’m not sure if there even is a general solution. But I strongly suspect that “you can prove MWI by quantum suicide” is an incorrect usage of probabilities.
It even depends on philosophy. Specifically, on whether the following equality holds.
I survive = There (not necessarily in our universe) exists someone who remembers everything I remember now, plus the failed suicide I’m about to conduct.
or
I survive = There exists someone who doesn’t remember everything I remember now, but who acts as I would have acted if I remembered what he remembers.
First, I’m gonna clarify some terms to make this more precise. Let Y be a person psychologically continuous with your present self. P(there is some Y that observes surviving a suicide attempt|Quantum immortality) = 1. Note MWI != QI. But QI entails MWI. P(there is some Y that observes surviving a suicide attempt| ~QI) = p.
It follows from this that P(~(there is some Y that observes surviving a suicide attempt)|~QI) = 1-p.
I don’t see a confusion of levels (whatever that means).
I don’t know if this is the point you meant to make but the existence of these other hypotheses that could imply anthropic immortality definitely does get in the way of providing evidence in favor of Many Worlds through suicide. Surviving increases the probability of all of those hypotheses (to different extents but not really enough to distinguish them).
I still see a problem here. Substitute quantum suicide → quantum coinflip, and surviving a suicide attempt → observing the coin turning up heads.
Now we have P(there is some Y that observes coin falling heads|MWI) = 1, and P(there is some Y that observes coin falling heads|Copenhagen) = p.
So any specific outcome of a quantum event would be evidence in favor of MWI.
I think that works, actually. If you observe 30 quantum heads in a row you have strong evidence in favor of MWI. The quantum suicide thing is just a way of increasing the proportion of future yous that have this information.
But then if I observed any string of 30 outcomes I would have strong evidence for MWI (if the coin is fair, “p” for any specific string would be 2^-30).
You have to specify a particular string to look for before you do the experiment.
Sorry, now I have no idea what we’re talking about. If your experiment involves killing yourself after seeing the wrong string, this is close to the standard quantum suicide.
If not, I would have to see the probabilities to understand. My analysis is like this: P(I observe string S | MWI) = P(I observe string S | Copenhagen) = 2^-30, regardless of whether the string S is specified beforehand or not. MWI doesn’t mean that my next Everett branch must be S because I say so.
The reason why this doesn’t work (for coins) is that (when MWI is true) A = “my observation is heads” implies B = “some Y observes heads”, but not the other way around. So P(B|A) = 1, but P(A|B) = p, and after plugging that into the Bayes formula we have P(MWI|A) = P(Copenhagen|A).
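A minimal numerical sketch of the analysis above (assuming a fair coin and equal priors; the variable names are mine, not the commenter’s):

```python
# Observing one specific 30-flip string is equally likely under either
# interpretation, so Bayes' rule leaves the prior untouched.
prior_mwi = prior_cop = 0.5
likelihood_mwi = likelihood_cop = 0.5 ** 30  # P(I observe string S | ...)

posterior_mwi = (prior_mwi * likelihood_mwi) / (
    prior_mwi * likelihood_mwi + prior_cop * likelihood_cop
)
print(posterior_mwi)  # 0.5 -- observing any particular string moves nothing
```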
Can you translate that to the quantum suicide case?
I have no theories about what you’re thinking when you say that.
Either you condition the observation (of surviving 1000 attempts) on the observer existing, and you have 1 in both cases, or you don’t condition it on the observer and you have p^1000 in both cases. You can’t have it both ways.
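That dichotomy can be made concrete with a short sketch (p = 1/2 per attempt is my assumption; either way the likelihood ratio, and hence the posterior odds, come out even):

```python
p, n = 0.5, 1000  # per-attempt survival chance, number of attempts

# Unconditioned: "this experimenter survives all n attempts".
like_mwi = p ** n  # weight of the surviving branch under MWI
like_cop = p ** n  # single-world survival probability under Copenhagen
print(like_mwi / like_cop)  # 1.0

# Conditioned on "there is an observer left to report the result".
like_mwi_alive = 1.0
like_cop_alive = 1.0
print(like_mwi_alive / like_cop_alive)  # 1.0 again
```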
If it’s not observable, what difference than does it make?
It makes no difference. Its a thought experiment about the consequences of MWI, but it isn’t a testable prediction.
Of course it is testable. Just do 30 quantum coin flips in a row. If any of them comes up heads, knock yourself into a deep sleep (with anesthesia) for 24 hours.
If you are still awake 1 hour after the last coin flip, QI is probably a fact.
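For what it’s worth, the “still awake” outcome of this test is quantifiable without any QI assumption (a sketch, assuming 30 fair quantum flips):

```python
# Absent QI, the sleeper stays awake only if all 30 flips avoid heads.
n_flips = 30
p_awake = 0.5 ** n_flips
print(p_awake)  # ~9.31e-10, about one chance in a billion
```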
Nah. QI relies on your “subjective thread” coming to an end in some worlds and continuing in others. In your experiment I’d be pretty certain to get knocked out and wake up after 24 hours.
How does the Multiverse know I am just sleeping for 24 (or 24,000) hours? How does the Multiverse know I won’t be rescued after a real suicide attempt, once a quantum coin head popped up?
Or resurrected by some ultratech?
Where is the fine red line such that Quantum Immortality is possible, but the Quantum Awakening described above isn’t?
It doesn’t, not right now in the present moment. But there’s no reason why “subjective threads” and “subjective probabilities” should depend on physical laws only locally. Imagine you’re an algorithm running on a computer. If someone pauses the computer for a thousand years, afterwards you go on running like nothing happened, even though at the moment of pausing nobody “knew” when/if you’d be restarted again.
But what if a new computer arises every time and an instance of this algorithm starts there?
As it allegedly does in MW?
Because you won’t be back. The universe has all of eternity to wait for you to come back. If you don’t come back, the only remaining ones that keep on experiencing from where you left off are in the branches where the coin didn’t come up heads.
I see. The MW has a book of those who will wake up and those who will not?
And acts accordingly. Splits or not.
I do not buy this, of course.
It’s a good thought to reject.
In fact, quantum immortality has little to do with the actual properties of the universe, as long as it’s probabilistic. It’s just what happens when you arbitrarily (well, anthropically) decide to stop counting certain possibilities.
No, it always splits into two Everett branches. It’s just that if you do in fact wake up in the distant future, that version of you that wakes up will be a successor of the you that is awake now, as is the version of you that never went to sleep in the next microsecond (or whatever). And you should anticipate either’s experiences equally.
Or at least that’s how I think it works (this assumes timeless physics, which I think is what Jonii assumed).
There are two problems with this test.
First, the result of a coin flip is almost certainly determined by starting conditions. With enough knowledge of those conditions you could predict the result. Instead you should make a measurement on a quantum system, such as measuring the spin of an electron.
Second, the result of this test does not distinguish between QI and not-QI. The probability of being knocked out or left awake is the same in both cases.
I suppose you could be assuming that your consciousness can jump arbitrarily between universes to follow a conscious version of you… but no, that would just be silly.
This is probably what Thomas meant by “quantum” coin flip.
You are right, I missed that. I probably shouldn’t post comments when I’m hungry, I’ve got a few other comments like this to account for as well. :)
I don’t postulate anything that is not already postulated in the so-called Quantum Suicide thought experiment.
I just apply it to the sleeping/coma case. It should work the same.
But I don’t think it works in either case.
The test you proposed does not distinguish between QI and not-QI. I don’t think that the current formulation of MWI even allows this to be tested.
Not a factor in my argument; both are untestable. You are arguing this point against others, not me.
You might have missed the part where Thomas made it a “quantum coin flip”. The problem with the test is that by definition it can’t be replicated successfully by the scientific community, and that even if QI is true you will get disconfirming evidence in most Everett branches.
If that’s a valid objection, then quantum suicide won’t work either. In fact, if that’s a valid objection, then many-worlds is impossible, since everything is deterministic with no possible alternatives.
Many-worlds is a deterministic theory, as it says that the split configurations both occur.
Quantum immortality, mind you, is a very silly idea for a variety of other reasons—foremost of which is that a googolplex of universes still doesn’t ensure that there exists one of them in which a recognizable “you” survives next week, let alone to the end of time.