There’s a general consensus that, although quantum theory has changed our understanding of reality, Newtonian physics remains a reliable short-term guide to the macro world. In principle, the vast majority of macro events that are just about to happen are thought to be 99.9999% inevitable, rather than 100% inevitable as Newton thought. From that I deduce that if a coin is shortly to be flipped, the outcome is unknown but as good as determined, as makes no odds. Whereas if a coin is flipped farther into the future from the point of prediction, the outcome is proportionately more likely to be undetermined.
I’m willing to concede there is room for debate about this. What I do recognise is that Beauty’s answer of 2⁄3 Heads, after she learns it’s Monday, depends on the outcome being already certain but unknown. Whereas if the equivalent of a quantum coin were to be flipped on Monday night, that makes a difference. In that case, awaking on Monday morning, Beauty would not yet be in a Heads world or a Tails world. Her answer would certainly be 1⁄2 after she learns it’s Monday. What it would be before she learns it’s Monday would depend on which quantum model is used. I can consider this another time.
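For reference, here is one standard way to lay out the Bayes table behind that 2⁄3 figure, a minimal sketch assuming the usual Lewisian halfer priors over the three awakening possibilities:

$$P(H \wedge \text{Mon}) = \tfrac{1}{2}, \qquad P(T \wedge \text{Mon}) = P(T \wedge \text{Tue}) = \tfrac{1}{4},$$

$$P(H \mid \text{Mon}) = \frac{P(H \wedge \text{Mon})}{P(H \wedge \text{Mon}) + P(T \wedge \text{Mon})} = \frac{1/2}{1/2 + 1/4} = \frac{2}{3}.$$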
Perspective disagreement between interacting parties, as a result of someone having more than one possible self-locating identity, is something I can certainly see a reason for. What I can’t find a reason for is invalidating someone’s assessment of how likely each of those identities is. I’ve looked hard.
I’d like to explore your simplified experiment. First it’s important to specify precisely what happens, under Heads, to the version of me that is not woken during the experiment. If that other me is woken after the experiment and told this fact, then there’s no controversy. On finding myself awake in the experiment, my answer is definitely 1⁄3 for Heads and 2⁄3 for Tails. Furthermore, it should make no difference which version might have woken inside the experiment and which outside, assuming the coin landed Heads. Nor does it matter if that potential selection was made before the flip and I’m subsequently told what the choice was. I’d argue that this information about my possible identity is irrelevant to my credence for the coin.
This takes us to a controversy at the heart of the anthropic debate. In the event of Heads, if the version of me that is not woken in the experiment never wakes up at all, it becomes like standard Sleeping Beauty and the answer is 1⁄2 for Heads or Tails. This is because all awakenings will now be inside the experiment and at least one awakening is guaranteed. Regardless of identity, my mind was certain to continue, so long as at least one version woke up. Whether it’s the original or the clone, both share the same memories and there is no qualitative difference for the guaranteed continuity of my consciousness. All that matters is that there is no possible experience outside the experiment.
Even if it is an uncertain event as to which body woke up, that uncertainty doesn’t apply to my mind. This was guaranteed to carry on in whichever body it found itself. For the unconscious body that never wakes up, no mind is present. If that body was the original, its former mind now continues in the clone body, complete with memories. There is no qualitative difference between continuing in my original body and continuing as the clone. In terms of actual consciousness, my primitive self has no greater or lesser claim to the identical memories of my past on account of the body I have. For some, this will be controversial.
It’s also irrelevant whether the potential sole awakening of original or clone was decided before the flip, or whether I’m told what the choice was. Would you actually claim it’s 1⁄3 for Heads provided that, in the event of that outcome, you don’t know whether you woke as the original or the clone? Yet if you learn what the potential Heads selection was – regardless of whether this turns out to be original or clone – Heads goes up to 1⁄2? We’ve touched on this before. It wouldn’t be a perspective disagreement with a third party. It would be a perspective disagreement with yourself.
If I am understanding correctly, you are saying that if the Sleeping Beauty problem does not use a coin toss, but measures the spin of an electron instead, then the answer would be different. For the coin’s case, you will give the probability of Heads (yet to be tossed) as 2⁄3 after learning it is Monday. But for the spin’s case, or a quantum coin, the probability must be 1⁄2 after learning it is Monday, as it is a quantum event yet to happen.
That seems very ad hoc to me. And I think differentiating “true quantum randomness” from something “99.9999% inevitable” in probability theories is a huge can of worms. But anyway, my question is: if the Sleeping Beauty problem uses a quantum coin, what is the probability of Heads when you wake up, before being told what day it is? And what is your probability after learning “it is Monday now”?
You said the answer depends on the quantum model used. I find that difficult to understand. Quantum models give different interpretations to make sense of the observed probabilities. The probability part is just experimental observation, not changed by which interpretation one prefers. But anyway, I am interested in your answer: how can it both keep giving 1⁄2 to a quantum coin yet to be tossed, and obey Bayesian probability when learning it is Monday?
As for the cloning-and-waking experiment, you said the answer depends on what happens after the experiment, i.e. whether or not there will be further awakenings: if there are, thirding; if not, halving. Again, very ad hoc. If the further awakening depends on a second coin to be tossed after the experiment ends, what then? How could an independent event in the future retroactively affect the probability of the first coin toss? What if both coins are quantum? How can you keep your answer Bayesian?
Just to be clear, my answer to cloning-and-waking is P(H) = 1⁄3 when woken up. The probability that I am the randomly chosen one, who would wake up regardless of the coin toss, is 2⁄3. The probability of Heads after learning I am the chosen one is 1⁄2. The answer does not depend on what happens after the experiment. And in all of this reasoning, I do not know, and do not need to think about, whether I am the original or the clone.
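For concreteness, here is a minimal frequency sketch of how I read the setup (the waking rule and the names below are my own paraphrase, not part of the original description); tallying relative frequencies over awakening-moments reproduces all three numbers:

```python
import random

# Sketch of the cloning-and-waking experiment as paraphrased above:
# a fair coin is tossed and one of {original, clone} is randomly designated
# the "chosen one", who wakes during the experiment regardless of the toss;
# the other wakes during the experiment only on Tails.
# We tally over in-experiment awakenings.

random.seed(0)
awakenings = []  # (coin, is_chosen) for every in-experiment awakening
for _ in range(100_000):
    coin = random.choice(["H", "T"])
    chosen = random.choice(["original", "clone"])
    for body in ("original", "clone"):
        if body == chosen or coin == "T":
            awakenings.append((coin, body == chosen))

n = len(awakenings)
p_heads = sum(1 for c, _ in awakenings if c == "H") / n
p_chosen = sum(1 for _, ch in awakenings if ch) / n
chosen_only = [c for c, ch in awakenings if ch]
p_heads_given_chosen = chosen_only.count("H") / len(chosen_only)

print(f"P(Heads | awake)       ~ {p_heads:.3f}")              # ~ 1/3
print(f"P(I am the chosen one) ~ {p_chosen:.3f}")              # ~ 2/3
print(f"P(Heads | chosen one)  ~ {p_heads_given_chosen:.3f}")  # ~ 1/2
```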
At this point, I find I am focusing more on arguing against SSA than on explaining PBR. And the discussion is steering away from concrete thought experiments with numbers toward metaphysical arguments.
I haven’t followed your arguments all the way here, but I saw the comment
If I am understanding correctly, you are saying that if the Sleeping Beauty problem does not use a coin toss, but measures the spin of an electron instead, then the answer would be different.
and would just jump in and say that others have made similar arguments. The one written example I’ve seen is this Master’s Thesis.
I’m not sure if I’m convinced, but at least I buy that, depending on how the particular selection is carried out, there can be instances where the difference between probabilities as subjective credences and probabilities as densities of Everett branches can have decision-theoretic implications.
Edit: I’ve fixed the link
The link points back to this post. But I also remember reading similar arguments from halfers before, that the answer changes depending on whether it is true quantum randomness; I could not remember the source though.
But the problem remains the same: can Halfers keep the probability of a coin yet to be tossed at 1⁄2 and remain Bayesian? Michael Titelbaum showed this cannot be done as long as the probability of “Today is Tuesday” is valid and non-zero. Suppose Lewisian Halfers argue that, unlike true quantum randomness, a coin yet to be tossed can have a probability differing from half, so that they can endorse self-locating probability and remain Bayesian. Then the question can simply be changed to use quantum measurements (a quantum coin, for ease of expression), and Lewisian Halfers face the counter-argument again: either the probability is 1⁄2 at waking up and remains 1⁄2 after learning it is Monday, which is non-Bayesian; or the probability is indeed 1⁄3 and updates to 1⁄2 after learning it is Monday, which is non-halving. The latter effectively says SSA is correct only for non-quantum events and SIA is correct only for quantum events. But differentiating between quantum and non-quantum events is no easy job. A detailed analysis of a simple coin toss can trace the result to many independent physical causes, which can very well depend on quantum randomness. What shall we do in those cases? It is a very assumption-heavy defence of an initially simple Halfer answer.
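To spell out the constraint (my sketch of the Titelbaum-style point, under the usual setup where Heads means no Tuesday awakening): on waking the centred possibilities are $H \wedge \text{Mon}$, $T \wedge \text{Mon}$ and $T \wedge \text{Tue}$, and conditionalizing on “it is Monday” gives

$$P(H \mid \text{Mon}) = \frac{P(H \wedge \text{Mon})}{P(H \wedge \text{Mon}) + P(T \wedge \text{Mon})} = \frac{P(H)}{1 - P(T \wedge \text{Tue})}.$$

If $P(H) = 1/2$ at waking and $P(T \wedge \text{Tue}) > 0$, the right-hand side exceeds 1⁄2; the only way the update lands on 1⁄2 is to have started with $P(H \wedge \text{Mon}) = P(T \wedge \text{Mon})$, which, with the two Tails awakenings weighted equally, means $P(H) = 1/3$ at waking. Hence the dichotomy above.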
Edit: Just gave the linked thesis a quick read. The writer seems to be partial to MWI and thinks it gives a more logical explanation of anthropic questions. He is not keen on the notion of treating probability/chance as one randomly possible world becoming actualized, but considers all possible worlds to be real (many worlds), so that the source of probability (or “the illusion of probability”, as the writer says) is which branch-world “I” am in. My problem with that is that the “I” in such statements is taken as intrinsically understood, i.e. it has no explanation. It does not give any justification of what the probability of “I am in a Heads world” is. For it to give a probability, additional assumptions are needed about “among all the physically similar agents across the many-branched worlds, which one is I”. And that circles back to anthropics. At the end of the day, it is still using anthropic assumptions to answer anthropic problems, just like SIA or SSA.
I have argued against MWI in anthropics in another post, if you are interested.