If I am understanding correctly, you are saying that if the sleeping beauty problem does not use a coin toss, but measures the spin of an electron instead, then the answer would be different. For the coin’s case, you would give the probability of Heads (yet to be tossed) as 2⁄3 after learning it is Monday. But for the spin’s case, or a quantum coin, the probability must be 1⁄2 after learning it is Monday, as it is a quantum event yet to happen.
That seems very ad hoc to me. And I think differentiating “true quantum randomness” from something “99.99999% inevitable” in probability theories is a huge can of worms. But anyway, my question is: if the sleeping beauty problem uses a quantum coin, what is the probability of Heads when you wake up, before being told what day it is? And what is your probability after learning “it is Monday now”?
You said the answer depends on the quantum model used. I find that difficult to understand. Quantum models give different interpretations to make sense of the observed probability; the probability part is just experimental observation, not changed by which interpretation one prefers. But anyway, I am interested in your answer: how can it both keep giving 1⁄2 to a quantum coin yet to be tossed, and obey Bayesian updating when learning it is Monday?
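For concreteness, here is a minimal sketch (my framing, not necessarily yours) of the two update rules in dispute, treating each awakening as a hypothesis and conditioning on “today is Monday”:

```python
from fractions import Fraction as F

# Awakening-states (coin, day) in the standard Sleeping Beauty setup.
# Thirder prior: all three possible awakenings equally likely.
thirder = {("H", "Mon"): F(1, 3), ("T", "Mon"): F(1, 3), ("T", "Tue"): F(1, 3)}
# Lewisian-halfer prior: P(Heads) = 1/2, Tails split across its two awakenings.
halfer = {("H", "Mon"): F(1, 2), ("T", "Mon"): F(1, 4), ("T", "Tue"): F(1, 4)}

def p_heads_given_monday(prior):
    # Bayesian conditioning on "today is Monday": renormalize over Monday states.
    monday = {k: v for k, v in prior.items() if k[1] == "Mon"}
    return monday[("H", "Mon")] / sum(monday.values())

print(p_heads_given_monday(thirder))  # 1/2: updates from 1/3 to 1/2
print(p_heads_given_monday(halfer))   # 2/3: updates from 1/2 to 2/3
```

The question in the comment above is exactly which of these two tables, if either, survives when the toss is replaced by a quantum measurement.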
As for the cloning-and-waking experiment, you said the answer depends on what happens after the experiment, i.e. whether or not there will be further awakenings: if there are, thirding; if not, halving. Again, very ad hoc. If the awakening depends on a second coin to be tossed after the experiment ends, what then? How can an independent event in the future retroactively affect the probability of the first coin toss? What if both coins are quantum? How can you keep your answer Bayesian?
Just to be clear, my answer to the cloning-and-waking experiment is P(H)=1/3 upon waking. The probability that I am the randomly chosen one, who would wake up regardless of the coin toss, is 2⁄3. The probability of Heads after learning I am the chosen one is 1⁄2. The answer does not depend on what happens after the experiment. And in all of this reasoning, I do not know, and do not need to think about, whether I am the original or the clone.
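The numbers in this answer can be checked by enumerating the possible awakenings under the weighting I am using (each awakening equally likely; the chosen one wakes regardless of the toss, the other wakes only on Tails):

```python
from fractions import Fraction as F

# Cloning-and-waking: one of the two (original/clone) is randomly chosen and
# wakes no matter what; the unchosen one wakes only if the coin lands Tails.
# So Heads produces one awakening, Tails produces two.
awakenings = [("H", "chosen"), ("T", "chosen"), ("T", "unchosen")]
w = {a: F(1, len(awakenings)) for a in awakenings}  # equal weight per awakening

p_heads = sum(v for (coin, _), v in w.items() if coin == "H")        # 1/3
p_chosen = sum(v for (_, role), v in w.items() if role == "chosen")  # 2/3
p_heads_given_chosen = w[("H", "chosen")] / p_chosen                 # 1/2
print(p_heads, p_chosen, p_heads_given_chosen)
```

Nothing in this enumeration refers to what happens after the experiment, or to whether “I” am the original or the clone.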
At this point, I find I am focusing more on arguing against SSA rather than explaining PBR. And the discussion is steering away from concrete thought experiments with numbers to metaphysical arguments.
I haven’t followed your arguments all the way here but I saw the comment
If I am understanding correctly, you are saying that if the sleeping beauty problem does not use a coin toss, but measures the spin of an electron instead, then the answer would be different.
and would just jump in and say that others have made similar arguments. The one written example I’ve seen is this Master’s Thesis.
I’m not sure if I’m convinced, but at least I buy that, depending on how the particular selection is carried out, there can be instances where the difference between probabilities as subjective credences and as densities of Everett branches can have decision-theoretic implications.
The link points back to this post. But I also remember reading similar arguments from halfers before, that the answer changes depending on whether it is true quantum randomness; I could not remember the source, though.
But the problem remains the same: can Halfers keep the probability of a coin yet to be tossed at 1⁄2 and remain Bayesian? Michael Titelbaum showed this cannot be done as long as the probability of “Today is Tuesday” is valid and non-zero. Suppose Lewisian Halfers argue that, unlike true quantum randomness, a coin yet to be tossed can have a probability differing from half, so that they can endorse self-locating probability and remain Bayesian. Then the question can simply be changed to use a quantum measurement (or a quantum coin, for ease of expression), and Lewisian Halfers face the counter-argument again: either the probability is 1⁄2 at waking up and remains 1⁄2 after learning it is Monday, which is non-Bayesian; or the probability is indeed 1⁄3 and updates to 1⁄2 after learning it is Monday, which is non-halving.

The latter effectively says SSA is correct only for non-quantum events and SIA only for quantum events. But differentiating quantum from non-quantum events is no easy job. A detailed analysis of a simple coin toss can trace the result to many independent physical causes, which can very well depend on quantum randomness. What shall we do in those cases? It is a very assumption-heavy defense of an initially simple Halfer answer.
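The Titelbaum-style constraint can be made concrete: if Heads entails that today is Monday, then P(Heads & Monday) = P(Heads), and Bayes forces the Monday-conditioned credence above 1⁄2 whenever “Today is Tuesday” gets any non-zero probability. A sketch (the value P(Tuesday) = 1⁄4 is purely illustrative):

```python
from fractions import Fraction as F

def p_heads_given_monday(p_heads, p_tuesday):
    # Under Heads there is only the Monday awakening, so
    # P(Heads & Monday) = P(Heads), and Bayes gives
    # P(Heads | Monday) = P(Heads & Monday) / P(Monday).
    p_monday = 1 - p_tuesday
    return p_heads / p_monday

# Halfer with P(Heads) = 1/2 and an illustrative P(Tuesday) = 1/4:
print(p_heads_given_monday(F(1, 2), F(1, 4)))  # 2/3, not 1/2

# For ANY non-zero P(Tuesday) the conditional exceeds 1/2, so one cannot
# keep P(Heads) = 1/2 both before and after learning it is Monday.
for t in [F(1, 100), F(1, 10), F(1, 4)]:
    assert p_heads_given_monday(F(1, 2), t) > F(1, 2)
```

This is why keeping 1⁄2 both before and after the Monday update requires either denying Bayesian conditioning or denying that “Today is Tuesday” has a valid non-zero probability.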
Edit: Just gave the linked thesis a quick read. The writer seems to be partial to MWI and thinks it gives a more logical explanation of anthropic questions. He is not keen on treating probability/chance as a randomly selected possible world becoming actualized; instead he considers all possible worlds to be real (many-worlds), so that the source of probability (or “the illusion of probability,” as the writer says) is which branch-world “I” am in. My problem with that is that the “I” in such statements is taken as intrinsically understood, i.e. it has no explanation. It gives no justification for what the probability of “I am in a Heads world” is. For it to give a probability, additional assumptions are needed about “among all the physically similar agents across the many-branched worlds, which one is I.” And that circles back to anthropics. At the end of the day, it is still using anthropic assumptions to answer anthropic problems, just like SIA or SSA.
I have argued against MWI in anthropics in another post, if you are interested.
Edit: I’ve fixed the link