There are quite a few points here I disagree with. Allow me to explain.
As I said in the previous reply, a mathematical statement by itself doesn’t have a probability of being right or wrong. It is the process by which someone makes or evaluates the statement that can have a probability attached to it. Maybe the experimenter picked a random number from 1 to 10,000 and then checked that digit of pi to determine whether to destroy or wake the copy in question, and in this case he happened to pick ten. That process/circumstance enables us to assign a probability. Whereas with self-locating probabilities there is no process explaining where the “I” comes from.
Also, just because macrophysical objects do not exhibit quantum phenomena such as randomness does not mean the macro world is deterministic. So I would not say that a Lewisian halfer assigning a probability of Heads of 2⁄3 to a fair coin yet to be tossed is metaphysically problem-free. Furthermore, even if you bite the bullet here, there are problems of probability pumping and retrocausation. I will attach the thought experiment later.
Before going further, I wish to give PBR’s answers to the thought experiment you raised. The probability that I am the clone or the original is invalid, as it is a self-locating probability. The probability that today is Monday is 2⁄3; it is valid because both the clone and the original have the same value, so there is no need to explain “which person I am”. The probability that the chosen digit of pi falls in the range 6-0 is 2⁄3; it is valid for the same reason. And the probability that the coin landed Heads given “I” am awake is invalid, i.e. we cannot update on the information that “I” am awake, because that value depends on whether “I” am the clone or the original.
I don’t think thirders would have any trouble giving a probability of 2⁄3 to the tenth digit of pi. All they have to do is treat “I” as a random sample between the original and the clone and then conduct the analysis from a god’s-eye perspective: randomly select a subject among the two, then find out she is awakened in the experiment, meaning the chosen digit is twice as likely to be 6-0 (2/3). If the chosen subject does not wake up in the experiment, then the chosen digit must be 1-5 (100%). They can also build a frequentist model this way with no problem, as sketched below.
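Here is a minimal simulation sketch of that frequentist model. The waking rule is my reading of the setup (the original always wakes; the copy in question wakes only if the chosen digit of pi is 6-0), and I treat the chosen digit as uniformly random:

```python
import random

trials = 100_000
awake_count = 0
high_digit_given_awake = 0  # counts runs where the digit was in 6-0

for _ in range(trials):
    digit = random.randint(0, 9)          # chosen digit of pi, treated as uniform
    high = digit in (6, 7, 8, 9, 0)       # the "6-0" range
    # assumed waking rule: original always wakes; clone wakes only if the digit is 6-0
    awake = {"original": True, "clone": high}
    subject = random.choice(["original", "clone"])  # god's-eye random sample
    if awake[subject]:
        awake_count += 1
        high_digit_given_awake += high

print(high_digit_given_awake / awake_count)  # converges to about 2/3
```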
Thirders think reasoning objectively from a god’s-eye view is the only correct way. I think reasoning from any perspective, such as an experiment subject’s first-person perspective, is just as valid. And from the subject’s first-person perspective the original sleeping beauty problem has a probability of 1⁄2, verifiable with a frequentist model. I thought you agreed with this.
However, if you endorse the traditional Lewisian halfer’s reasoning or SSA, which seems clear to me given your latest thought experiment with cloning and waking, then you can’t agree with me on that. Furthermore, you cannot use the subject’s first-person perspective, or the resulting perspective disagreement, as a supporting argument for halving.
Consider this experiment. You will be cloned tonight in your sleep with memories preserved. Then a coin will be tossed: if Heads, a randomly selected copy will be woken up in the morning while the other sleeps through the experiment; if Tails, both will be woken up. After waking up the next day, what is the probability of Heads?
Thirders would consider this problem logically identical to the original sleeping beauty. I would guess that as a Lewisian halfer you would too, and still give the probability as 1⁄2. But for PBR this problem is different: the probability of Heads would rightfully be 1⁄3 in this case, as “I am awake in this experiment” is no longer guaranteed and therefore gives legitimate new information. And if I took part in such experiments repeatedly and counted my personal experience, the relative fraction of Heads among the experiments in which I was woken up would approach 1⁄3. So here you have to make a choice. If you stick with the Lewisian halfer’s reasoning, then you have to give up on thinking from the subject’s first-person perspective, and therefore cannot use it as a supporting argument for halving, nor as an explanation for the peculiar perspective disagreement. If you still think thinking from the subject’s first-person perspective is valid, then you have to give up on the Lewisian reasoning, and cannot update the probability of Heads to 2⁄3 after learning you are the chosen one who would have been woken up either way. To me that is a really easy choice.
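Here is a minimal frequentist sketch of that personal count. In each repetition I treat “me” as one fixed copy, which happens to be the randomly chosen one with probability 1/2 (my reading of the repeated-experiment setup):

```python
import random

trials = 100_000
times_woken = 0
heads_when_woken = 0

for _ in range(trials):
    heads = random.random() < 0.5
    i_am_chosen = random.random() < 0.5    # the randomly selected copy happens to be "me"
    i_wake = (not heads) or i_am_chosen    # Tails: both wake; Heads: only the chosen copy
    if i_wake:
        times_woken += 1
        heads_when_woken += heads

print(heads_when_woken / times_woken)  # converges to about 1/3
```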
If you think predicting Heads with a probability of 2⁄3 is correct, there are other issues. For example, you can get a supply of memory-erasing drugs and thereby gain supernatural predicting power. Before seeing the toss result, you form the following intention: if it comes up Tails I will take the drug N times, so as to have N near-identical awakenings and revealings. Then by the Lewisian halfer’s reasoning (or SSA), if Heads I would be experiencing this revealing for sure, whereas if Tails the probability of me currently experiencing this particular awakening would be only 1/N. Therefore the probability of Tails is 1/(N+1), and of Heads N/(N+1). And you can squeeze that number as small or as large as you want; all you have to do is commit to taking the appropriate number of drugs after a certain result is revealed. By doing so you can retroactively control the coin toss result.
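Spelling out the Lewisian/SSA arithmetic behind that number (a sketch, treating “this particular awakening” as the evidence and the N awakenings as equally likely under Tails):

```latex
\begin{align*}
P(H \mid \text{this awakening})
  &= \frac{P(H)\,P(\text{this} \mid H)}{P(H)\,P(\text{this} \mid H) + P(T)\,P(\text{this} \mid T)}
   = \frac{\tfrac12 \cdot 1}{\tfrac12 \cdot 1 + \tfrac12 \cdot \tfrac1N}
   = \frac{N}{N+1} \\
P(T \mid \text{this awakening}) &= \frac{1}{N+1}
\end{align*}
```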
There’s a general consensus that, although quantum theory has changed our understanding of reality, Newtonian physics remains a reliable short-term guide to the macro world. In principle, the vast majority of macro events that are just about to happen are thought to be 99.9999% inevitable, as opposed to 100% as Newton thought. From that I deduce that if a coin is shortly to be flipped, the outcome is unknown but is, for all practical purposes, as good as determined. Whereas if a coin is flipped farther into the future from the point of prediction, the outcome is proportionately more likely to be undetermined.
I’m willing to concede debate about this. What I do recognise is that Beauty’s answer of 2⁄3 Heads, after she learns it’s Monday, depends on the outcome being already certain but unknown. Whereas if the equivalent of a quantum coin were to be flipped on Monday night, that makes a difference. In that case, waking on Monday morning, Beauty would not yet be in a Heads world or a Tails world. Her answer would certainly be 1⁄2 after she learns it’s Monday. What it would be before she learns it’s Monday would depend on what quantum theory model is used. I can consider this another time.
Perspective disagreement between interacting parties, as a result of someone having more than one possible self-locating identity, is something I can certainly see a reason for. Invalidating someone’s likelihood of what that identity might be is something I can’t find a reason for. I’ve looked hard.
I’d like to explore your simplified experiment. First it’s important to pin down precisely what happens, under Heads, to the version of me that is not woken during the experiment. If that other me is woken after the experiment and told this fact, then there’s no controversy: on finding myself awake in the experiment, my answer is definitely 1⁄3 for Heads and 2⁄3 for Tails. Furthermore, it should make no difference which version might have woken inside the experiment and which outside, assuming the coin landed Heads. Nor does it matter whether that potential selection was made before the flip, or whether I’m subsequently told what the choice was. I’d argue that this information about my possible identity is irrelevant to my credence about the coin.
This takes us to a controversy at the heart of the anthropic debate. In the event of Heads, if the version of me that is not woken in the experiment never wakes up at all, it becomes like standard Sleeping Beauty and the answer is 1⁄2 for Heads or Tails. This is because all awakenings will now be inside the experiment and at least one awakening is guaranteed. Regardless of identity, my mind was certain to continue, so long as at least one version woke up. Whether it’s the original or the clone, both share the same memories and there is no qualitative difference for the guaranteed continuity of my consciousness. All that matters is that there is no possible experience outside the experiment.
Even if it is an uncertain event as to which body woke up, that uncertainty doesn’t apply to my mind, which was guaranteed to carry on in whichever body it found itself. For the unconscious body that never wakes up, no mind is present. If that body was the original, its former mind now continues in the clone body, complete with memories. There is no qualitative difference between continuing in my original body and continuing as the clone. In terms of actual consciousness, my primitive self has no greater or lesser claim to identical memories of my past because of the body I have. For some, this will be controversial.
It’s also irrelevant whether the potential sole awakening of original or clone was decided before the flip, or whether I’m told what the choice was. Would you actually claim it’s 1⁄3 for Heads provided that, in the event of that outcome, you don’t know whether you woke as the original or the clone, but that if you learn what the potential Heads selection was – regardless of whether this turns out to be original or clone – Heads goes up to 1/2? We’ve touched on this before. It wouldn’t be a perspective disagreement with a third party. It would be a perspective disagreement with yourself.
If I am understanding correctly, you are saying that if the sleeping beauty problem does not use a coin toss but measures the spin of an electron instead, then the answer would be different. In the coin’s case, you will give the probability of Heads (yet to be tossed) as 2⁄3 after learning it is Monday. But in the spin’s case, or with a quantum coin, the probability must be 1⁄2 after learning it is Monday, since it is a quantum event yet to happen.
That seems very ad hoc to me. And I think differentiating “true quantum randomness” from something “99.99999% inevitable” in probability theory is a huge can of worms. But anyway, my question is: if the sleeping beauty problem uses a quantum coin, what is the probability of Heads when you wake up, before being told what day it is? And what is your probability after learning “it is Monday now”?
You said the answer depends on the quantum model used. I find that difficult to understand. Quantum models give different interpretations to make sense of the observed probabilities; the probability part is just experimental observation, not changed by which interpretation one prefers. But anyway, I am interested in your answer: how can it both keep giving 1⁄2 to a quantum coin yet to be tossed and obey Bayesian probability when learning it is Monday?
As for the cloning-and-waking experiment, you said the answer depends on what happens after the experiment, i.e. whether or not there will be further awakenings: if there are, thirding; if not, halving. Again, very ad hoc. If the later awakening depends on a second coin to be tossed after the experiment ends, what then? How can an independent future event retroactively affect the probability of the first coin toss? What if both coins are quantum? How can you keep your answer Bayesian?
Just to be clear, my answer to the cloning-and-waking experiment is P(H)=1/3 upon waking up. The probability that I am the randomly chosen one, who would wake up regardless of the coin toss, is 2⁄3. The probability of Heads after learning I am the chosen one is 1⁄2. The answer does not depend on what happens after the experiment. And in all this reasoning, I do not know, and do not need to think about, whether I am the original or the clone.
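For concreteness, a sketch of the arithmetic behind those three numbers, writing C for “I am the randomly chosen copy” and A for “I wake up in the experiment”, and taking P(C)=1/2 independent of the coin (which is how I read the random selection):

```latex
\begin{align*}
P(A) &= P(C) + P(\neg C \wedge T) = \tfrac12 + \tfrac12\cdot\tfrac12 = \tfrac34 \\
P(H \mid A) &= \frac{P(H \wedge C)}{P(A)} = \frac{1/4}{3/4} = \tfrac13 \\
P(C \mid A) &= \frac{P(C)}{P(A)} = \frac{1/2}{3/4} = \tfrac23 \\
P(H \mid A \wedge C) &= P(H) = \tfrac12
\end{align*}
```

(The second line uses the fact that under Heads I am awake exactly when I am the chosen copy, so P(H ∧ A) = P(H ∧ C) = 1/4.)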
At this point, I find I am focusing more on arguing against SSA than on explaining PBR, and the discussion is drifting away from concrete thought experiments with numbers toward metaphysical arguments.
I haven’t followed your arguments all the way here but I saw the comment
If I am understanding correctly, you are saying that if the sleeping beauty problem does not use a coin toss but measures the spin of an electron instead, then the answer would be different.
and would just jump in and say that others have made similar arguments. The one written example I’ve seen is this Master’s Thesis.
I’m not sure if I’m convinced, but at least I buy that, depending on how the particular selection comes about, there can be instances where the difference between probabilities as subjective credences and probabilities as densities of Everett branches has decision-theoretic implications.
Edit: I’ve fixed the link
The link points back to this post. But I also remember reading similar arguments from halfers before, that the answer changes depending on whether it is true quantum randomness; I could not remember the source though.
But the problem remains the same: can Halfers keep the probability of a coin yet to be tossed at 1⁄2 and remain Bayesian? Michael Titelbaum showed this cannot be true as long as the probability of “today is Tuesday” is valid and non-zero. If Lewisian Halfers argue that, unlike true quantum randomness, a coin yet to be tossed can have a probability differing from half, so that they can endorse self-locating probability and remain Bayesian, then the question can simply be changed to use a quantum measurement (a quantum coin, for ease of expression). Then Lewisian Halfers face the counter-argument again: either the probability is 1⁄2 at waking up and remains 1⁄2 after learning it is Monday, which is non-Bayesian; or the probability is indeed 1⁄3 and updates to 1⁄2 after learning it is Monday, which is non-halving. The latter effectively says SSA is correct only for non-quantum events and SIA only for quantum events. But differentiating quantum from non-quantum events is no easy job: a detailed analysis of a simple coin toss traces the result to many independent physical causes, which can very well depend on quantum randomness. What shall we do in those cases? It is a very assumption-heavy defence of an initially simple Halfer answer.
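For reference, here are the two horns of that dilemma in numbers, assuming the usual even split of the Tails credence over Monday and Tuesday at waking (that split is my assumption, not spelled out above):

```latex
\begin{align*}
\text{Lewisian Halfer at waking:}\quad
 & P(H \wedge \text{Mon}) = \tfrac12,\; P(T \wedge \text{Mon}) = P(T \wedge \text{Tue}) = \tfrac14
 \;\Rightarrow\; P(H \mid \text{Mon}) = \frac{1/2}{1/2 + 1/4} = \tfrac23 \\
\text{Thirder at waking:}\quad
 & P(H \wedge \text{Mon}) = P(T \wedge \text{Mon}) = P(T \wedge \text{Tue}) = \tfrac13
 \;\Rightarrow\; P(H \mid \text{Mon}) = \frac{1/3}{1/3 + 1/3} = \tfrac12
\end{align*}
```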
Edit: Just gave the linked thesis a quick read. The writer seems to be partial to MWI and thinks it gives a more logical explanation of anthropic questions. He is not keen on treating probability/chance as a randomly selected possible world becoming actualized, but holds that all possible worlds ARE real (many worlds), and that the source of probability (or “the illusion of probability”, as the writer says) is which branch-world “I” am in. My problem with that is that the “I” in such statements is taken as intrinsically understood, i.e. it has no explanation. It does not give any justification for what the probability of “I am in a Heads world” is. For it to give a probability, additional assumptions about “among all the physically similar agents across the many branched worlds, which one is I” are needed. And that circles back to anthropics. At the end of the day, it is still using anthropic assumptions to answer anthropic problems, just like SIA or SSA.
I have argued against MWI in anthropics in another post, if you are interested.