I sense that you’re close to being converted into a ‘pure halfer’, willing to assign probability to self-locations. Let me address what you said.
“Your latter example: after seeing Red/Blue, I will not say Heads’ probability is halved while Tails’ remain the same. I will say there is no way to update. “
I assume you mean that the probability of 1⁄2 for Heads or Tails – before and after she sees either colour – remains correct. We can agree on that. What matters is why it is true. You’re arguing that no updating is possible after she sees a colour. I invite you to examine this again.
The moment she sees – for example – blue, we can agree that Tails/Red and Heads/Red are definitely ruled out. The difference is that Tails/Red reflects a self-location whereas Heads/Red does not. You’re claiming that because Tails/Red reflects a self-location – i.e. the room colour isn’t a random event – no updating is allowed. But you can’t make the same claim about Heads/Red. With Heads, the room being painted red is a random event. For Beauty, the Heads/Red outcome had an unequivocal 1⁄4 chance of being encountered, and it has just been eliminated by her seeing the colour blue. So how can Heads not be halved?
Suppose Beauty had her eyes closed with the same setup. Suppose, before she opens her eyes, she must be told straight away whether she’s in a Heads/Red awakening. It’s confirmed that she’s not. You’d agree that the probability of Heads is now halved while for Tails it isn’t. No problem updating. It’s 1⁄3 Heads, 2⁄3 Tails. Next, suppose she must be told whether she’s in a Tails/Tuesday awakening. It’s confirmed that she’s not. By your reasoning, this would not reduce the probability of Tails. Moreover, Heads has already been halved and there is no reason to change it back. Therefore, if self-locations are un-updatable, her probability must still be 1⁄3 for Heads and 2⁄3 for Tails. What’s more, the information she just received is the same as she would have got by opening her eyes and seeing blue.
The only logical reason the coin outcome is still 1⁄2 after she sees either colour is that Heads and Tails both get halved. This means that ignorance or information about self-location status, such as Monday/Tuesday or Original/Clone, is subject to the normal rules of probability and conditionalisation.
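The parity bookkeeping above can be tallied explicitly. This is only an illustration of the halfer accounting being defended here; the equal 1⁄4 priors on the four coin/colour cells are precisely the assignment under dispute.

```python
# Four coin/colour cells with equal prior weight 1/4 each
# (assumption: self-locations get probability, as in the parity argument).
priors = {
    ("Heads", "Red"): 0.25,
    ("Heads", "Blue"): 0.25,
    ("Tails", "Red"): 0.25,
    ("Tails", "Blue"): 0.25,
}

def posterior_heads(seen_colour):
    # Condition on the observed colour: keep matching cells, renormalise.
    kept = {cell: p for cell, p in priors.items() if cell[1] == seen_colour}
    total = sum(kept.values())
    return sum(p for (coin, _), p in kept.items() if coin == "Heads") / total

# Seeing either colour eliminates one 1/4 cell from Heads and one from Tails,
# so the ratio is preserved and the posterior stays at 1/2.
print(posterior_heads("Blue"))  # 0.5
print(posterior_heads("Red"))   # 0.5
```

Both eliminations halve their respective outcomes, which is exactly why renormalisation leaves the coin at 1⁄2.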
Hi Simon, before anything let me say I like this discussion with you – using concrete examples. I find it helps to pinpoint the contention, which makes thinking easier.
I think our difference is due to this here: “I assume you mean that the probability of 1⁄2 for Heads or Tails – before and after she sees either colour – remains correct.” Yes and No, but most importantly not in the way you have in mind.
Just to quickly reiterate: I argue the first-person perspective is a primitive axiomatic fact. There is no way to explain or reason about it. It just is. Therefore no probability. Everything follows from here.
It means everything using self-locating probability is invalid. And that includes things like P(Heads|Red). So there is no “probability of Heads given I see a red room”. Red cannot be conditioned on because it involves the probability “Now is Tuesday” vs “Now is Monday”.
Let me follow your steps. If it is Heads, then seeing the colour is a random event; there is no problem at all, and halving the chance is OK. In the case of Tails, traditional updating would eliminate half the chance too, because it gives equal probability to “Now is Monday” vs “Now is Tuesday”. But because self-locating probability does not exist, there is no basis to split it evenly, or any other way for that matter. There is no valid way to split it at all, and that includes a 0–100 split.
So what I meant by “there is no way to update” is not that the correct value of Tails remains unchanged at 1⁄2. I meant there is no correct value, period. And you can’t renormalize it with Heads’ chance of 1⁄4 to get anything.
This is why I suggested using the repeatable example with long-run frequencies. It makes the problem clearer. If you follow the first-person perspective of a subject, then the long-run frequency of Heads is 1⁄2. However, the long-run frequency for Red or Blue would not converge to any particular value. And as you suggested, if you are always told whether it is Heads and Red, and only count the iterations where you are not told so, then the relative frequency of Heads would indeed approach 1⁄3. Yet if you are always told whether it is Tails and Red, and only count the iterations where you are not told so, then there is still no particular long-run frequency for Tails. Overall there is no long-run frequency for Tails or Heads when you only count iterations of a particular colour.
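The 1⁄3 long-run frequency that both sides accept for the explicit-question case can be checked with a quick per-iteration simulation. The 50/50 room-painting under Heads is as described in the setup; an iteration with Tails is never a “Heads and Red” iteration, so it is always counted.

```python
import random

def heads_frequency(trials=200_000, seed=1):
    """Relative frequency of Heads among iterations where the subject is
    NOT told 'Heads and Red', counted once per iteration."""
    rng = random.Random(seed)
    heads_count = not_told = 0
    for _ in range(trials):
        heads = rng.random() < 0.5            # fair coin
        red = heads and rng.random() < 0.5    # under Heads, colour is random
        if not (heads and red):               # "Heads and Red?" answered negatively
            not_told += 1
            if heads:
                heads_count += 1
    return heads_count / not_told

print(round(heads_frequency(), 2))  # ≈ 0.33
```

Counted this way, Heads/Blue iterations (1⁄4 of all runs) are weighed against all Tails iterations (1⁄2), giving (1⁄4)/(3⁄4) = 1⁄3, in line with the passage.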
Back to your example, what is the probability of Heads after waking up? It is 1⁄2. That is the best you can give. But it is not the probability of Heads given Red. Using the color of the room in probability analysis won’t yield any valid result. Because as the experiment is set up, the color of the room involves self-locating probability: it involves explaining why the first-person perspective is this particular observer/moment.
Ok let’s see if we can pin this down! Either Beauty learns something relevant to the probability of the coin flip, or she doesn’t. We can agree on this, even if you think updating can’t happen with self-location.
Let’s go back to a straightforward version of the original problem. It’s similar to one you came up with. If Heads there is one awakening; if Tails there are two. If Heads, it will be decided randomly whether she wakes on Monday or Tuesday. If Tails, she will be woken on both days with amnesia in between. She is told in advance that, regardless of the coin flip, Bob and Peter will be in the room. Bob will be awake on Monday while Peter is asleep; Peter will be awake on Tuesday while Bob is asleep. Whichever of the men wakes up, it will be two minutes before Beauty (if she’s woken). All this is known in advance. Neither man knows the coin result, nor will they undergo amnesia. Beauty has not met either of them, so although she knows the protocol, she won’t know the name of whoever’s awake with her, or the day, unless he reveals it.
Bob and Peter’s perspectives when they wake up are not controversial. Each is guaranteed to find the other guy asleep. In the first two minutes, each will find Beauty asleep. During that time, both men’s probability is 1⁄2 for Heads and Tails. If Beauty is still asleep after two minutes, it’s definitely Heads. If she wakes up, it’s 1⁄3 Heads and 2⁄3 Tails.
A Thirder believes that Beauty shares the same credence as Bob or Peter when she wakes. As a Halfer, I endorse perspective disagreement. Unlike the guys, Beauty was guaranteed to encounter someone awake. Her credence therefore remains 1⁄2 for Heads and Tails, regardless of whom she encounters.
What happens when the man awake reveals his name – say Bob? This reveals to Beauty that today is Monday. I would say that the probability of the coin remains unchanged from whatever it was before she got this information. For a Thirder this is 1⁄3 Heads, 2⁄3 Tails. For a Halfer it is still 1⁄2. I submit that the reason the probability remains unchanged is that something in both Heads and Tails was eliminated with parity. But suppose she got the information in stages.
She first asks the experimenter: is it true that the coin landed Tails and I’m talking to Peter? She’s told that this is not true. I regard this as a legitimate update that halves the probability of Tails, whereas you don’t. Such an update would make her credence 2⁄3 Heads and 1⁄3 Tails. You would claim that no update is possible because what’s been ruled out is the self-location Tuesday/Tails. For you, ruling out Tuesday/Tails says nothing new about the coin. You argue that, whether the coin landed Heads or Tails, there is only one ‘me’ for Beauty and a guaranteed awakening applied to her. So the probability of Heads or Tails must be 1⁄2 before and after Tuesday/Tails is ruled out.
We might disagree whether ruling out the self-location Tuesday/Tails permitted an update or whether her credence must remain 1⁄2. But we can agree that, if she gets further information about a random event that had prior probability, she must update in the normal way. Even if ruling out a self-location told her nothing about the coin probability, it can’t prevent her from updating if she does get this information.
So now she asks the experimenter: is it true that the coin landed Heads and I’m talking to Peter? She’s told that this is not true. This tells her that Bob is the one she’s interacting with, plus she knows it’s Monday. What’s more, this is not a self-location that’s been ruled out like before. The prior possibility of the coin landing Heads plus the prior possibility of her encountering Peter was a random sequence that has just been eliminated. It definitely requires an update. If her credence immediately before was 1⁄2 for Heads, it must be 1⁄3 now. If her credence was 2⁄3 for Heads – which I think was correct—then it is 1⁄2 now. Which is it?
That brings us back to Beauty’s position before the man says his name. Her credence for the coin is 1⁄2, before she learns who she’s with. Learning his identity rules out a possibility in both coin outcomes, as described above. The order in which she got the information makes no difference to what she now believes. The fact remains that a random event with Heads, and a self-location with Tails, were both ruled out. It’s the parity in updating that makes her credence still 1⁄2, whoever turns out to be awake with her.
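The staged updates can be written out as a small enumeration. As an illustration only, this assumes the halfer prior that splits each coin outcome evenly over the man encountered (four cells of 1⁄4 each), which is exactly the assignment the two positions dispute.

```python
# Four coin/man cells, each with the assumed prior weight 1/4.
priors = {
    ("Heads", "Bob"): 0.25, ("Heads", "Peter"): 0.25,
    ("Tails", "Bob"): 0.25, ("Tails", "Peter"): 0.25,
}

def p_heads(ruled_out):
    # Remove the eliminated cells, renormalise, and read off P(Heads).
    kept = {cell: p for cell, p in priors.items() if cell not in ruled_out}
    total = sum(kept.values())
    return sum(p for (coin, _), p in kept.items() if coin == "Heads") / total

print(p_heads([("Tails", "Peter")]))                      # 2/3 after "not Tails/Peter"
print(p_heads([("Heads", "Peter")]))                      # 1/3 after "not Heads/Peter"
print(p_heads([("Tails", "Peter"), ("Heads", "Peter")]))  # 1/2 once both are ruled out
```

Under this accounting the order of the two answers is irrelevant: each eliminates one 1⁄4 cell, and together they restore the 1⁄2 symmetry.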
Let me lay out the difference between my argument and yours, following your example.
After learning the name/day of the week, the Halfer’s probability is still 1⁄2. You said this is because something in both Heads and Tails was eliminated with parity. My argument is different: I would say there is no way to update the probability based on the name, because it would involve using self-locating probability.
Let’s break it down in steps as you suggested.
Suppose I ask, “Is it true that the coin landed Tails and I’m talking to Peter?” and get a negative answer. You say this would eliminate half the probability for Tails. I say there is no way to say how much of the probability of Tails is eliminated (because it involves self-locating probability), so we cannot update the probability on this information. You say that, considering the answer, Tails is reduced to 1⁄4. I say considering the answer gets you nothing meaningful – no value.
Suppose I ask, “Is it true that the coin landed Heads and I’m talking to Peter?” and get a negative answer. You say this would eliminate half the probability for Heads, making it 1⁄4. I agree with this.
Seeing Bob would effectively be the same as getting the two negative answers together. How do they combine? You say Heads and Tails each lose half their probability (both 1⁄4 now), so after renormalizing, the probability of Heads remains unchanged at 1⁄2. I say that since one of the steps is invalid, the combined calculation is invalid too. There is no probability conditioned on seeing Bob (again because it involves self-locating probability).
I suppose your next question would be: if the first question is invalid for updating, wouldn’t I just update based on the second question alone, which would give a probability of 1⁄3 for Heads?
That is correct, as long as I indeed asked the question and got the answer. Like I said before, the long-run frequency for those cases would converge on 1⁄3. But that is not how the example is set up; it only gives information about which person is awake. If I actually asked this question, I would get a positive or negative answer. But no matter which person I see, I could never get a positive result: even if I see Peter, there is still the possible case of a Tails-first awakening (which does not have a valid probability), so no positive answer. Conversely, seeing Bob would mean the answer is negative, but it also eliminates the case of Peter/Tails (again, no valid probability). So the probability of Tails after seeing Bob and the probability of Tails after getting a negative answer are not the same thing.
That is also the case for non-anthropic probability questions. For example, a couple has two children and you want to give the probability that both of them are boys. Suppose you have a question in mind: “Is there a boy born on Sunday?” However, the couple is only willing to answer whether a child was born on a weekend, meaning the question you have in mind would never get a positive answer. Anyway, the couple says, “There is no boy born on a weekend.” So your question got a negative answer. But that does not mean the probability of two boys given no boy born on a weekend is the same as the probability given no boy born on Sunday. You have to combine the case of no boy born on Saturday as well. This is straightforward. The only difference is that in the anthropic example, the other part that needs to be combined has no valid value. I hope this helps to pin down the difference. :)
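The two-children arithmetic above can be verified by brute-force enumeration, under the usual idealisation for this puzzle: each child’s sex and day of birth uniform and independent.

```python
from itertools import product

days = range(7)  # 0..4 weekdays, 5 = Saturday, 6 = Sunday
children = list(product(["boy", "girl"], days))  # 14 equally likely states per child

def p_two_boys(condition):
    # Enumerate all equally likely (child1, child2) pairs meeting the condition.
    families = [f for f in product(children, children) if condition(f)]
    both_boys = [f for f in families if all(sex == "boy" for sex, _ in f)]
    return len(both_boys) / len(families)

def no_boy_on_weekend(f):
    return not any(sex == "boy" and d >= 5 for sex, d in f)

def no_boy_on_sunday(f):
    return not any(sex == "boy" and d == 6 for sex, d in f)

print(p_two_boys(no_boy_on_weekend))  # 25/144 ≈ 0.174
print(p_two_boys(no_boy_on_sunday))   # 36/169 ≈ 0.213
```

The two conditionals genuinely differ, confirming the point: a negative answer to “no boy on a weekend” is not interchangeable with a negative answer to “no boy on Sunday”.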
I doubt we’ll persuade each other :) As I understand it, in my example you’re saying that the moment a self-location is ruled out, any present and future updating is impossible – but the last known probability of the coin stands. So if Beauty rules out Heads/Peter and nothing else, she must update Heads from 1⁄2 to 1⁄3. Then if she subsequently rules out Tails/Peter, you say she can’t update, so she will stay with the last known valid probability of 1⁄3. On the other hand, if she rules out Tails/Peter first, you say she can’t update, so it’s 1⁄2 for Heads. However, you also say no further updating is possible even if she then rules out Heads/Peter, so her credence will remain 1⁄2, even though she ends up with identical information. That is strange, to say the least.
I’ll make the following argument. When it comes to probability or credence from a first-person perspective, what matters is knowledge or lack of it. People can use that knowledge to judge what is likely to be true for them at that moment. Their ignorance doesn’t discriminate between unknown external events and unknown current realities, including self-locations. Likewise, their knowledge is not invalidated just because a first-person perspective might happen to conflict with a third-person perspective, or because the same credence may not be objectively verifiable in the frequentist sense. In either case, it’s their personal evidence for what’s true that gives them their credence, not what kind of truth it is. That credence, based on ignorance and knowledge, might or might not correspond to an actual random event. It might reflect a pre-existing reality – such as there being a 1⁄10 chance that the 9th digit of Pi is 6. Or it might reflect an unknown self-location – such as “today is Monday or Tuesday”, or “I’m the original or clone”. Whatever they’re unsure about doesn’t change the validity of what they consider likely.
You could have exactly the same original Sleeping Beauty problem, but translated entirely to self-location without the need for a coin flip. Consider this version. Beauty enters the experiment on a Saturday and is put to sleep. She will be woken in both Week 1 and Week 2. A memory wipe will prevent her from knowing which week it is, but whenever she wakes, she will definitely be in one or the other. She also knows the precise protocol for each week. In Week 1, she will be woken on Sunday, questioned, put back to sleep without a memory wipe, woken on Monday and questioned. This completes Week 1. She will return the following Saturday and be put to sleep as before. She is now given a memory wipe of both her awakenings from Week 1. She is then woken on Sunday and questioned, but her last memory is of the previous Saturday when she entered the experiment. She doesn’t know whether this is the Sunday of Week 1 or Week 2. Next she is put to sleep without a memory wipe, woken on Monday and questioned. Her last memory is the Sunday just gone, but she still doesn’t know if it’s Week 1 or Week 2. Next she is put back to sleep and given a memory wipe, but only of today’s awakening. Finally she is woken on Tuesday and questioned. Her last memory is the most recent Sunday. She still won’t know which week she’s in.
The questions asked of her are as follows. When she awakens for what seems to be the first time – always a Sunday – what is her credence for being in Week 1 or Week 2? When she awakens for what seems to be the second time – which might be a Monday or Tuesday – what is her credence for being in Week 1 or Week 2?
Essentially this is the same Sleeping Beauty problem. The fact that her uncertainty is about which week she’s in, rather than about a coin flip, doesn’t prevent her from assigning a credence/probability based on the evidence she has. On her Sunday awakening, she has equal evidence to favour Week 1 and Week 2, so it is valid for her to assign 1⁄2 to both. On her weekday awakenings, Halfers and Thirders will disagree whether it’s 1⁄2 or 1⁄3 that she’s in Week 1. If she’s told that today is Monday, they will disagree whether it’s 2⁄3 or 1⁄2 that she’s in Week 1.
We could add Bob to the experiment. Like Beauty, Bob enters the experiment on Saturday. His protocol is the same, except that he is kept asleep on the Sundays in both weeks. He is only woken on Monday of Week 1, then on Monday and Tuesday of Week 2. Each time he’s woken, his last memory is the Saturday he entered the experiment. He therefore disagrees with Beauty. From his point of view, it’s 1⁄3 that they’re in Week 1, whereas she says it’s 1⁄2. If told it’s Monday, for him it’s 1⁄2 that they’re in Week 1, and for her it’s 2⁄3.
This recreates perspective disagreement – but exclusively using self-location. You might be tempted to argue that neither Beauty nor Bob can ever assign any probability or likelihood as to which week they’re in. I say it’s legitimate for them to do so, and to disagree.
Let’s not dive into another example right away. Something is amiss here. I never said anything about the order of getting the answers to “Is it Tails and Peter?” and “Is it Heads and Peter?” changing the probability. I said we cannot update based on the negative answer to “Is it Tails and Peter?” because it involves using self-locating probability. Whichever the order is, we can nevertheless update the probability of Heads to 1⁄3 when we get the negative answer to “Is it Heads and Peter?”, because there is no self-locating probability involved there. But 1⁄3 is the correct probability only if Beauty did actually ask the question and get the negative response, i.e. there has to be a real question to update on. That does not mean Beauty would inevitably update P(Heads) to 1⁄3 no matter what.
Before Beauty opens her eyes, she could ask, “Is it Heads and Peter?” If she gets a positive answer, then the probability of Heads would be 1. If she gets a negative answer, the probability of Heads would update to 1⁄3. She could also ask, “Is it Heads and Bob?”, and the result would be the same: positive answer, P(Heads)=1; negative answer, P(Heads)=1⁄3. So whichever of the two symmetrical questions she asks, she can only update her probability after getting the answer to it. I think we can agree on this.
The argument that my approach would always update P(Heads) to 1⁄3 no matter which person I see goes as follows: first, ask no real question, and just look at whether it is Peter or Bob. If I see Bob, then retroactively pose the question as “Is it Heads and Peter?” and get a negative answer. If I see Peter, then retroactively pose the question as “Is it Heads and Bob?” and get a negative answer. Playing the game like this would guarantee a negative answer no matter what. But clearly, you get the negative answer because you are actively changing the question to look for it. We cannot update the probability this way.
My entire solution is suggesting there is a primitive axiom in reasoning: the first-person perspective. Recognizing it resolves the paradoxes in anthropics and more. I cannot argue why perspective is axiomatic, except that it intuitively appears to be right, i.e. “I naturally know I am this person, and there seems to be no explanation or reason behind it.” Accepting it would overturn the Doomsday Argument (SSA) and the Presumptuous Philosopher (SIA); it means there is no way to think about self-locating probability, which explains why there is no update after learning it’s Monday; it explains why perspective disagreement in anthropics is correct; and it results in agreement between the Bayesian and frequentist interpretations in anthropics.
Because I regard it as primitive, if you disagree with it and argue there are good ways to reason about and explain the first-person perspective, and furthermore to assign probabilities to it, then I don’t really have a counter-argument. Except you would have to resolve the paradoxes your own way. For example, why do Bob and Beauty answer differently to the same question? My reason for perspective disagreement in anthropics is that the first-person perspective is unexplainable, so a counterparty cannot comprehend it. What is yours? (Just to be clear, I do not think there is a probability for whether it is Week 1 or Week 2, for either Beauty or Bob.) Do you think the Doomsday Argument is right? What about Nick Bostrom’s simulation argument, the correct reference class for oneself, etc.?
Conversely, I don’t think regarding the first-person perspective as not primitive, and subsequently, stating there are valid ways to think and assign self-locating probabilities is a counter-argument against my approach either. The merit of different camps should be judged on how well they resolve the paradoxes. So I like discussions involving concrete examples, to check if my approach would result in paradoxes of its own. I do not see any problem when applied to your thought experiments. The probability of Heads won’t change to 1⁄3 no matter which person/color I see. I actually think that is very straightforward in my head. But I’m sensing I am not doing a good job explaining it despite my best effort to convince you :)
I’ve allowed some time to digest on this occasion. Let’s go with this example.
A clone of you is created when you’re asleep. Both of you are woken with identical memories. Under your pillow are two envelopes, call them A and B. You are told that inside Envelope A is the title ‘original’ or ‘copy’, reflecting your body’s status. Inside Envelope B is also one of those titles, but the selection was random and regardless of status. You are asked the likelihood that each envelope contains ‘original’ as the title.
I’m guessing you’d say you can’t assign any valid probability to the contents of Envelope A. However, you’d say it’s legitimate to assign a 1⁄2 probability that Envelope B contains ‘original’.
Is there a fundamental difference here, from your point of view? Admittedly, if Envelope A contains ‘original’, this reflects a pre-existing self-location that was previously known but became unknown while you were asleep; whereas if Envelope B contains ‘original’, this reflects an independent random selection that occurred while you were asleep. However, your available evidence is identical for what could be inside each envelope. You therefore have identical grounds to assign likelihood about what is true in both.
Suppose it’s revealed that both envelopes contain the same word. You are asked again the likelihood that the envelopes contain ‘original’. What rules do you follow? Would you apply the non-existence of Envelope A’s probability to Envelope B? Or would you extend the legitimacy of Envelope B’s probability to Envelope A?
I’m guessing you would continue to distinguish the two, stating that 1⁄2 was still a valid probability for Envelope B containing ‘original’ but no such likelihood existed for Envelope A – even knowing that whatever is true for Envelope B is true for Envelope A. If so, then it appears to be a semantic difference. Indeed, from a first-person perspective, it seems like a difference that makes no difference. :)
Here is the conclusion based on my positions: the probability of Original for Envelope A does not exist; the probability of Original for Envelope B is 1⁄2; the probability of Original for Envelope B, given that the contents are the same, does not exist. Just like in the previous thought experiments, it is invalid to update based on that information.
Remember my position says the first-person perspective is a primitive axiomatic fact? E.g. “I naturally know I am this particular person. But there is no reason or explanation for it. I just am.” This means arguments that need to explain the first-person perspective, such as treating it as a random sample, are invalid.
And the difference between Envelopes A and B is that the probability regarding B does not need to explain the first-person perspective; it can just use “I am this person” as given. My envelope’s content is decided by a coin toss. The probability for A, by contrast, needs to explain the first-person perspective (e.g. by treating it as a random sample).
Again this is easier seen with a frequentist approach.
If you repeat the same clone experiment many times and keep recording whether you are the Original or the Clone in each iteration, then even as time goes on there is no reason for the relative fraction of “I am the Original” to converge to any particular value. Of course, we can count everyone from these experiments, and the combined relative fraction would be 1⁄2. But without additional assumptions such as “I am a random sample from all copies”, that is not the same as the long-run frequency for the first person.
In contrast, I can repeat the cloning experiment many times, and the long-run frequency for Envelope B is going to converge to 1⁄2 for me, as it is the result of a fair coin toss. There is no need to explain why “I am this person” here. So from a first-person perspective, the probability for B describes my experience and is verifiable, while the probability about A is not, unless we consider every copy together – which is not about the first person anyway.
Even though you didn’t ask this (and I am not 100% set on it myself), I would say the probability that the envelopes’ contents are the same is also 1⁄2. I am either the Original or the Clone, and there is no way to reason about which one I am. But luckily it doesn’t matter which is the case: the probability that Envelope B got it right is still an even toss. And long-run frequency would validate that too.
But what Envelope B says, given that it got it right, depends on which physical copy I am. There is no valid value for P(Envelope B says Original | the contents are the same). For example, say I repeat the experiment 1000 times, and it turns out I was the Clone in 300 of those experiments and the Original in 700. Due to fair coin tosses, I would have seen about an equal number of Original vs Clone (500 each) in Envelope B, which corresponds to the probability of 1⁄2. And the contents would be the same about 150 times in experiments where I was the Clone and 350 times where I was the Original (500 in total, which corresponds to the probability of “same content” being 1⁄2). Yet among all iterations where the contents are the same, in 30% of them “I am the Clone”.
But if I am the Original in all 1000 experiments, then among all iterations where the contents are the same, I am 100% the Original. (The other two probabilities of 1⁄2 above still hold.) I.e. the relative fraction of Original in Envelope B, given that the contents are the same, depends on which physical copy I am. And there is no way to reason about it unless some additional assumptions are made (e.g. treating myself as a random sample would make it 1⁄2).
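The bookkeeping in the two scenarios above can be simulated. The per-iteration status lists are stipulated inputs (the point of the passage being that nothing determines them), while Envelope B is an independent fair toss each time:

```python
import random

def clone_fraction_given_same(statuses, seed=0):
    """Among iterations where Envelope B matches the subject's status,
    return the fraction in which the subject was the Clone."""
    rng = random.Random(seed)
    same = same_and_clone = 0
    for status in statuses:
        envelope_b = rng.choice(["Original", "Clone"])  # independent fair toss
        if envelope_b == status:                        # contents are the same
            same += 1
            if status == "Clone":
                same_and_clone += 1
    return same_and_clone / same

# 300 Clone + 700 Original iterations, as in the passage: fraction ≈ 0.30
print(clone_fraction_given_same(["Clone"] * 300 + ["Original"] * 700))
# If the subject is the Original every time, the conditional fraction is 0:
print(clone_fraction_given_same(["Original"] * 1000))
```

The unconditional frequencies (Envelope B says Original, contents the same) hover near 1⁄2 in both runs; only the conditional depends on the stipulated status list, which is the asymmetry the argument turns on.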
In other words, to assign a probability to Envelope B given that the contents are the same, we must first find a way to assign self-locating probabilities. And that cannot be done.
I’ll come back with a deeper debate in good time. Meanwhile I’ll point out one immediate anomaly.
I was genuinely unsure which position you’d take when you learnt the two envelopes were the same. I expected you to maintain there was no probability assigned to Envelope A. I didn’t expect you to invalidate probability for the contents of Envelope B. You argued this because any statement about the contents of Envelope B was now linked to your self-location – even though Envelope B’s selection was unmistakably random and your status remained unknown.
It becomes even stranger when you consider your position if told that the two envelopes were different. In that event, any statement about the contents of Envelope B refers just as much to self-location. If the two envelopes are different, a hypothesis that Envelope B contains ‘copy’ is the same hypothesis that you’re the original—and vice versa. Your reasoning would equally compel you to abandon probability for Envelope B.
Therein lies the contradiction. The two envelopes are known in advance to be either the same or different. Whichever turns out to be true will neither reveal nor suggest the contents of Envelope B. Before you discover whether the envelopes are the same or different, Envelope B definitely had a random 1⁄2 chance of containing ‘original’ or ‘copy’. Once you find out whether they’re the same or different, regardless of which turns out to be true, you’re saying that Envelope B can no longer be assigned probability.
Haha, that’s OK. I admit I am not the best communicator, nor is anthropics an easy topic to explain. So I understand the frustration.
But I really think my position is not complicated at all. I genuinely believe that. It just says: take whatever the first-person perspective is as given, and don’t come up with assumptions that attempt to explain it.
Also, I want to point out that I didn’t say P(B says Original) is invalid; I said P(B says Original | contents are the same) is invalid. Some of your sentences seem to suggest it the other way around. Just want to clear that up.
And I’m not playing any tricks. Remember Peter/Bob? I said the probability of Heads is 1⁄2, but you cannot update on the information that you have seen Peter, the reason being that it involves using self-locating probability. It’s the same argument here. There was a valid P(Heads) but no valid P(Heads|Peter). There is a valid P(B says Original) but no valid P(B says Original|Same), for the exact same reason.
And you can’t update the probability given that you saw Bob either. But just because you are going to see either Peter or Bob, that does not mean P(Heads) is invalidated; you just can’t update on Peter/Bob, that’s all. Similarly, just because the envelopes are either “same” or “different” doesn’t mean P(B says Original) is invalid. You just cannot update on either.
And the coin toss and Envelope B are both random/unknown processes. So I am not trying to trick you. It’s the same old argument.
And by suggesting you think of repeated experiments and count the long-run frequencies, I didn’t leave much to interpretation. If you imagine repeating the experiments as a first person and can get a long-run frequency, then the probability is valid. If there is no long-run frequency unless you come up with some way to explain the first-person perspective, then there is no valid probability. You can deduce what my position says quite easily like that. There aren’t any surprises.
Anyway, I would still say arguing with you using concrete examples is enjoyable. It pushes me to articulate my thoughts. Though I am hesitant to guess whether you would say it’s enjoyable :) I will wait for your rebuttal in good time.
Ok here’s some rebuttal. :) I don’t think it’s your communication that’s wrong. I believe it’s the actual concept. You once said that yours is a view that no-one else shares. This does not in itself make it wrong. I genuinely have an open mind to understand a new insight if I’m missing it. However I’ve examined this from many angles. I believe I understand what you’ve put forward.
In anthropic problems, issues of self-location and the first-person perspective lie at the heart. A statement about a person’s self-location, such as “today is Monday” or “I am the original”, is indeed a first-person perspective. Such a statement, if found to be true, is a fact that could not have been otherwise. It was not a random event that had a chance of not happening. From this, you’ve extrapolated – wrongly, in my view – that the normal rules of credence and Bayesian updating, based on information you have or don’t yet have, are invalid when applied to self-location.
I’m reminded of the many-worlds quantum interpretation. If we exist in a multiverse, all outcomes take place in different realities and are objectively certain. In a multiverse, credences would be about deciding which world your first-person self is in, not about whether events happened. The multiverse is the ultimate self-location model. It denies objective probability. What you have instead are observers with knowledge or uncertainty about their place in the multiverse.
Whether theories of the multiverse prove to be correct or not, there are many who endorse them. In such a model – where probability doesn’t exist – it is still considered both legitimate and necessary for observers to assign likelihood and credence about what is true for them, and to apply rules of updating based on the information available.
I have a realistic example that tests your position. Imagine you’re an adopted child. The only information you and your adoptive family were given is that your natural parents had three children and that all three were adopted by different families. What is the likelihood that you were the first-born natural child? For your adoptive parents, it’s straightforward. They assign a 1⁄3 probability that they adopted the oldest. According to you, as it’s a first-person-perspective question about self-location, no likelihood can be assigned.
It won’t surprise you to learn that here I find no grounds for you to disagree with your adoptive parents, much less to invalidate a credence. Everyone agrees that you are one of three children. Everyone shares the same uncertainty about whether you’re the oldest. Therefore the credence of 1⁄3 that this is the case must also be shared.
I could tweak this situation to allow perspective disagreement with your adoptive family, making it closer to Sleeping Beauty – and introducing a coin flip. I may do that later.
While there isn’t anything wrong with your summarization of my position, I wouldn’t call it an extrapolation. Instead, I think it is the other camps, like SSA and SIA, that are doing the extrapolation. “I know I am this person but have no explanation or reason for it” seems right, and I stick to it in reasoning. In my opinion, it is SSA and SIA that try to use random sampling in an unrelated domain to give answers that do not exist, which leads to paradoxes.
I recognized my solution (PBR) as incompatible with MWI from the very beginning of forming it. I even wrote about this explicitly, right after the solution to anthropics on my website. The deeper reason is that PBR actually has a different account of what scientific objectivity means. Self-locating probability is only the most obvious manifestation of this difference. I wrote about it in a previous post.
I would suggest not denying PBR just because one likes MWI, since nobody can be certain that MWI is the correct interpretation. Furthermore, there is no reason to be alarmed just because a solution to anthropics is connected with quantum interpretations. The two topics are connected no matter which solution one prefers. For example, there is a series of debates between Darren Bradley and Alastair Wilson about whether or not SIA would naively confirm MWI.
Regarding the disagreement between the adopted son and parents about being firstborn, I agree there is no probability for “I am the firstborn”. There simply is no way to reason about it. The question is set up so that it is easy to substitute it with “what is the probability that the adopted child is the firstborn?” And the answer to that would be 1⁄3. But then it is not a question defined by your first-person perspective. It is about a particular unknown process (the adoption process), which can be analyzed without additional assumptions explaining the perspective.
To the parents, not getting the firstborn means they got a different child in the adoption process. But for you, what does “not being the firstborn” even mean? Does my soul get incarnated into one of the other two children? Do I become someone/something else? Or do I never come into existence at all? None of these makes any sense unless you come up with some assumptions explaining the first-person perspective.
I’m certain this answer won’t convince you. It would be better to have a thought experiment with numbers. I will eagerly await your example. And maybe this time you can predict what my response would be by using the frequentist approach. :)
edit: Also, please don’t take this as criticism. The reason I think I am not doing a good job communicating is that I feel I have been giving the same explanation again and again, yet my answer to the thought experiments seems to always surprise you, when I thought you would have already guessed what my response would be. And sometimes (like the order of the questions in Peter/Bob), your understanding is simply not what I thought I wrote.
I have always thought repeating the experiment and counting long-run frequencies is the surest way to communicate. It takes all theoretical metaphysical aspects out of the question and just presents solid numbers. But that didn’t work very well here. Do you have any insights about how I should present my argument going forward? I need some advice.
Ok, that’s fine. I agree MWI is not proven. My point was only that it is the absolute self-location model. Those endorsing it propose the non-existence of probability, but still apply the mathematics of likelihood based on an observer’s uncertainty. Forgive me for stumbling onto the implications of arguments you made elsewhere. I have read much of what you’ve written over time.
I especially agree that perspective disagreement can happen. That’s what makes me a Halfer. Self-location is at the heart of this, but I would say it is not because credences are denied. I would say disagreement arises when sampling spaces are different and lead to conflicting first-person information that can’t be shared. I would also say that, whenever you don’t know which pre-existing time or identity applies to you, assigning subjective likelihood has as much meaning and legitimacy as it does for an unknown random event. I submit that it’s precisely because you do have credences based on uncertainty about self-location that perspective disagreement can happen.
You can also have a situation where a reality might be interpreted subjectively both as random and self-location. Consider a version of Sleeping Beauty where we combine uncertainty of being the original and clone with uncertainty about what day it is.
Beauty is put to sleep and informed of the following. A coin will be (or has already been) flipped. If the coin lands Heads, her original self will be woken on Monday and that is all. If the coin lands Tails, she will be cloned; if that happens, it will be randomly decided whether the original wakes on Monday and the clone on Tuesday, or the other way round.
Is it valid here for Beauty to assign probability to the proposition “Today is Monday” or “Today is Tuesday”? I’m guessing you will agree that, in this case, it is. If the coin landed Heads, being woken on Monday was certain and so was being the original. If it landed Tails, being woken on Monday or Tuesday was a random event separate from whichever version of herself she happens to be. Therefore she should assign 3⁄4 that today is Monday and 1⁄4 that today is Tuesday. We can also agree as halfers that the coin flip is 1⁄2 but, once she learns it’s Monday, she would update to 2⁄3 Heads and 1⁄3 Tails. However, if instead of being told it’s Monday, she’s told that she’s the original, then double halfing kicks in for you.
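For what it’s worth, these halfer numbers can be checked with a quick frequency sketch. The modelling assumption here (a halfer one) is that each run tracks a single first-person lineage: her day is certain under Heads and a fair random draw under Tails:

```python
import random

def run_trials(n, seed=0):
    # Heads: the original wakes on Monday, and that is all.
    # Tails: Beauty is cloned, and a random draw decides whether this
    # lineage wakes on Monday or on Tuesday.
    rng = random.Random(seed)
    monday = heads_and_monday = 0
    for _ in range(n):
        heads = rng.random() < 0.5
        day = "Monday" if heads else rng.choice(["Monday", "Tuesday"])
        if day == "Monday":
            monday += 1
            heads_and_monday += heads
    # Frequencies of "today is Monday" and of "Heads given Monday".
    return monday / n, heads_and_monday / monday
```

With a large n the two returned frequencies come out near 3⁄4 (today is Monday) and 2⁄3 (Heads once she learns it’s Monday), matching the credences above.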
From your PBR position, the day she’s woken does have independent probability in this example, but her status as original or clone is a matter of self-location. I question whether there’s any significant difference between the two kinds of determination. Also, in this example, the day she’s woken can be regarded as both externally random and a case of self-location. Whichever order the original and clone wake up in if the coin landed Tails, there are still two versions of Beauty with identical memories of Sunday; one of these wakes on Monday, the other on Tuesday. If told that, in the event of Tails, the two awakenings were prearranged to correspond to her original and clone status instead of being randomly assigned, the number of awakenings is the same and nothing is altered in Beauty’s knowledge of what day it is. I submit that in both scenarios, the validity of a credence about the day should be the same.
But my explanation for perspective disagreement is based on the primitive nature of the first-person perspective, i.e. it cannot be explained and is therefore incommunicable. If we say there is A GOOD WAY to understand and explain it, and that we must assign self-locating probabilities this way, then why don’t we explain our perspectives to each other as such, so we can have the exact same information and eliminate the disagreement?
If we say the question has different sample spaces for different people – which is shown by the fact that repeating the experiment from their respective perspectives gives different relative frequencies – then why say that, when there is no relative frequency from a perspective, there is still a valid probability? That is not self-consistent.
To my knowledge, halfers have not provided a satisfactory explanation of perspective disagreement, even though Katja Grace and John Pittard pointed it out quite some time ago. And if halfers want to use my explanation for the disagreement while at the same time rejecting my premise of primitive perspectives, then they are just piling assumptions on assumptions to preserve self-locating probability. To me, that comes from our natural dislike of saying “I don’t know”, even when there is no way to think about it.
And what do halfers get by preserving self-locating probability? Nothing but paradoxes. Either we say there is a special rule of updating which keeps the double-halving instinct, or we update to 1⁄3. The former has been quite conclusively countered by Michael Titelbaum: as long as you assign a non-zero probability to “today is Tuesday”, it will result in paradoxes. The latter has to deal with Adam Elga’s question from the paper which jump-started the whole debate: after learning it is Monday, I will put a coin into your hand. You will toss it. The result determines whether you will be woken again tomorrow with a memory wipe. Are you really comfortable saying “I believe this is a fair coin and the probability it will land Heads is 1/3”?
I personally think that if one wants to reject PBR and embrace self-locating probability, the better choice would be SIA, thus becoming a thirder. It still has serious problems, but not so glaring.
What you said about my position in the thought experiment is correct. And I still think the difference between self-locating and random/unknown processes is significant (which can be shown by whether relative frequencies exist). Regarding this part: “the two awakenings were prearranged to correspond to her original and clone status instead of being randomly assigned” – if that means it was decided by some rule that I do not know, then the probability is still valid. If I am told what the rule is, e.g. original=Monday and clone=Tuesday, then there is no longer a valid probability.
Hi Dadarren. I haven’t forgotten our discussion and wanted to offer further food for thought. It might be helpful to explore definitions. As I see it, there are three kinds of reality about which someone can have knowledge or ignorance.
Contingent – an event in the world that is true or false based on whether it did or did not happen.
Analytic – a mathematical statement or expression that is true or false a priori.
Self-location – an identity or experience in space-time that is true or false at a given moment for an observer.
I’d like to ask two questions that may be relevant.
a) When it comes to mathematical propositions, are credences valid? For example, if I ask whether the tenth digit of Pi is between 1-5 or 6-0, and you don’t know, is it valid for you to use a principle of indifference and assign a personal credence of 1/2?
b) Suppose you’re told that a clone of you will be created while you’re asleep. A coin will be flipped. If it lands Heads, the clone will be destroyed and the original version of you will be woken. If it lands Tails, the original will be destroyed and the clone will be woken. Finding yourself awake in this scenario, is it valid to assign a 1⁄2 probability that you’re either the original or clone?
I would say that both these are valid and normal Bayesian conditioning applies. The answer to b) reflects both identity and a contingent event, the coin flip. For a), it would be easy to construct a probability puzzle with updatable credences about outcomes determined by mathematical propositions.
However I’m curious what your view is, before I dive further in.
For a, my opinion is while objectively there is no probability for the value of a specific digit of Pi, we can rightly say there is an attached probability in a specific context.
For example, it is reasonable to ask why I am focusing on the tenth digit of Pi specifically. Maybe I just happen to have memorized up to the ninth digit, and I am thinking about the immediate next one. Or maybe I just arbitrarily chose the number 10. Anyway, there is a process leading to the focus on that particular digit. If that process does not contain any information about what that value is, then a principle of indifference is warranted. From a frequentist approach, we can think of repeating such processes and checking the selected digit, which gives a long-run relative frequency.
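That frequentist reading of the selection process can be sketched as follows (hardcoding the first 50 decimal digits of pi for illustration, rather than computing them):

```python
import random

# First 50 decimal digits of pi, hardcoded for illustration.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def frequency_in_1_to_5(trials, seed=0):
    # Repeat the selection process: pick a digit position at random
    # (the process carries no information about the digit's value),
    # then check whether that digit falls in 1-5.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        digit = int(rng.choice(PI_DIGITS))
        if 1 <= digit <= 5:
            hits += 1
    return hits / trials
```

Over this small window the long-run frequency converges to the digit mix of the window itself (27 of these 50 digits lie in 1-5); over a wide enough range of positions, pi’s digits are empirically close to uniform, which is what makes the indifference credence of 1/2 reasonable.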
For b, the probability is 1⁄2. It is valid because it is not a self-locating probability. Self-locating probability is about what the first-person perspective is in a given world, or the centered position of oneself within a possible world. But problem b is about different possible worlds. Another hint is that the problem can easily be described without employing the subject’s first-person perspective: what is the probability that the awakened person is the clone? Compare that to the self-locating probabilities we have been discussing: there are multiple agents, and the one in question is specified by the first-person “I”, which must be comprehended from the agent’s perspective.
From a frequentist approach, that experiment can be repeated many times and the long-run frequency would be 1⁄2. Both from the first-person perspective of the experiment subject, and from the perspective of any non-participating observers.
Well, in that case, it narrows down what we agree about. Mathematical propositions aren’t events that happen. However, someone who doesn’t know a specific digit of Pi would assign likelihood to its value with the same rules of probability as they would to an event they don’t know about. I define credence merely as someone’s rational estimate of what’s likely to be true, based on knowledge or ignorance. Credence has no reason to discriminate between the three types of reality I talked about, much less get invalidated.
I would also highlight that almost all external outcomes in the macro world, whether known or unknown, are already determined, as opposed to being truly random. In that sense, an unknown coin flip outcome is just as certain as an unknown mathematical proposition. In the case of Sleeping Beauty being told it’s Monday and that a coin will be flipped tonight, she is arguably already in a Heads world or a Tails world. It’s just that no-one knows which way the coin will land. If so, Lewis’s version of halfing is not as outlandish as it appeared. Beauty’s 2⁄3 update on Monday that the coin will land Heads is not actually an update on a future random event. It is an update on a reality that already exists but is unknown. From Beauty’s perspective, if she’s in a Heads world, she is certain it is Monday. If the world is Tails, she doesn’t have that certainty. Therefore an increased likelihood of Heads, once she learns it’s Monday, is reasonable – assuming that self-locations allow credences. I submit that, since she was previously not certain that the day she found herself awake on was Monday, a non-zero credence that this day was Tuesday legitimately existed before being eliminated.
Below is a version of Sleeping Beauty that mixes the three types of reality I described – contingent, analytic and self-location.
On Sunday night, Beauty is put to sleep and cloned. A coin is flipped. If it lands Heads, the original is woken on Monday and questioned, while the clone stays asleep. If it lands Tails, the clone is woken on Monday and questioned, while the original stays asleep.
The rest of the protocol concerns the Beauty that was not woken, and is determined by the tenth digit of Pi. If it’s between 1-5, the other Beauty is never woken and destroyed. If it’s between 6-0, the other Beauty is woken on Tuesday and questioned.
For any Beauty finding herself awake, do any of the following questions have valid answers?
What is the likelihood that the tenth digit of Pi is 1-5 or 6-0?
What is the likelihood that the coin landed Heads or Tails?
What is the likelihood that today is Monday or Tuesday?
What is the likelihood that she is the original or clone?
You’ll be unsurprised that I think all these credences are valid. In this example, any credence about her identity or the day happens to be tied in with credence about the other realities.
You’ll also notice how tempting it is for Beauty to apply thirder reasoning to the tenth digit of Pi – i.e. it’s 1⁄3 that it’s 1-5 and 2⁄3 that it’s 6-0. The plausible thirder argument is that, whatever identity the waking Beauty turns out to have, either original or clone, it was only 50⁄50 that this version would be woken if the Pi digit was 1-5, whereas it was certain this version would be woken if the Pi digit was 6-0. However, I would say that, uniquely from her perspective, her identity is not relevant to her continued consciousness or to the likelihood of the tenth digit of Pi. At least one awakening with memories of Sunday was guaranteed. Her status as original or clone doesn’t change her prior certainty of this. All that matters is that one iteration of her consciousness woke up if the digit was 1-5, while two iterations of her consciousness woke up if the digit was 6-0. In either case there is a guaranteed awakening with continuity from Sunday, and no information to indicate whether there are one or two awakenings. This is why I would remain a halfer.
Each question can be asked conditionally, with Beauty being told the answer to one or more of the others. In particular, if she’s told it’s Monday, the likelihood of Pi’s tenth digit being 1-5 must surely increase, whether she’s a thirder or halfer. Her reasoning is that if the tenth digit of Pi was 1-5, whatever body she has was certain to wake up on Monday, regardless of the coin. Whereas if the tenth digit of Pi was 6-0, the body she has could have woken on either day, determined by the coin. It would be hard to argue otherwise, since the day she wakes up is a contingent event, not just reflecting a self-location.
My conclusion is that rules of credence and assignments of probability are applicable in all cases where there is uncertainty about what’s true from a first person perspective, regardless of the nature of the reality. This includes self-locations. Self-locations can give rise to different sampling and perspective disagreement between interacting parties, in situations where one party might have more self-locating experiences than the other.
There are quite a few points here I disagree with. Allow me to explain.
As I said in the previous reply, a mathematical statement by itself doesn’t have a probability of being right or wrong. It is the process under which someone makes or evaluates said statement that can have a probability attached to it. Maybe the experimenter picked a random number from 1 to 10000 and then checked that digit of pi to determine whether to destroy or wake the copy in question, and he picked ten in this case. This process/circumstance enables us to assign a probability. Whereas in self-locating probabilities, there is no process explaining where the “I” comes from.
Also, just because macrophysical objects do not exhibit quantum phenomena such as randomness does not mean the macro world is deterministic. So I would not say that a Lewisian halfer predicting a probability of Heads of 2⁄3 for a fair coin yet to be tossed is metaphysically problem-free. Furthermore, even if you bite the bullet here, there are problems of probability pumping and retrocausation. I will attach the thought experiment later.
Before going further, I wish to give PBR’s answers to the thought experiment you raised. The probability that I am the clone or the original is invalid, as it is a self-locating probability. The probability that today is Monday is 2⁄3. It is valid because both the clone and the original have the same value; there is no need to explain “which person I am”. The probability that the chosen digit of pi falls between 6-0 is 2⁄3. It is valid for the same reason. And the probability that the coin landed Heads given “I” am awake is invalid, i.e. we cannot update based on the information that “I” am awake, as the value depends on whether “I” am the clone or the original.
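These answers can be illustrated with a frequency sketch from a fixed individual’s perspective (an assumption-free choice here, since the numbers come out the same for both; the pi-digit check is modeled as a fair random draw, standing in for the random choice of which digit to examine):

```python
import random

def pbr_frequencies(trials, perspective, seed=0):
    # Track one fixed individual ("original" or "clone") across repeated
    # runs, tallying frequencies over the runs in which they are awake.
    rng = random.Random(seed)
    awake = monday = high_digit = heads_when_awake = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        digit_low = rng.random() < 0.5          # True: chosen digit in 1-5
        monday_person = "original" if heads else "clone"
        if perspective == monday_person:
            day = "Monday"                      # woken Monday by the coin
        elif not digit_low:
            day = "Tuesday"                     # the other one wakes Tuesday
        else:
            day = None                          # destroyed, never woken
        if day is not None:
            awake += 1
            monday += day == "Monday"
            high_digit += not digit_low
            heads_when_awake += heads
    return monday / awake, high_digit / awake, heads_when_awake / awake
```

Both perspectives give about 2⁄3 for “today is Monday” and 2⁄3 for the digit being 6-0, but the Heads frequency among awakenings comes out near 2⁄3 from the original’s ledger and near 1⁄3 from the clone’s – which is exactly why PBR calls that last probability invalid: its value depends on which person “I” am.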
I don’t think thirders would have any trouble giving a probability of 2⁄3 to the tenth digit of pi. All they have to do is treat “I” as a random sample between the original and clone and then conduct the analysis from a god’s-eye perspective: randomly select a subject among the two, then find out she is awakened in the experiment, meaning the chosen digit is twice as likely to be 6-0 (2/3). If the chosen subject does not wake up in the experiment, then the chosen digit must be 1-5 (100%). They can also have a frequentist model as such, no problem.
The thirders think reasoning objectively from a god’s eye view is the only correct way. I think reasoning from any perspective, like that of an experiment subject’s first-person perspective, is just as valid. And from the subject’s first-person perspective the original sleeping beauty problem has a probability of 1⁄2, verifiable with a frequentist model. I thought you agreed with this.
However, if you endorse traditional Lewisian halfer reasoning or SSA, which seems clear to me given your latest thought experiment with cloning and waking, then you can’t agree with me on that. Furthermore, you cannot use the subject’s first-person perspective, or the resulting perspective disagreement, as a supporting argument for halving.
Consider this experiment. You will be cloned tonight in your sleep with memories preserved. Then a coin will be tossed; if Heads, a randomly selected copy will be woken up in the morning, and the other one will sleep through the experiment. If Tails, both will be woken up. So after waking up the next day, what is the probability of Heads?
Thirders would consider this problem logically identical to the original Sleeping Beauty. I would guess that, as a Lewisian Halfer, you would too, and still give the probability as 1⁄2. But for PBR, this problem is different. The probability of Heads would rightfully be 1⁄3 in this case, as “I am awake in this experiment” is no longer guaranteed; it gives legitimate new information. And if I take part in such experiments repeatedly and count on my personal experience, the relative fraction of Heads among experiments where I got woken up would approach 1⁄3. So here you have to make a choice. If you stick with Lewisian Halfer reasoning, then you have to give up on thinking from the subject’s first-person perspective, and therefore cannot use it as a supporting argument for halving, nor as an explanation for the peculiar perspective disagreement. If you still think thinking from the subject’s first-person perspective is valid, then you have to give up on Lewisian reasoning, and cannot update the probability of Heads to 2⁄3 after learning you are the chosen one who would be woken up anyway. To me that is a really easy choice.
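The first-person long-run frequency claimed here can be sketched directly (tracking one fixed copy as “me” across repetitions):

```python
import random

def first_person_heads_frequency(trials, seed=0):
    # Follow one fixed copy ("me") through repeated runs.
    # Heads: a randomly selected copy wakes, so "I" wake with chance 1/2.
    # Tails: both copies wake, so "I" wake for sure.
    # Count Heads only among the runs in which "I" woke up.
    rng = random.Random(seed)
    awake_runs = heads_runs = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        i_wake = (not heads) or rng.random() < 0.5
        if i_wake:
            awake_runs += 1
            heads_runs += heads
    return heads_runs / awake_runs
```

Unlike the original Sleeping Beauty, “I am awake in this experiment” is not guaranteed, so the frequency of Heads among my awakenings comes out near 1⁄3.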
If you think predicting Heads with a probability of 2⁄3 is correct, there are other issues. For example, you can get a bunch of memory-erasing drugs, and then you will have supernatural predicting power. Before seeing the toss result, you can form the following intention: if it comes up Tails, I will take the drugs N times, giving N near-identical awakenings and revealings. Then by Lewisian Halfer reasoning (or SSA), if Heads, I would have been experiencing this revealing for sure; if Tails, the probability of me currently experiencing this particular awakening would be 1/N. Therefore the probability of Heads is N/(N+1), and Tails is squeezed down to 1/(N+1). And you can make that number as small or as large as you want. All you have to do is take the appropriate number of drugs after a certain result is revealed. By doing so you can retroactively control the coin toss result.
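For concreteness, the SSA-style update in this pumping scheme can be written out explicitly (a sketch; note the posterior for Heads works out to N/(N+1), so Tails is driven to 1/(N+1) as N grows):

```python
from fractions import Fraction

def ssa_posterior_heads(n):
    # Prior 1/2 each. Likelihood of "I am experiencing this particular
    # revealing": 1 under Heads (only one revealing exists), 1/n under
    # Tails (this is one of n near-identical revealings).
    prior = Fraction(1, 2)
    like_heads = Fraction(1)
    like_tails = Fraction(1, n)
    num = prior * like_heads
    return num / (num + prior * like_tails)   # = n / (n + 1)
```

For example, `ssa_posterior_heads(9)` gives 9⁄10: nine memory wipes planned under Tails push the reasoner’s credence in Heads to 90%.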
There’s a general consensus that, although quantum theory has changed our understanding of reality, Newtonian physics remains a reliable short-term guide to the macro world. In principle, the vast majority of macro events that are just about to happen are thought to be 99.9999% inevitable, as opposed to 100% as Newton thought. From that I deduce that if a coin is shortly to be flipped, the outcome is unknown but as good as determined, as makes no odds. Whereas if a coin is flipped farther into the future from the point of prediction, the outcome is proportionately more likely to be undetermined.
I’m willing to concede debate about this. What I do recognise is that Beauty’s answer of 2⁄3 Heads, after she learns it’s Monday, depends on it being an already certain but unknown outcome. Whereas if the equivalent of a quantum coin were to be flipped on Monday night, this makes a difference. In that case, awaking on Monday morning, Beauty would not yet be in a Heads world or a Tails world. Her answer would certainly be 1⁄2 after she learns it’s Monday. What it would be before she learns it’s Monday would depend on what quantum theory model is used. I can consider this another time.
Perspective disagreement between interacting parties, as a result of someone having more than one possible self-locating identity, is something I can certainly see a reason for. Invalidating someone’s likelihood of what that identity might be, I can’t find a reason for. I’ve looked hard.
I’d like to explore your simplified experiment. First it’s important to distinguish precisely what happens with Heads to the version of me that is not woken during the experiment. If the other me is woken after the experiment and told this fact, then there’s no controversy. On finding myself awake in the experiment, my answer is definitely 1⁄3 for Heads and 2⁄3 for Tails. Furthermore, it should make no difference which version might have woken inside the experiment and which outside, assuming the coin landed Heads. Nor does it matter if that potential selection was made before the flip and I’m subsequently told what the choice was. I’d argue that this information about my possible identity is irrelevant to my credence for the coin.
This takes us to a controversy at the heart of anthropic debate. In the event of Heads, if the version of me that is not woken in the experiment never wakes up at all, it becomes like standard Sleeping Beauty and the answer is 1⁄2 for Heads or Tails. This is because all awakenings will now be inside the experiment and at least one awakening is guaranteed. Regardless of identity, my mind was certain to continue, so long as at least one version woke up. Whether it’s the original or clone, either share the same memories and there is no qualitative difference for the guaranteed continuity of my consciousness. All that matters is that there is no possible experience outside the experiment.
Even if it is an uncertain event as to which body woke up, that uncertainty doesn’t apply to my mind. This was guaranteed to carry on in whichever body it found itself. For the unconscious body that never wakes up, no mind is present. If that body was the original, its former mind now continues in the clone body, complete with memories. There is no qualitative difference between continuing in my original body and continuing as the clone. In terms of actual consciousness, my primitive self has no greater or lesser claim to identical memories of my past because of the body I have. For some, this will be controversial.
It’s also irrelevant whether the potential sole awakening of original or clone was decided before the flip, or whether I’m told what the choice was. Would you actually claim it’s 1⁄3 for Heads provided that, in the event of that outcome, you don’t know whether you woke as the original or clone, but that, if you learn what the potential Heads selection was – regardless of whether this turns out to be original or clone – Heads goes up to 1/2? We’ve touched on this before. It wouldn’t be a perspective disagreement with a third party. It would be a perspective disagreement with yourself.
If I am understanding correctly, you are saying that if the Sleeping Beauty problem does not use a coin toss, but measures the spin of an electron instead, then the answer would be different. For the coin’s case, you will give the probability of Heads (yet to be tossed) as 2⁄3 after learning it is Monday. But for the spin’s case, or a quantum coin, the probability must be 1⁄2 after learning it is Monday, as it is a quantum event yet to happen.
That seems very ad hoc to me. And I think differentiating “true quantum randomness” from something “99.99999% inevitable” in probability theory is a huge can of worms. But anyway, my question is: if the Sleeping Beauty problem uses a quantum coin, what is the probability of Heads when you wake up, before being told what day it is? And what’s your probability after learning “it is Monday now”?
You said the answer depends on the quantum model used. I find that difficult to understand. Quantum models give different interpretations to make sense of the observed probability. The probability part is just experimental observation, not changed by which interpretation one prefers. But anyway, I am interested in your answer: how can it both keep giving 1⁄2 to a quantum coin yet to be tossed and obey Bayesian probability when learning it is Monday?
As for the cloning and waking experiment, you said the answer depends on what happens after the experiment, i.e. whether or not there will be further awakenings: if there are, thirding; if not, halving. Again, very ad hoc. If the awakening depends on a second coin tossed after the experiment ends, what then? How should an independent event in the future retroactively affect the probability of the first coin toss? What if both coins are quantum? How can you keep your answer Bayesian?
Just to be clear, my answer to cloning and waking is P(H)=1/3 when woken up. The probability that I am the randomly chosen one, who would wake up regardless of the coin toss, is 2⁄3. The probability of Heads after learning I am the chosen one is 1⁄2. The answer does not depend on what happens after the experiment. And in all this reasoning, I do not know, and do not need to think about, whether I am the original or the clone.
At this point, I find I am focusing more on arguing against SSA rather than explaining PBR. And the discussion is steering away from concrete thought experiments with numbers to metaphysical arguments.
I haven’t followed your arguments all the way here but I saw the comment
If I am understanding correctly, you are saying that if the Sleeping Beauty problem does not use a coin toss, but measures the spin of an electron instead, then the answer would be different.
and would just jump in and say that others have made similar arguments. The one written example I’ve seen is this Master’s Thesis.
I’m not sure I’m convinced, but at least I buy that, depending on how the particular selection goes, there can be instances where the difference between probabilities as subjective credences and as densities of Everett branches has decision-theoretic implications.
The link points back to this post. But I also remember reading similar arguments from halfers before, that the answer changes depending on whether it is true quantum randomness, though I cannot remember the source.
But the problem remains the same: can Halfers keep the probability of a coin yet to be tossed at 1⁄2 and remain Bayesian? Michael Titelbaum showed it cannot be done as long as the probability of “Today is Tuesday” is valid and non-zero. Suppose a Lewisian Halfer argues that, unlike true quantum randomness, a coin yet to be tossed can have a probability differing from half, so that they can endorse self-locating probability and remain Bayesian. Then the question can simply be changed to use quantum measurements (or a quantum coin, for ease of expression), and Lewisian Halfers face the counter-argument again: either the probability is 1⁄2 at waking up and remains 1⁄2 after learning it is Monday, and is therefore non-Bayesian; or the probability is indeed 1⁄3 and updates to 1⁄2 after learning it is Monday, and is therefore non-halving. The latter effectively says SSA is correct only for non-quantum events and SIA is correct only for quantum events. But differentiating between quantum and non-quantum events is no easy job. A detailed analysis of a simple coin toss can reveal many independent physical causes, which can very well depend on quantum randomness. What shall we do in these cases? It is a very assumption-heavy defence of an initially simple Halfer answer.
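For concreteness, the second horn of that dilemma can be checked mechanically. Here is a small sketch (my own illustration, assuming the standard thirder sample space of three equally weighted centered possibilities) of how 1⁄3 at waking updates to 1⁄2 on learning it is Monday:

```python
from fractions import Fraction

# Thirder prior over centered possibilities at any awakening:
# Heads/Monday, Tails/Monday, Tails/Tuesday, each weighted 1/3.
prior = {
    ("Heads", "Mon"): Fraction(1, 3),
    ("Tails", "Mon"): Fraction(1, 3),
    ("Tails", "Tue"): Fraction(1, 3),
}

# Credence in Heads at waking, before learning the day:
p_heads_awake = sum(p for (coin, day), p in prior.items() if coin == "Heads")

# Learning "today is Monday" rules out Tails/Tuesday; renormalize the rest.
monday = {k: p for k, p in prior.items() if k[1] == "Mon"}
p_heads_monday = monday[("Heads", "Mon")] / sum(monday.values())

print(p_heads_awake)   # 1/3 at waking
print(p_heads_monday)  # 1/2 after learning it is Monday
```

The point of the counter-argument is that a Halfer who keeps 1⁄2 at waking cannot reproduce this conditioning step while keeping “Today is Tuesday” as a valid, non-zero probability.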
Edit: Just gave the linked thesis a quick read. The writer seems partial to MWI and thinks it gives a more logical explanation of anthropic questions. He is not keen on treating probability/chance as one possible world being randomly actualized, but considers all possible worlds to be real (many-worlds), so that the source of probability (or “the illusion of probability”, as the writer says) is which branch-world “I” am in. My problem with that is that the “I” in such statements is taken as intrinsically understood, i.e. it has no explanation. It does not give any justification for what the probability of “I am in a Heads world” is. For it to give a probability, an additional assumption about “among all the physically similar agents across the many-branched worlds, which one is I” is needed. And that circles back to anthropics. At the end of the day, it is still using anthropic assumptions to answer anthropic problems, just like SIA or SSA.
I have argued against MWI in anthropics in another post, if you are interested.
Interesting, Dadarren.
I sense that you’re close to being converted into a ‘pure halfer’, willing to assign probability to self-locations. Let me address what you said.
“Your latter example: after seeing Red/Blue, I will not say Heads’ probability is halved while Tails’ remains the same. I will say there is no way to update.”
I assume you mean that the probability of 1⁄2 for Heads or Tails – before and after she sees either colour – remains correct. We can agree on that. What matters is why it is true. You’re arguing that no updating is possible after she sees a colour. I invite you to examine this again.
The moment she sees – for example – blue, we can agree that Tails/Red and Heads/Red are definitely ruled out. The difference is that Tails/Red reflects a self-location whereas Heads/Red does not. You’re claiming that because Tails/Red reflects a self-location—i.e. the room colour isn’t a random event—no updating is allowed. But you can’t make the same claim about Heads/Red. With Heads, the room being painted red is a random event. For Beauty, the Heads/Red outcome had an unequivocal 1⁄4 chance of being encountered and it has just been eliminated by her seeing the colour blue. So how can Heads not be halved?
Suppose Beauty had her eyes closed with the same setup. Suppose, before she opens her eyes, she must be told straight away whether she’s in a Heads/Red awakening. It’s confirmed that she’s not. You’d agree that the probability of Heads is now halved while for Tails it isn’t. No problem updating. It’s 1⁄3 Heads, 2⁄3 Tails. Next, suppose she must be told whether she’s in a Tails/Tuesday awakening. It’s confirmed that she’s not. By your reasoning, this would not reduce the probability of Tails. Moreover, Heads has already been halved and there is no reason to change it back. Therefore, if self-locations are un-updatable, her probability must still be 1⁄3 for Heads and 2⁄3 for Tails. What’s more, the information she just received is the same as she would have got by opening her eyes and seeing blue.
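To lay the arithmetic out explicitly, here is a sketch of the staged updating I’m describing. I’m assuming, for concreteness, that with Tails the Monday room is blue and the Tuesday room is red, and I’m giving each Tails self-location weight 1⁄4 — exactly the move you reject:

```python
from fractions import Fraction

# Assumed centered outcomes, each with prior 1/4 (the updating-halver view;
# under Tails, Monday's room is blue and Tuesday's is red in this setup):
prior = {
    ("Heads", "Red"): Fraction(1, 4),
    ("Heads", "Blue"): Fraction(1, 4),
    ("Tails", "Mon-Blue"): Fraction(1, 4),
    ("Tails", "Tue-Red"): Fraction(1, 4),
}

def eliminate(dist, ruled_out):
    """Remove one centered outcome and renormalize the rest."""
    kept = {k: p for k, p in dist.items() if k != ruled_out}
    total = sum(kept.values())
    return {k: p / total for k, p in kept.items()}

def p_coin(dist, side):
    return sum(p for (coin, _), p in dist.items() if coin == side)

step1 = eliminate(prior, ("Heads", "Red"))       # told: not Heads/Red
print(p_coin(step1, "Heads"))                    # 1/3

step2 = eliminate(step1, ("Tails", "Tue-Red"))   # told: not Tails/Tuesday
print(p_coin(step2, "Heads"))                    # back to 1/2
```

The second elimination is exactly the information she would have got by opening her eyes and seeing blue.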
The only logical reason the coin outcome is still 1⁄2 after she sees either colour is that Heads and Tails both get halved. This means ignorance or information about self-location status, such as Monday/Tuesday or Original/Clone, is subject to the normal rules of probability and conditionalisation.
Hi Simon, before anything let me say I like this discussion with you, using concrete examples. I find it helps to pinpoint the contention, making thinking easier.
I think our difference is due to this here: “I assume you mean that the probability of 1⁄2 for Heads or Tails – before and after she sees either colour – remains correct.” Yes and No, but most importantly not in the way you have in mind.
Just to quickly reiterate: I argue the first-person perspective is a primitive axiomatic fact. There is no way to explain or reason about it. It just is. Therefore no probability. Everything follows from there.
It means everything using self-locating probability is invalid, and that includes things like P(Heads|Red). So there is no “probability of Heads given I see a red room”. Red cannot be conditioned on, because it involves the probabilities of “Now is Tuesday” vs “Now is Monday”.
Let me follow your steps. If it is Heads, then seeing the colour is a random event; there is no problem at all, and halving the chance is OK. In the case of Tails, traditional updating would eliminate half the chance too, because it gives equal probability to “Now is Monday” vs “Now is Tuesday”. But because self-locating probability does not exist, there is no basis to split it evenly, or any other way for that matter. There is no valid way to split it at all, and that includes a 0–100 split.
So what I meant by “there is no way to update” is not that the correct value for Tails remains unchanged at 1⁄2. I meant there is no correct value, period. And you can’t renormalize it with Heads’ chance of 1⁄4 to get anything.
This is why I suggested using the repeatable example with long-run frequencies; it makes the problem clearer. If you follow the first-person perspective of a subject, then the long-run frequency of Heads is 1⁄2. However, the long-run frequency of Red or Blue would not converge to any particular value. And as you suggested, if you are always told whether it is Heads and Red, and you count only the iterations where you are told it is not, then the relative frequency of Heads would indeed approach 1⁄3. Yet if you are always told whether it is Tails and Red, and count only the iterations where you are told it is not, then there is still no particular long-run frequency for Tails. Overall there is no long-run frequency for Tails or Heads when you count only iterations of a particular colour.
Back to your example: what is the probability of Heads after waking up? It is 1⁄2. That is the best you can give. But it is not the probability of Heads given Red. Using the colour of the room in probability analysis won’t yield any valid result, because, as the experiment is set up, the colour of the room involves self-locating probability: it involves explaining why the first-person perspective is this particular observer-moment.
OK, let’s see if we can pin this down! Either Beauty learns something relevant to the probability of the coin flip, or she doesn’t. We can agree on this, even if you think updating can’t happen with self-location.
Let’s go back to a straightforward version of the original problem, similar to one you came up with. If Heads there is one awakening; if Tails there are two. If Heads, it will be decided randomly whether she wakes on Monday or Tuesday. If Tails, she will be woken on both days with amnesia in between. She is told in advance that, regardless of the coin flip, Bob and Peter will be in the room. Bob will be awake on Monday, while Peter is asleep. Peter will be awake on Tuesday, while Bob is asleep. Whichever of the men wakes up, it will be two minutes before Beauty (if she’s woken). All this is known in advance. Neither man knows the coin result, nor will they undergo amnesia. Beauty has not met either of them, so although she knows the protocol, she won’t know the name of who’s awake with her, or the day, unless he reveals it.
Bob and Peter’s perspective when they wake up is not controversial. Each is guaranteed to find the other guy asleep. In the first two minutes, each will find Beauty asleep. During that time, both men’s probability is 1⁄2 for Heads and Tails. If Beauty is still asleep after two minutes, it’s definitely Heads. If she wakes up, it’s 1⁄3 Heads and 2⁄3 Tails.
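The men’s update is ordinary conditioning; as a check, here is a sketch of the numbers, nothing more:

```python
from fractions import Fraction

# Bob's (or Peter's) view on his own waking day, before the two minutes are up:
p_heads = Fraction(1, 2)

# If Heads, Beauty's single awakening lands on his day with chance 1/2
# (the day was chosen randomly); if Tails, she is woken on both days,
# so she wakes for certain.
p_wake_given_heads = Fraction(1, 2)
p_wake_given_tails = Fraction(1)

joint_heads = p_heads * p_wake_given_heads
joint_tails = (1 - p_heads) * p_wake_given_tails
p_heads_given_wake = joint_heads / (joint_heads + joint_tails)

print(p_heads_given_wake)  # 1/3
```

And if Beauty stays asleep, Tails is impossible, so Heads is certain.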
A Thirder believes that Beauty shares the same credence as Bob or Peter when she wakes. As a Halfer, I endorse perspective disagreement. Unlike the guys, Beauty was guaranteed to encounter someone awake. Her credence therefore remains 1⁄2 for Heads and Tails, regardless of whom she encounters.
What happens when the man awake reveals his name, say Bob? This reveals to Beauty that today is Monday. I would say that the probability of the coin remains unchanged from whatever it was before she got this information. For a Thirder this is 1⁄3 Heads, 2⁄3 Tails. For a Halfer it is still 1⁄2. I submit that the reason the probability remains unchanged is that something in both Heads and Tails was eliminated with parity. But suppose she got the information in stages.
She first asks the experimenter: is it true that the coin landed Tails and I’m talking to Peter? She’s told that this is not true. I regard this as a legitimate update that halves the probability of Tails, whereas you don’t. Such an update would make her credence 2⁄3 Heads and 1⁄3 Tails. You would claim that no update is possible because what’s been ruled out is the self-location Tuesday/Tails. For you, ruling out Tuesday/Tails says nothing new about the coin. You argue that, whether the coin landed Heads or Tails, there is only one ‘me’ for Beauty and a guaranteed awakening applied to her. So the probability of Heads or Tails must be 1⁄2 before and after Tuesday/Tails is ruled out.
We might disagree whether ruling out the self-location Tuesday/Tails permitted an update or whether her credence must remain 1⁄2. But we can agree that, if she gets further information about a random event that had prior probability, she must update in the normal way. Even if ruling out a self-location told her nothing about the coin probability, it can’t prevent her from updating if she does get this information.
So now she asks the experimenter: is it true that the coin landed Heads and I’m talking to Peter? She’s told that this is not true. This tells her that Bob is the one she’s interacting with, plus she knows it’s Monday. What’s more, this is not a self-location that’s been ruled out like before. The prior possibility of the coin landing Heads combined with the prior possibility of her encountering Peter was a random sequence that has just been eliminated. It definitely requires an update. If her credence immediately before was 1⁄2 for Heads, it must be 1⁄3 now. If her credence was 2⁄3 for Heads (which I think was correct), then it is 1⁄2 now. Which is it?
That brings us back to Beauty’s position before the man says his name. Her credence for the coin is 1⁄2 before she learns who she’s with. Learning his identity rules out a possibility in both coin outcomes, as described above. The order in which she got the information makes no difference to what she now believes. The fact remains that a random event with Heads, and a self-location with Tails, were both ruled out. It’s the parity in updating that makes her credence still 1⁄2, whoever turns out to be awake with her.
Let me lay out the difference between my argument and yours, following your example.
After learning the name/day of the week, the halfer’s probability is still 1⁄2. You said that is because something in both Heads and Tails was eliminated with parity. My argument is different: I would say there is no way to update the probability based on the name, because doing so would involve using self-locating probability.
Let’s break it down in steps as you suggested.
Suppose I ask, “Is it true that the coin landed Tails and I’m talking to Peter?” and get a negative answer. You say this eliminates half the probability of Tails. I say there is no way to say how much of Tails’ probability is eliminated (because it involves self-locating probability), so we cannot update the probability on this information. You say that, considering the answer, Tails is reduced to 1⁄4. I say considering the answer is wrong: if you consider it, you get nothing meaningful, no value.
Suppose I ask, “Is it true that the coin landed Heads and I’m talking to Peter?” and get a negative answer. You say this eliminates half the probability of Heads, making it 1⁄4. I agree with this.
Seeing Bob would effectively be the same as getting the two negative answers together. How do they combine? You say Heads and Tails each lose half their probability (both 1⁄4 now), so after renormalizing, the probability of Heads remains unchanged at 1⁄2. I say that since one of the steps is invalid, the combined calculation is invalid too. There is no probability conditioned on seeing Bob (again, because it involves self-locating probability).
I suppose your next question would be: if the first question is invalid for updating, wouldn’t I just update based on the second question alone, which would give Heads a probability of 1⁄3?
That is correct, as long as I actually asked the question and got the answer. Like I said before, the long-run frequency for these cases would converge on 1⁄3. But that is not how the example is set up: it only gives information about which person is awake. If I actually asked this question, I would get a positive or negative answer. But no matter which person I see, I could never get a positive result: even if I see Peter, there is still the possible case of a Tails first awakening (which does not have a valid probability), so no positive answer. Conversely, seeing Bob would mean the answer is negative, but it also eliminates the case of Tails/Peter (again, no valid probability). So the probability of Tails after seeing Bob and the probability of Tails after getting a negative answer are not the same thing.
That is also the case for non-anthropic probability questions. For example, a couple has two children and you want to give the probability that both of them are boys. Suppose you have a question in mind: “Is there a boy born on Sunday?” However, the couple is only willing to answer whether a boy was born on a weekend, meaning the question you have in mind will never get a positive answer. Anyway, the couple says “there is no boy born on a weekend”, so your question effectively gets a negative answer. But that does not mean the probability of two boys given no boy born on a weekend is the same as the probability given no boy born on Sunday. You have to account for the case of no boy born on Saturday as well. This is straightforward. The only difference is that in the anthropic example, the other part that needs to be combined has no valid value. I hope this helps to pin down the difference. :)
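The two-boys numbers can be checked by brute-force enumeration; a quick sketch (assuming every sex/birth-day combination is equally likely):

```python
from fractions import Fraction
from itertools import product

# Each child: sex x birth day, all 2*7 = 14 combinations equally likely;
# two independent children give 196 equally likely family types.
SEXES = ("boy", "girl")
DAYS = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")
WEEKEND = {"Sat", "Sun"}

families = list(product(product(SEXES, DAYS), repeat=2))

def prob(event):
    return Fraction(sum(1 for f in families if event(f)), len(families))

def both_boys(f):
    return all(sex == "boy" for sex, _ in f)

def no_boy_sunday(f):
    return not any(sex == "boy" and day == "Sun" for sex, day in f)

def no_boy_weekend(f):
    return not any(sex == "boy" and day in WEEKEND for sex, day in f)

p_sunday = prob(lambda f: both_boys(f) and no_boy_sunday(f)) / prob(no_boy_sunday)
p_weekend = prob(lambda f: both_boys(f) and no_boy_weekend(f)) / prob(no_boy_weekend)

print(p_sunday)   # 36/169: P(two boys | no boy born on Sunday)
print(p_weekend)  # 25/144: P(two boys | no boy born on a weekend)
```

The two conditional probabilities come out different (36⁄169 vs 25⁄144), which is the non-anthropic version of the point: the answer you got is not the answer to the question you had in mind.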
I doubt we’ll persuade each other :) As I understand it, in my example you’re saying that the moment a self-location is ruled out, any present and future updating is impossible, but the last known probability of the coin stands. So if Beauty rules out Heads/Peter and nothing else, she must update Heads from 1⁄2 to 1⁄3. If she subsequently rules out Tails/Peter, you say she can’t update, so she stays with the last known valid probability of 1⁄3. On the other hand, if she rules out Tails/Peter first, you say she can’t update, so it’s 1⁄2 for Heads. However, you also say no further updating is possible even if she then rules out Heads/Peter, so her credence remains 1⁄2, even though she ends up with identical information. That is strange, to say the least.
I’ll make the following argument. When it comes to probability or credence from a first-person perspective, what matters is knowledge or the lack of it. People can use that knowledge to judge what is likely to be true for them at that moment. Their ignorance doesn’t discriminate between unknown external events and unknown current realities, including self-locations. Likewise, their knowledge is not invalidated just because a first-person perspective might happen to conflict with a third-person perspective, or because the same credence may not be objectively verifiable in the frequentist sense. In either case, it’s their personal evidence for what’s true that gives them their credence, not what kind of truth it is. That credence, based on ignorance and knowledge, might or might not correspond to an actual random event. It might reflect a pre-existing reality, such as there being a 1⁄10 chance that the 9th digit of Pi is 6. Or it might reflect an unknown self-location, such as “today is Monday or Tuesday”, or “I’m the original or the clone”. Whatever they’re unsure about doesn’t change the validity of what they consider likely.
You could have exactly the same original Sleeping Beauty problem, but translated entirely to self-location without the need for a coin flip. Consider this version. Beauty enters the experiment on a Saturday and is put to sleep. She will be woken in both Week 1 and Week 2. A memory wipe will prevent her from knowing which week it is, but whenever she wakes, she will definitely be in one or the other. She also knows the precise protocol for each week. In Week 1, she will be woken on Sunday, questioned, put back to sleep without a memory wipe, woken on Monday and questioned. This completes Week 1. She will return the following Saturday and be put to sleep as before. She is now given a memory wipe of both her awakenings from Week 1. She is then woken on Sunday and questioned, but her last memory is of the previous Saturday when she entered the experiment. She doesn’t know whether this is the Sunday from Week 1 or Week 2. Next she is put to sleep without a memory wipe, woken on Monday and questioned. Her last memory is the Sunday just gone, but she still doesn’t know if it’s Week 1 or Week 2. Next she is put back to sleep and given a memory wipe, but only of today’s awakening. Finally, she is woken on Tuesday and questioned. Her last memory is the most recent Sunday. She still won’t know which week she’s in.
The questions asked of her are as follows. When she awakens for what seems to be the first time – always a Sunday – what is her credence for being in Week 1 or Week 2? When she awakens for what seems to be the second time – which might be a Monday or Tuesday – what is her credence for being in Week 1 or Week 2?
Essentially this is the same Sleeping Beauty problem. The fact that her uncertainty is about which week she’s in, rather than about a coin flip, doesn’t prevent her from assigning a credence/probability based on the evidence she has. On her Sunday awakening, she has equal evidence favouring Week 1 and Week 2, so it is valid for her to assign 1⁄2 to both. On her weekday awakenings, Halfers and Thirders will disagree whether it’s 1⁄2 or 1⁄3 that she’s in Week 1. If she’s told that today is Monday, they will disagree whether it’s 2⁄3 or 1⁄2 that she’s in Week 1.
We could add Bob to the experiment. Like Beauty, Bob enters the experiment on Saturday. His protocol is the same, except that he is kept asleep on the Sundays of both weeks. He is only woken on Monday of Week 1, then on Monday and Tuesday of Week 2. Each time he’s woken, his last memory is the Saturday he entered the experiment. He therefore disagrees with Beauty. From his point of view, it’s 1⁄3 that they’re in Week 1, whereas she says it’s 1⁄2. If told it’s Monday, for him it’s 1⁄2 that they’re in Week 1, and for her it’s 2⁄3.
This recreates perspective disagreement, but exclusively using self-location. You might be tempted to argue that neither Beauty nor Bob can ever assign any probability or likelihood as to which week they’re in. I say it’s legitimate for them to do so, and to disagree.
Let’s not dive into another example right away; something is amiss here. I never said anything about the order of getting the answers to “Is it Tails and Peter?” and “Is it Heads and Peter?” changing the probability. I said we cannot update based on the negative answer to “Is it Tails and Peter?” because that involves using self-locating probability. Whichever the order, we can nevertheless update the probability of Heads to 1⁄3 when we get the negative answer to “Is it Heads and Peter?”, because no self-locating probability is involved there. But 1⁄3 is the correct probability only if Beauty actually asked the question and got the negative response, i.e. there has to be a real question to update on. That does not mean Beauty would inevitably update P(Heads) to 1⁄3 no matter what.
Before Beauty opens her eyes, she could ask, “Is it Heads and Peter?” If she gets a positive answer, the probability of Heads is 1. If she gets a negative answer, the probability of Heads updates to 1⁄3. She could also ask “Is it Heads and Bob?”, and the result would be the same: positive answer, P(Heads)=1; negative answer, P(Heads)=1/3. So whichever of the two symmetrical questions she asks, she can only update her probability after getting the answer to it. I think we can agree on this.
The argument that my approach would always update P(Heads) to 1⁄3 no matter which person I see goes as follows: first, no real question is asked; just look at whether it is Peter or Bob. If I see Bob, then retroactively pose the question “Is it Heads and Peter?” and get a negative answer. If I see Peter, then retroactively pose the question “Is it Heads and Bob?” and get a negative answer. Playing the game like this guarantees a negative answer no matter what. But clearly, you get the negative answer because you are actively changing the question to look for it. We cannot update the probability this way.
My entire solution rests on a primitive axiom of reasoning: the first-person perspective. Recognizing it resolves the paradoxes in anthropics and more. I cannot argue why perspective is axiomatic, except that it intuitively appears to be right, i.e. “I naturally know I am this person, and there seems to be no explanation or reason behind it.” Accepting it overturns the Doomsday Argument (SSA) and the Presumptuous Philosopher (SIA); it means there is no way to think about self-locating probability, which is the reason for no update after learning it’s Monday; it explains why perspective disagreement in anthropics is correct; and it results in agreement between the Bayesian and frequentist interpretations in anthropics.
Because I regard it as primitive, if you disagree with it and argue there are good ways to reason about and explain the first-person perspective, and furthermore to assign probabilities to it, then I don’t really have a counter-argument, except that you would have to resolve the paradoxes your own way. For example, why do Bob and Beauty answer differently to the same question? My reason for perspective disagreement in anthropics is that the first-person perspective is unexplainable, so a counterparty cannot comprehend it. What is your reason? (Just to be clear, I do not think there is a probability for “it is Week 1 / Week 2”, for either Beauty or Bob.) Do you think the Doomsday Argument is right? What about Nick Bostrom’s simulation argument, or what the correct reference class for oneself is, etc.?
Conversely, I don’t think regarding the first-person perspective as not primitive, and subsequently stating there are valid ways to think about and assign self-locating probabilities, is a counter-argument against my approach either. The merit of the different camps should be judged on how well they resolve the paradoxes. So I like discussions involving concrete examples, to check whether my approach results in paradoxes of its own. I do not see any problem when it is applied to your thought experiments: the probability of Heads won’t change to 1⁄3 no matter which person/colour I see. I actually think that is very straightforward in my head, but I sense I am not doing a good job explaining it, despite my best effort to convince you :)
I’ve allowed some time to digest on this occasion. Let’s go with this example.
A clone of you is created when you’re asleep. Both of you are woken with identical memories. Under your pillow are two envelopes, call them A and B. You are told that inside Envelope A is the title ‘original’ or ‘copy’, reflecting your body’s status. Inside Envelope B is also one of those titles, but the selection was random and regardless of status. You are asked the likelihood that each envelope contains ‘original’ as the title.
I’m guessing you’d say you can’t assign any valid probability to the contents of Envelope A. However, you’d say it’s legitimate to assign a 1⁄2 probability that Envelope B contains ‘original’.
Is there a fundamental difference here, from your point of view? Admittedly, if Envelope A contains ‘original’, this reflects a pre-existing self-location that was previously known but became unknown while you were asleep, whereas if Envelope B contains ‘original’, this reflects an independent random selection that occurred while you were asleep. However, your available evidence is identical for what could be inside each envelope. You therefore have identical grounds to assign likelihood to what is true of both.
Suppose it’s revealed that both envelopes contain the same word. You are asked again the likelihood that the envelopes contain ‘original’. What rules do you follow? Would you apply the non-existence of Envelope A’s probability to Envelope B? Or would you extend the legitimacy of Envelope B’s probability to Envelope A?
I’m guessing you would continue to distinguish the two, stating that 1⁄2 was still a valid probability for Envelope B containing ‘original’ but that no such likelihood existed for Envelope A, even knowing that whatever is true of Envelope B is true of Envelope A. If so, then it appears to be a semantic difference. Indeed, from a first-person perspective, it seems like a difference that makes no difference. :)
Here is the conclusion based on my position: the probability of Original for Envelope A does not exist; the probability of Original for Envelope B is 1⁄2; the probability of Original for Envelope B given that the contents are the same does not exist. Just as in the previous thought experiments, it is invalid to update based on that information.
Remember my position says the first-person perspective is a primitive axiomatic fact? E.g. “I naturally know I am this particular person. But there is no reason or explanation for it. I just am.” This means arguments that need to explain the first-person perspective, such as treating it as a random sample, are invalid.
And the difference between Envelopes A and B is that probability regarding B does not need to explain the first-person perspective; it can just use “I am this person” as given: my envelope’s content is decided by a coin toss. Probability regarding A, by contrast, needs to explain the first-person perspective (e.g. by treating it as a random sample).
Again, this is easier to see with a frequentist approach.
If you repeat the same clone experiment many times and keep recording whether you are the Original or the Clone in each iteration, then even as time goes on there is no reason for the relative fraction of “I am the Original” to converge to any particular value. Of course, we can count everyone from these experiments, and the combined relative fraction would be 1⁄2. But without additional assumptions such as “I am a random sample from all copies”, that is not the same as the long-run frequency for the first person.
In contrast, I can repeat the cloning experiment many times, and the long-run frequency for Envelope B will converge to 1⁄2 for me, as it is the result of a fair coin toss. There is no need to explain why “I am this person” here. So from a first-person perspective, the probability for B describes my experience and is verifiable, while the probability for A is not, unless every copy is considered together, which is not about the first person anyway.
Even though you didn’t ask this (and I am not 100% set on it myself), I would say the probability that the envelope contents are the same is also 1⁄2. I am either the Original or the Clone, and there is no way to reason about which one I am. But luckily it doesn’t matter which is the case: the probability that Envelope B got it right is still an even toss. And the long-run frequency would validate that too.
But what Envelope B says, given it got it right, depends on which physical copy I am. There is no valid value for P(Envelope B says Original | the contents are the same). For example, say I repeat the experiment 1000 times, and it turns out I was the Clone in 300 of those experiments and the Original in 700. Due to the fair coin tosses, I would have seen about an equal number of Original vs Clone (500 each) in Envelope B, which corresponds to a probability of 1⁄2. And the contents would be the same about 150 times in the experiments where I was the Clone and 350 times in those where I was the Original (500 total, which corresponds to the probability of “same content” being 1⁄2). Yet among all the iterations where the contents are the same, in 30% of them “I am the Clone”.
But if I am the Original in all 1000 experiments, then in all the iterations where the contents are the same, I am still 100% the Original (the other two probabilities of 1⁄2 above still hold). I.e. the relative fraction of Original in Envelope B, given that the contents are the same, depends on which physical copy I am. And there is no way to reason about it unless some additional assumption is made (e.g. treating “I” as a random sample would make it 1⁄2.)
In other words, to assign a probability to Envelope B given that the contents are the same, we must first find a way to assign self-locating probabilities. And that cannot be done.
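The bookkeeping in the two scenarios above can be written out for any assumed split; a quick sketch (the function name is mine, for illustration only):

```python
from fractions import Fraction

def clone_given_same(n_clone, n_original):
    """Among iterations where Envelope B matches my actual status ('same'),
    what fraction of the time was I the Clone?  B holds 'Original' or
    'Clone' by a fair, independent coin, so it matches in about half the
    iterations of either kind."""
    same_clone = Fraction(n_clone, 2)        # Clone iterations with a match
    same_original = Fraction(n_original, 2)  # Original iterations with a match
    p_same = (same_clone + same_original) / (n_clone + n_original)
    p_clone_given_same = same_clone / (same_clone + same_original)
    return p_same, p_clone_given_same

# 300 Clone / 700 Original: "same" happens half the time, Clone-given-same is 3/10.
print(clone_given_same(300, 700))
# Always the Original: "same" is still half the time, but Clone-given-same is 0.
print(clone_given_same(0, 1000))
```

The frequency of “same” is 1⁄2 regardless of the split, but the conditional frequency of Clone-given-same just equals whatever the assumed split is, which is the quantity I say has no valid value.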
I’ll come back with a deeper debate in good time. Meanwhile I’ll point out one immediate anomaly.
I was genuinely unsure which position you’d take when you learnt the two envelopes were the same. I expected you to maintain that no probability can be assigned to Envelope A. I didn’t expect you to invalidate the probability for the contents of Envelope B. You argued this because any statement about the contents of Envelope B is now linked to your self-location, even though Envelope B’s selection was unmistakably random and your status remains unknown.
It becomes even stranger when you consider your position if told that the two envelopes were different. In that event, any statement about the contents of Envelope B refers just as much to self-location. If the two envelopes are different, a hypothesis that Envelope B contains ‘copy’ is the same as the hypothesis that you’re the original, and vice versa. Your reasoning would equally compel you to abandon probability for Envelope B.
Therein lies the contradiction. The two envelopes are known in advance to be either the same or different. Whichever turns out to be true will neither reveal nor suggest the contents of Envelope B. Before you discover whether the envelopes are the same or different, Envelope B definitely had a random 1⁄2 chance of containing ‘original’ or ‘copy’. Once you find out whether they’re the same or different, regardless of which turns out to be true, you’re saying that Envelope B can no longer be assigned a probability.
Something is wrong… :)
Haha, that’s OK. I admit I am not the best communicator, nor is anthropics an easy topic to explain. So I understand the frustration.
But I really think my position is not complicated at all. I genuinely believe that. It just says: take whatever the first-person perspective is as given, and don’t come up with any assumptions to try to explain it.
Also, I want to point out that I didn’t say P(B says Original) is invalid; I said P(B says Original|contents are the same) is invalid. Some of your sentences seem to suggest the other way around. Just want to clear that up.
And I’m not playing any tricks. Remember Peter/Bob? I said the probability of Heads is 1⁄2, but you cannot update on the information that you have seen Peter, the reason being that doing so involves using self-locating probability. It’s the same argument here. There was a valid P(Heads) but no valid P(Heads|Peter). There is a valid P(B says Original) but no valid P(B says Original|Same), for the exact same reason.
And you can’t update the probability given you saw Bob either. But just because you are going to see either Peter or Bob, that does not mean P(Heads) is invalidated; you just can’t update on Peter/Bob, that’s all. Similarly, just because the envelopes are either “same” or “different” doesn’t mean P(B says Original) is invalid. You just cannot update on either.
And the coin toss and Envelope B are both random/unknown processes. So I am not trying to trick you. It’s the same old argument.
And by suggesting you think of repeating the experiments and counting the long-run frequencies, I didn’t leave much to interpretation. If you imagine repeating the experiments from the first person and can get a long-run frequency, then the probability is valid. If there is no long-run frequency unless you come up with some way to explain the first-person perspective, then there is no valid probability. You can deduce what my position says quite easily like that. There aren’t any surprises.
Anyway, I would still say debating concrete examples with you is enjoyable. It pushes me to articulate my thoughts. Though I am hesitant to guess whether you would say the same :) I will wait for your rebuttal in good time.
Ok here’s some rebuttal. :) I don’t think it’s your communication that’s wrong. I believe it’s the actual concept. You once said that yours is a view that no-one else shares. This does not in itself make it wrong. I genuinely have an open mind to understand a new insight if I’m missing it. However I’ve examined this from many angles. I believe I understand what you’ve put forward.
In anthropic problems, issues of self-location and first-person perspective lie at the heart. A statement about a person’s self-location, such as “today is Monday” or “I am the original”, is indeed a first-person perspective. Such a statement, if found to be true, is a fact that could not have been otherwise. It was not a random event that had a chance of not happening. From this, you’ve extrapolated – wrongly, in my view – that the normal rules of credence and Bayesian updating, based on information you have or don’t yet have, are invalid when applied to self-location.
I’m reminded of the many-worlds quantum interpretation. If we exist in a multiverse, all outcomes take place in different realities and are objectively certain. In a multiverse, credences would be about deciding which world your first-person self is in, not whether events happened. The multiverse is the ultimate self-location model. It denies objective probability. What you have instead are observers with knowledge or uncertainty about their place in the multiverse.
Whether theories of the multiverse prove to be correct or not, there are many who endorse them. In such a model – where probability doesn’t exist – it is still considered both legitimate and necessary for observers to assign likelihood and credence about what is true for them, and to apply rules of updating based on the information available.
I have a realistic example that tests your position. Imagine you’re an adopted child. The only information you and your adoptive family were given is that your natural parents had three children and that all three were adopted by different families. What is the likelihood that you were the first-born natural child? For your adoptive parents, it’s straightforward. They assign a 1⁄3 probability that they adopted the oldest. According to you, as it’s a first-person question about self-location, no likelihood can be assigned.
It won’t surprise you to learn that here I find no grounds for you to disagree with your adoptive parents, much less to invalidate a credence. Everyone agrees that you are one of three children. Everyone shares the same uncertainty about whether you’re the oldest. Therefore the credence of 1⁄3 that this is the case must also be shared.
I could tweak this situation to allow perspective disagreement with your adopted family, making it closer to Sleeping Beauty—and introducing a coin flip. I may do that later.
While there isn’t anything wrong with your summarization of my position, I wouldn’t call it an extrapolation. Instead, I think it is the other camps, like SSA and SIA, that are doing the extrapolating. “I know I am this person but have no explanation or reason for it” seems right, and I stick to it in reasoning. In my opinion, it is SSA and SIA that try to use random sampling in an unrelated domain to give answers that do not exist, which leads to paradoxes.
I recognized my solution (PBR) as incompatible with MWI from the very beginning of forming it. I even explicitly wrote about it, right after the solution to anthropics on my website. The deeper reason is that PBR actually has a different account of what scientific objectivity means. Self-locating probability is only the most obvious manifestation of this difference. I wrote about it in a previous post.
Nonetheless, the source of probability is the most criticized point about MWI: “why does a bifurcating world with equal coefficients guarantee I would experience roughly the same frequencies? What forces the mapping between the ‘worlds’ and my first-person experience?” Even avid MWI supporters like Sean Carroll regard this as the most telling criticism of MWI, and especially hard to answer.
I would suggest not denying PBR just because one likes MWI, since nobody can be certain that MWI is the correct interpretation. Furthermore, there is no reason to be alarmed just because a solution to anthropics is connected with quantum interpretations. The two topics are connected no matter which solution one prefers. For example, there is a series of debates between Darren Bradley and Alastair Wilson about whether or not SIA would naively confirm MWI.
Regarding the disagreement between the adopted son and parents about being the firstborn, I agree there is no probability for “I am the firstborn”. There simply is no way to reason about it. The question is set up so that it is easy to substitute it with “what is the probability that the adopted child is the firstborn?”, and the answer would be 1⁄3. But then it is not a question defined by your first-person perspective. It is about a particular unknown process (the adoption process), which can be analyzed without additional assumptions explaining the perspective.
To the parents, not being the firstborn means they got a different child in the adoption process. But for you, what does “not being the firstborn” even mean? Does my soul get incarnated into one of the other two children? Do I become someone/something else? Or did I maybe never come into existence at all? None of these makes any sense unless you come up with some assumptions explaining the first-person perspective.
I’m certain this answer won’t convince you. It would be better to have a thought experiment with numbers. I will eagerly wait for your example. And maybe this time you can predict what my response would be by using the frequentist approach. :)
edit: Also, please don’t take it as criticizing you. The reason I think I am not doing a good job communicating is that I feel I have been giving the same explanation again and again, yet my answer to the thought experiments always seems to surprise you, when I thought you would have already guessed what my response would be. And sometimes (like the order of the questions in Peter/Bob), your understanding is simply not what I thought I wrote.
I have always thought repeating the experiment and counting long-run frequencies is the surest way to communicate. It takes all theoretical metaphysical aspects out of the question and just presents solid numbers. But that didn’t work very well here. Do you have any insights about how I should present my argument going forward? I need some advice.
Ok that’s fine. I agree MWI is not proven. My point was only that it is the absolute self-location model. Those endorsing it propose the non-existence of probability, but still apply the mathematics of likelihood based on an observer’s uncertainty. Forgive me for stumbling onto the implications of arguments you made elsewhere. I have read much of what you’ve written over time.
I especially agree that perspective disagreement can happen. That’s what makes me a Halfer. Self-location is at the heart of this, but I would say it is not because credences are denied. I would say disagreement arises when sample spaces are different and lead to conflicting first-person information that can’t be shared. I would also say that, whenever you don’t know which pre-existing time or identity applies to you, assigning subjective likelihood has as much meaning and legitimacy as it does for an unknown random event. I submit that it’s precisely because you do have credences based on uncertainty about self-location that perspective disagreement can happen.
You can also have a situation where a reality might be interpreted subjectively both as random and self-location. Consider a version of Sleeping Beauty where we combine uncertainty of being the original and clone with uncertainty about what day it is.
Beauty is put to sleep and informed of the following. A coin will be (or has already been) flipped. If the coin lands Heads, her original self will be woken on Monday and that is all. If the coin lands Tails, she will be cloned; if that happens, it will be randomly decided whether the original wakes on Monday and the clone on Tuesday, or the other way round.
Is it valid here for Beauty to assign probability to the proposition “Today is Monday” or “Today is Tuesday”? I’m guessing you will agree that, in this case, it is. If the coin landed Heads, being woken on Monday was certain and so was being the original. If it landed Tails, being woken on Monday or Tuesday was a random event separate from whichever version of herself she happens to be. Therefore she should assign 3⁄4 that today is Monday and 1⁄4 that today is Tuesday. We can also agree as halfers that the coin flip is 1⁄2 but, once she learns it’s Monday, she would update to 2⁄3 for Heads and 1⁄3 for Tails. However, if instead of being told it’s Monday, she’s told that she’s the original, then double halfing kicks in for you.
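To make the 3⁄4 and 2⁄3 figures above concrete, here is a minimal frequency sketch in Python (my own illustration, not part of the original exchange). It counts across repetitions from the perspective of one fixed version of Beauty, the original, which is my assumption about how to tally the runs:

```python
import random

# Repeat the experiment many times and track the ORIGINAL Beauty.
# Heads: she wakes Monday. Tails: a second random event assigns her
# to Monday or Tuesday (the clone gets the other day).
random.seed(0)
N = 100_000
monday = heads_and_monday = 0
for _ in range(N):
    heads = random.random() < 0.5            # the coin flip
    if heads:
        day = "Mon"                          # original woken Monday, that's all
    else:
        day = random.choice(["Mon", "Tue"])  # day the original wakes under Tails
    if day == "Mon":
        monday += 1
        if heads:
            heads_and_monday += 1

print(monday / N)                 # ~0.75: long-run frequency of "today is Monday"
print(heads_and_monday / monday)  # ~2/3:  frequency of Heads among Monday awakenings
```

The two printed frequencies converge to the 3⁄4 and 2⁄3 stated in the text.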
From your PBR position, the day she’s woken does have independent probability in this example, but her status as original or clone is a matter of self-location. Whereas I question whether there’s any significant difference between the two kinds of determination. Also, in this example, the day she’s woken can be regarded as both externally random and a case of self-location. Whichever order the original and clone wake up in if the coin landed Tails, there are still two versions of Beauty with identical memories of Sunday; one of these wakes on Monday, the other on Tuesday. If told that, in the event of Tails, the two awakenings were prearranged to correspond to her original and clone status instead of being randomly assigned, the number of awakenings is the same and nothing is altered for Beauty’s knowledge of what day it is. I submit that in both scenarios, the validity of a credence about the day should be the same.
But my explanation for perspective disagreement is based on the primitive nature of the first-person perspective, i.e. it cannot be explained and is therefore incommunicable. If we say there is A GOOD WAY to understand and explain it, and we must assign self-locating probabilities this way, then why don’t we explain our perspectives to each other as such, so we can have the exact same information and eliminate the disagreement?
If we say the question has different sample spaces for different people, shown by the fact that repeating the experiment from their respective perspectives gives different relative frequencies, then why say there is still a valid probability when there is no relative frequency from a perspective? That is not self-consistent.
To my knowledge, halfers have not provided a satisfactory explanation of perspective disagreement, even though Katja Grace and John Pittard pointed it out quite some time ago. And if halfers want to use my explanation for the disagreement while at the same time rejecting my premise of primitive perspectives, then they are just piling assumptions on top of assumptions to preserve self-locating probability. To me, that’s just because of our natural dislike of saying “I don’t know”, even when there is no way to think about it.
And what do halfers get by preserving self-locating probability? Nothing but paradoxes. Either we say there is a special rule of updating which keeps the double-halving instinct, or we update to 1⁄3. The former has been quite conclusively countered by Michael Titelbaum: as long as you assign a non-zero value to the probability of “today is Tuesday”, it will result in paradoxes. The latter has to deal with Adam Elga’s question from the paper which jump-started the whole debate: after learning it is Monday, I will put a coin into your hand. You will toss it. The result determines whether you will be woken again tomorrow with a memory wipe. Are you really comfortable saying “I believe this is a fair coin and the probability it will land Heads is 1/3”?
I personally think that if one wants to reject PBR and embrace self-locating probability, the better choice would be SIA, thus becoming a thirder. It still has serious problems, but not such glaring ones.
What you said about my position in the thought experiment is correct. And I still think the difference between self-locating and random/unknown processes is significant (which can be shown by the existence of relative frequencies). Regarding this part: “the two awakenings were prearranged to correspond to her original and clone status instead of being randomly assigned”, if that means it was decided by some rule that I do not know, then the probability is still valid. If I am told what the rule is, e.g. original=Monday and clone=Tuesday, then there is no longer a valid probability.
Hi Dadarren. I haven’t forgotten our discussion and wanted to offer further food for thought. It might be helpful to explore definitions. As I see it, there are three kinds of reality about which someone can have knowledge or ignorance.
Contingent – an event in the world that is true or false based on whether it did or did not happen.
Analytic – a mathematical statement or expression that is true or false a priori.
Self-location – an identity or experience in space-time that is true or false at a given moment for an observer.
I’d like to ask two questions that may be relevant.
a) When it comes to mathematical propositions, are credences valid? For example, if I ask whether the tenth digit of Pi is between 1-5 or 6-0, and you don’t know, is it valid for you to use a principle of indifference and assign a personal credence of 1/2?
b) Suppose you’re told that a clone of you will be created while you’re asleep. A coin will be flipped. If it lands Heads, the clone will be destroyed and the original version of you will be woken. If it lands Tails, the original will be destroyed and the clone will be woken. Finding yourself awake in this scenario, is it valid to assign a 1⁄2 probability that you’re either the original or the clone?
I would say that both these are valid and normal Bayesian conditioning applies. The answer to b) reflects both identity and a contingent event, the coin flip. For a), it would be easy to construct a probability puzzle with updatable credences about outcomes determined by mathematical propositions.
However I’m curious what your view is, before I dive further in.
For a, my opinion is that while objectively there is no probability for the value of a specific digit of Pi, we can rightly say there is an attached probability in a specific context.
For example, it is reasonable to ask why I am focusing on the tenth digit of Pi specifically. Maybe I just happen to have memorized up to the ninth digit, and I am thinking about the immediate next one. Or maybe I just arbitrarily chose the number 10. Either way, there is a process leading to the focus on that particular digit. If that process does not contain any information about what that value is, then a principle of indifference is warranted. From a frequentist approach, we can think of repeating such processes and checking the selected digit, which can give a long-run relative frequency.
For b, the probability is 1⁄2. It is valid because it is not a self-locating probability. Self-locating probability is about what is the first-person perspective in a given world, or the centered position of oneself in a possible world. But problem b is about different possible worlds. Another hint is the problem can be easily described without employing the subject’s first-person perspective: what is the probability that the awakened person is the clone? Compare that to the self-locating probabilities we have been discussing: there are multiple agents and the one in question is specified by the first-person “I”, which must be comprehended from the agent’s perspective.
From a frequentist approach, that experiment can be repeated many times and the long-run frequency would be 1⁄2. Both from the first-person perspective of the experiment subject, and from the perspective of any non-participating observers.
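As a sketch of that frequentist check, here is a short Python simulation of my own (an illustration, not part of the original reply). Each run, a fair coin decides which copy is destroyed and which one wakes; the long-run fraction of runs in which the awakened person is the clone approaches 1⁄2:

```python
import random

# Question (b): Heads -> clone destroyed, original wakes;
# Tails -> original destroyed, clone wakes.
random.seed(1)
N = 100_000
clone_wakes = 0
for _ in range(N):
    tails = random.random() < 0.5
    awakened = "clone" if tails else "original"
    if awakened == "clone":
        clone_wakes += 1

print(clone_wakes / N)  # ~0.5: P(the awakened person is the clone)
```

The same 1⁄2 frequency is obtained whether the runs are counted by the subject or by an outside observer, matching the point above.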
Well, in that case, it narrows down what we agree about. Mathematical propositions aren’t events that happen. However, someone who doesn’t know a specific digit of Pi would assign likelihood to its value with the same rules of probability as they would to an event they don’t know about. I define credence merely as someone’s rational estimate of what’s likely to be true, based on knowledge or ignorance. Credence has no reason to discriminate between the three types of reality I talked about, much less get invalidated.
I would also highlight that almost all external outcomes in the macro world, whether known or unknown, are already determined, as opposed to being truly random. In that sense, an unknown coin flip outcome is just as certain as an unknown mathematical proposition. In the case of Sleeping Beauty being told it’s Monday and that a coin will be flipped tonight, she is arguably already in a Heads world or a Tails world. It’s just that no-one knows which way the coin will land. If so, Lewis’s version of halfing is not as outlandish as it appeared. Beauty’s 2⁄3 update on Monday that the coin will land Heads is not actually an update on a future random event. It is an update on a reality that already exists but is unknown. From Beauty’s perspective, if she’s in a Heads world, she is certain it is Monday. If the world is Tails, she doesn’t have that certainty. Therefore an increased likelihood of Heads, once she learns it’s Monday, is reasonable – assuming that self-locations allow credences. I submit that, since she was previously not certain that the day she found herself awake on was Monday, a non-zero credence that this day was Tuesday legitimately existed before being eliminated.
Below is a version of Sleeping Beauty that mixes the three types of reality I described – contingent, analytic and self-location.
On Sunday night, Beauty is put to sleep and cloned. A coin is flipped. If it lands Heads, the original is woken on Monday and questioned, while the clone stays asleep. If it lands Tails, the clone is woken on Monday and questioned, while the original stays asleep.
The rest of the protocol concerns the Beauty that was not woken, and is determined by the tenth digit of Pi. If it’s between 1-5, the other Beauty is never woken and destroyed. If it’s between 6-0, the other Beauty is woken on Tuesday and questioned.
For any Beauty finding herself awake, do any of the following questions have valid answers?
What is the likelihood that the tenth digit of Pi is 1-5 or 6-0?
What is the likelihood that the coin landed Heads or Tails?
What is the likelihood that today is Monday or Tuesday?
What is the likelihood that she is the original or the clone?
You’ll be unsurprised that I think all these credences are valid. In this example, any credence about her identity or the day happens to be tied in with credence about the other realities.
You’ll also notice how tempting it is for Beauty to apply thirder reasoning to the tenth digit of Pi – i.e. it’s 1⁄3 that it’s 1-5 and 2⁄3 that it’s 6-0. The plausible thirder argument is that, whatever identity the waking Beauty turns out to have, either original or clone, it was only 50⁄50 this version would be woken if the Pi digit was 1-5, whereas it was certain this version would be woken if the Pi digit was 6-0. However, I would say that, uniquely from her perspective, her identity is not relevant to her continued consciousness or to the likelihood of the tenth digit of Pi. At least one awakening with memories of Sunday was guaranteed. Her status as original or clone doesn’t change her prior certainty of this. All that matters is that one iteration of her consciousness woke up if the digit was 1-5, while two iterations of her consciousness woke up if the digit was 6-0. In either case there is a guaranteed awakening with continuity from Sunday, and no information to indicate whether there are one or two awakenings. This is why I would remain a halfer.
Each question can be asked conditionally, with Beauty being told the answer to one or more of the others. In particular, if she’s told it’s Monday, the likelihood of Pi’s tenth digit being 1-5 must surely increase, whether she’s a thirder or halfer. Her reasoning is that if the tenth digit of Pi was 1-5, whatever body she has was certain to wake up on Monday, regardless of the coin. Whereas if the tenth digit of Pi was 6-0, the body she has could have woken on either day, determined by the coin. It would be hard to argue otherwise, since the day she wakes up is a contingent event, not just reflecting a self-location.
My conclusion is that rules of credence and assignments of probability are applicable in all cases where there is uncertainty about what’s true from a first person perspective, regardless of the nature of the reality. This includes self-locations. Self-locations can give rise to different sampling and perspective disagreement between interacting parties, in situations where one party might have more self-locating experiences than the other.
There are quite a few points here I disagree with. Allow me to explain.
As I said in the previous reply, a mathematical statement by itself doesn’t have a probability of being right or wrong. It is the process under which someone makes or evaluates said statement that can have a probability attached to it. Maybe the experimenter picked a random number from 1 to 10000 and then checked that digit of Pi to determine whether to destroy or wake the copy in question. And he picked ten in this case. That process/circumstance enables us to assign a probability to it. Whereas in self-locating probabilities there is no process explaining where the “I” comes from.
Also, just because macrophysical objects do not exhibit quantum phenomena such as randomness does not mean the macro world is deterministic. So I would not say that a Lewisian halfer giving a fair coin yet to be tossed a probability of Heads of 2⁄3 is metaphysically problem-free. Furthermore, even if you bite the bullet here, there are problems of probability pumping and retrocausation. I will attach the thought experiment later.
Before going further, I wish to give PBR’s answers to the thought experiment you raised. The probability that I am the clone or the original is invalid, as it is a self-locating probability. The probability that today is Monday is 2⁄3. It is valid because both the clone and the original have the same value; there is no need to explain “which person I am”. The probability that the chosen digit of Pi falls in 6-0 is 2⁄3. It is valid for the same reason. And the probability that the coin landed Heads given “I” am awake is invalid, i.e. we cannot update based on the information that “I” am awake, as the value depends on whether “I” am the clone or the original.
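These answers can be checked with long-run frequencies. Below is a Python sketch of my own that repeats the experiment from the viewpoint of one fixed copy, modelling the digit selection as a fair pick between the 1-5 and 6-0 classes (following the earlier point that it is the selection process, not the digit itself, that carries the probability):

```python
import random

random.seed(2)
N = 200_000

def run(copy):
    """One repetition, seen from a fixed copy ('original' or 'clone').

    Returns (awake, day, heads, low_digit)."""
    heads = random.random() < 0.5   # the coin flip
    low = random.random() < 0.5     # chosen digit in 1-5? (modelled as fair)
    monday_waker = "original" if heads else "clone"
    if copy == monday_waker:
        return True, "Mon", heads, low
    if low:                         # 1-5: the sleeping copy is destroyed
        return False, None, heads, low
    return True, "Tue", heads, low  # 6-0: the sleeping copy wakes Tuesday

results = {}
for copy in ("original", "clone"):
    awake = [s for s in (run(copy) for _ in range(N)) if s[0]]
    p_mon = sum(s[1] == "Mon" for s in awake) / len(awake)
    p_high = sum(not s[3] for s in awake) / len(awake)
    p_heads = sum(s[2] for s in awake) / len(awake)
    results[copy] = (p_mon, p_high, p_heads)
    print(copy, round(p_mon, 3), round(p_high, 3), round(p_heads, 3))
```

Both copies converge to roughly 2⁄3 for Monday and 2⁄3 for the 6-0 class, so those frequencies do not depend on identity; but Heads-given-awake comes out roughly 2⁄3 for the original and 1⁄3 for the clone, matching the claim that the latter depends on which copy “I” am.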
I don’t think thirders would have any trouble giving a probability of 2⁄3 to the tenth digit of Pi. All they have to do is treat “I” as a random sample between the original and the clone and then conduct the analysis from a god’s-eye perspective: randomly select a subject among the two, then find out she is awakened in the experiment, meaning the chosen digit is twice as likely to be 6-0 (2⁄3). If the chosen subject does not wake up in the experiment, then the chosen digit must be 1-5 (100%). They can also have a frequentist model as such, no problem.
The thirders think reasoning objectively from a god’s eye view is the only correct way. I think reasoning from any perspective, like that of an experiment subject’s first-person perspective, is just as valid. And from the subject’s first-person perspective the original sleeping beauty problem has a probability of 1⁄2, verifiable with a frequentist model. I thought you agreed with this.
However, if you endorse the traditional Lewisian halfer’s reasoning or SSA, which seems clear to me given your latest thought experiment with cloning and waking, then you can’t agree with me on that, and furthermore cannot use the subject’s first-person perspective, or the subsequent perspective disagreement, as a supporting argument for halving.
Consider this experiment. You will be cloned tonight in your sleep with memories preserved. Then a coin will be tossed: if Heads, a randomly selected copy will be woken up in the morning; the other one will sleep through the experiment. If Tails, both will be woken up. So after waking up the next day, what is the probability of Heads?
Thirders would consider this problem logically identical to the original Sleeping Beauty. I would guess that as a Lewisian Halfer you would too, still giving the probability as 1⁄2. But for PBR, this problem is different. The probability of Heads would rightfully be 1⁄3 in this case, as “I am awake in this experiment” is no longer guaranteed; it gives legitimate new information. And if I take part in such experiments repeatedly and count on my personal experience, the relative fraction of Heads in experiments where I got woken up would approach 1⁄3. So here you have to make a choice. If you stick with Lewisian Halfer’s reasoning, then you have to give up on thinking from a subject’s first-person perspective, and therefore cannot use it as a supporting argument for halving, nor use it as an explanation for the peculiar perspective disagreement. If you still think thinking from a subject’s first-person perspective is valid, then you have to give up on Lewisian reasoning, and cannot update the probability of Heads to 2⁄3 after learning you are the chosen one who would be woken up anyway. To me that is a really easy choice.
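The 1⁄3 relative fraction claimed above can be reproduced with a small Python simulation (my own sketch): track one fixed copy across repetitions and count Heads only among the runs in which that copy is woken:

```python
import random

# Cloning experiment: Heads -> one randomly selected copy wakes;
# Tails -> both copies wake. "I" am one fixed copy throughout.
random.seed(3)
N = 100_000
awake_runs = heads_runs = 0
for _ in range(N):
    heads = random.random() < 0.5
    if heads:
        i_wake = random.random() < 0.5  # Heads: a coin-like pick of which copy wakes
    else:
        i_wake = True                   # Tails: both copies are woken
    if i_wake:
        awake_runs += 1
        if heads:
            heads_runs += 1

print(heads_runs / awake_runs)  # ~1/3: Heads fraction among my awakenings
```

Because "I am awake" only holds in 3⁄4 of the runs, conditioning on it shifts the Heads fraction from 1⁄2 to 1⁄3, exactly as the text says.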
If you think predicting Heads with a probability of 2⁄3 is correct, there are other issues. For example, you can get a bunch of memory-erasing drugs, and then you will have supernatural predicting power. Before seeing the toss result, you can form the following intention: if it comes up Tails I will take the drug N times, to have N near-identical awakenings and revealings. Then by Lewisian Halfer reasoning (or SSA), if Heads then I would have been experiencing this revealing for sure; if Tails, the probability of me currently experiencing this particular awakening would be 1/N. Therefore the probability of Heads is 1/(N+1). And you can squeeze that number as small or as large as you want. All you have to do is take the appropriate number of drugs after a certain result is revealed. By doing so you can retroactively control the coin toss result.
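For what it’s worth, one way to reproduce the 1/(N+1) figure is to weight each coin result by the number of awakenings it produces, as in this Python sketch of mine (this is the awakening-weighted bookkeeping being criticized, not an endorsement of it):

```python
import random

# Among all awakening-experiences, what fraction sit in a Heads history,
# if Tails is followed by n_drugs near-identical repeat awakenings?
random.seed(4)
fractions = {}
for n_drugs in (1, 4, 9):
    heads_awakenings = total_awakenings = 0
    for _ in range(100_000):
        heads = random.random() < 0.5
        k = 1 if heads else n_drugs   # awakenings this run produces
        total_awakenings += k
        if heads:
            heads_awakenings += k
    fractions[n_drugs] = heads_awakenings / total_awakenings
    print(n_drugs, fractions[n_drugs])  # ~1/(n_drugs+1)
```

Choosing n_drugs after the result is revealed is what makes the scheme a probability pump: the Heads fraction at any single revealing can be driven arbitrarily close to 0 simply by piling on awakenings.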
There’s a general consensus that, although quantum theory has changed our understanding of reality, Newtonian physics remains a reliable short-term guide to the macro world. In principle, the vast majority of macro events that are just about to happen are thought to be 99.9999% inevitable, as opposed to 100% as Newton thought. From that I deduce that if a coin is shortly to be flipped, the outcome is unknown but is, for all practical purposes, already determined. Whereas if a coin is flipped farther into the future from a point of prediction, the outcome is proportionately more likely to be undetermined.
I’m willing to concede debate about this. What I do recognise is that Beauty’s answer of 2⁄3 Heads, after she learns it’s Monday, depends on it being an already certain but unknown outcome. Whereas if the equivalent of a quantum coin were to be flipped on Monday night, this makes a difference. In that case, awaking on Monday morning, Beauty would not yet be in a Heads world or a Tails world. Her answer would certainly be 1⁄2, after she learns it’s Monday. What it would be before she learns it’s Monday would depend on what quantum theory model is used. I can consider this another time.
Perspective disagreement between interacting parties, as a result of someone having more than one possible self-locating identity, is something I can certainly see a reason for. Invalidating someone’s likelihood of what that identity might be, I can’t find a reason for. I’ve looked hard.
I’d like to explore your simplified experiment. First it’s important to distinguish precisely what happens with Heads to the version of me that is not woken during the experiment. If the other me is woken after the experiment and told this fact, then there’s no controversy. On finding myself awake in the experiment, my answer is definitely 1⁄3 for Heads and 2⁄3 for Tails. Furthermore, it should make no difference which version might have woken inside the experiment and which outside, assuming the coin landed Heads. Nor does it matter if that potential selection was made before the flip and I’m subsequently told what the choice was. I’d argue that this information about my possible identity is irrelevant to my credence for the coin.
This takes us to a controversy at the heart of the anthropic debate. In the event of Heads, if the version of me that is not woken in the experiment never wakes up at all, it becomes like standard Sleeping Beauty and the answer is 1⁄2 for Heads or Tails. This is because all awakenings will now be inside the experiment and at least one awakening is guaranteed. Regardless of identity, my mind was certain to continue, so long as at least one version woke up. Whether it’s the original or the clone, both share the same memories and there is no qualitative difference for the guaranteed continuity of my consciousness. All that matters is that there is no possible experience outside the experiment.
Even if it is an uncertain event as to which body woke up, that uncertainty doesn’t apply to my mind. This was guaranteed to carry on in whichever body it found itself. For the unconscious body that never wakes up, no mind is present. If that body was the original, its former mind now continues in the clone body, complete with memories. There is no qualitative difference whether I continue in my original body or as the clone. In terms of actual consciousness, my primitive self has no greater or lesser claim to identical memories of my past because of the body I have. For some, this will be controversial.
It’s also irrelevant whether the potential sole awakening of original or clone was decided before the flip, or whether I’m told what the choice was. Would you actually claim it’s 1⁄3 for Heads provided that, in the event of that outcome, you don’t know whether you woke as the original or the clone? But that if you learn what the potential Heads selection was – regardless of whether this turns out to be original or clone – Heads goes up to 1⁄2? We’ve touched on this before. It wouldn’t be a perspective disagreement with a third party. It would be a perspective disagreement with yourself.
If I am understanding correctly, you are saying that if the Sleeping Beauty problem does not use a coin toss but measures the spin of an electron instead, then the answer would be different. For the coin’s case, you will give the probability of Heads (yet to be tossed) as 2⁄3 after learning it is Monday. But for the spin’s case, or a quantum coin, the probability must be 1⁄2 after learning it is Monday, as it is a quantum event yet to happen.
That seems very ad-hoc to me. And I think differentiating “true quantum randomness” from something “99.99999% inevitable” in probability theory is a huge can of worms. But anyway, my question is: if the Sleeping Beauty problem uses a quantum coin, what is the probability of Heads when you wake up, before being told what day it is? And what is your probability after learning “it is Monday now”?
You said the answer depends on the quantum model used. I find that difficult to understand. Quantum models give different interpretations to make sense of the observed probability; the probability part is just experimental observation, not changed by which interpretation one prefers. But anyway, I am interested in your answer: how can it both keep giving 1⁄2 to a quantum coin yet to be tossed and obey Bayesian probability when learning it is Monday?
As for the clone-and-waking experiment, you said the answer depends on what happens after the experiment – whether or not there will be further awakenings: if there are, thirding; if not, halving. Again, very ad-hoc. If the awakening depends on a second coin tossed after the experiment ends, what then? How can an independent event in the future retroactively affect the probability of the first coin toss? What if both coins are quantum? How can you keep your answer Bayesian?
Just to be clear, my answer to the cloning-and-waking experiment is P(H)=1/3 upon waking up. The probability that I am the randomly chosen one, who would wake up regardless of the coin toss, is 2⁄3. The probability of Heads after learning I am the chosen one is 1⁄2. The answer does not depend on what happens after the experiment. And in all this reasoning, I do not know, and do not need to think about, whether I am the original or the clone.
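For what it’s worth, all three of these numbers fall out of straightforwardly counting awakenings. A sketch under my reconstruction of the setup (one of the two copies is pre-chosen to wake regardless of the coin; under Heads only the chosen copy wakes, under Tails both do):

```python
import random

def simulate(trials=200_000, seed=1):
    """Count awakenings in the cloning-and-waking setup (my reconstruction):
    one copy is pre-chosen to wake regardless of the coin;
    Heads -> only the chosen copy wakes; Tails -> both copies wake."""
    rng = random.Random(seed)
    awakenings = []  # (is_heads, is_chosen_copy) recorded per awakening
    for _ in range(trials):
        heads = rng.random() < 0.5
        chosen = rng.choice((0, 1))
        for copy in (0, 1):
            if copy == chosen or not heads:
                awakenings.append((heads, copy == chosen))
    n = len(awakenings)
    p_heads = sum(h for h, _ in awakenings) / n
    p_chosen = sum(c for _, c in awakenings) / n
    chosen_only = [h for h, c in awakenings if c]
    p_heads_given_chosen = sum(chosen_only) / len(chosen_only)
    return p_heads, p_chosen, p_heads_given_chosen

p_heads, p_chosen, p_heads_given_chosen = simulate()
print(p_heads, p_chosen, p_heads_given_chosen)  # ~1/3, ~2/3, ~1/2
```

Heads trials produce one awakening (always the chosen copy) and Tails trials produce two (one chosen, one not), which reproduces P(H)≈1⁄3, P(chosen)≈2⁄3, and P(H|chosen)≈1⁄2 without ever tracking who is the original and who is the clone.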
At this point, I find I am focusing more on arguing against SSA rather than explaining PBR. And the discussion is steering away from concrete thought experiments with numbers to metaphysical arguments.
I haven’t followed your arguments all the way here but I saw the comment
and would just jump in and say that others have made similar arguments. The one written example I’ve seen is this Master’s Thesis.
I’m not sure if I’m convinced, but at least I buy that, depending on how the particular selection is carried out, there can be instances where the difference between probabilities as subjective credences and as densities of Everett branches can have decision-theoretic implications.
Edit: I’ve fixed the link
The link points back to this post. But I also remember reading similar arguments from halfers before, that the answer changes depending on whether it is true quantum randomness; I could not remember the source though.
But the problem remains the same: can Halfers keep the probability of a coin yet to be tossed at 1⁄2 and remain Bayesian? Michael Titelbaum showed this cannot be done as long as the probability of “Today is Tuesday” is valid and non-zero. Suppose the Lewisian Halfer argues that, unlike true quantum randomness, a coin yet to be tossed can have a probability differing from half, so that they can endorse self-locating probability and remain Bayesian. Then the question can simply be changed to use quantum measurements (or a quantum coin, for ease of expression), and Lewisian Halfers face the counter-argument again: either the probability is 1⁄2 at waking up and remains 1⁄2 after learning it is Monday, which is non-Bayesian; or the probability is indeed 1⁄3 and updates to 1⁄2 after learning it is Monday, which is non-halving. The latter effectively says SSA is correct only for non-quantum events and SIA only for quantum events. But differentiating between quantum and non-quantum events is no easy job. A detailed analysis of a simple coin toss can reveal many independent physical causes, which may very well depend on quantum randomness. What shall we do in those cases? It is a very assumption-heavy defence of an initially simple Halfer answer.
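The two horns of that dilemma are just two different priors over awakenings fed through the same Bayesian update on “today is Monday”. A few lines of arithmetic (Python used purely as a calculator) make the 1⁄2-versus-2⁄3 split explicit:

```python
# Credences over the three possible awakenings, before learning the day.

# Thirder: each awakening equally likely.
thirder = {('H', 'Mon'): 1/3, ('T', 'Mon'): 1/3, ('T', 'Tue'): 1/3}

# Lewisian halfer: 1/2 on Heads, Tails' half split across its two days.
halfer = {('H', 'Mon'): 1/2, ('T', 'Mon'): 1/4, ('T', 'Tue'): 1/4}

def p_heads_given_monday(credence):
    """Condition on 'today is Monday' by ordinary Bayesian updating."""
    p_monday = credence[('H', 'Mon')] + credence[('T', 'Mon')]
    return credence[('H', 'Mon')] / p_monday

print(p_heads_given_monday(thirder))  # 0.5
print(p_heads_given_monday(halfer))   # 0.6666..., i.e. 2/3
```

So the thirder lands on 1⁄2 for the Monday toss, while the halfer who updates honestly is pushed to 2⁄3 for a coin yet to be tossed; keeping 1⁄2 instead means refusing the update.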
Edit: Just gave the linked thesis a quick read. The writer seems partial to MWI and thinks it gives a more logical explanation of anthropic questions. He is not keen on treating probability/chance as one randomly possible world becoming actualized; he considers that all possible worlds ARE real (many-worlds), and that the source of probability (or “the illusion of probability”, as the writer says) is which branch-world “I” am in. My problem with that is that the “I” in such statements is taken as intrinsically understood, i.e. it has no explanation. It gives no justification for what the probability of “I am in a Heads world” is. For it to give a probability, an additional assumption about “among all the physically similar agents across the many-branched worlds, which one is I” is needed. And that circles back to anthropics. At the end of the day, it is still using anthropic assumptions to answer anthropic problems, just like SIA or SSA.
I have argued against MWI in anthropics in another post, if you are interested.