Now if Alice and Bob do meet, then Bob believes Alice’s coin came up heads with probability 2⁄3. If Alice is a thirder, she agrees. But if Alice is a halfer, they have an unresolvable disagreement.
That’s actually not the case. Halfer Alice agrees with Bob that her coin came up Heads with 2⁄3 probability when they meet. Meeting Bob is new evidence that she couldn’t be certain to expect. She knows that when the coin is Heads this event has 100% probability, while on Tails it’s only 50%. So she updates as normal. And indeed, if we repeat the experiment multiple times and write down the state of the coin every time Alice meets Bob, about 2⁄3 of the records will be Heads.
Thirder Alice, on the contrary, doesn’t update at all upon meeting Bob. She has already “updated on awakening” to 2⁄3 for Heads and ignores the new evidence. This serves her well when she meets Bob, in the same sense that a broken clock is right twice a day. But in the general case about 1⁄2 of coin tosses are Heads.
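Here’s a quick numerical check of that frequency claim. A minimal sketch (the function name `meeting_heads_rate` is mine; it assumes the setup as stated in this thread: Alice wakes on both days on Heads and only on Monday on Tails, while Bob wakes on one random day):

```python
import random

def meeting_heads_rate(trials=100_000, seed=0):
    """Fraction of Alice-Bob meetings that happen when Alice's coin is Heads."""
    rng = random.Random(seed)
    meetings = heads_meetings = 0
    for _ in range(trials):
        alice_heads = rng.random() < 0.5
        alice_days = {1, 2} if alice_heads else {1}  # Heads: awake Mon & Tue; Tails: Mon only
        bob_day = rng.choice([1, 2])                 # Bob is awake on one random day
        if bob_day in alice_days:                    # they meet
            meetings += 1
            heads_meetings += alice_heads
    return heads_meetings / meetings

print(meeting_heads_rate())  # ≈ 2/3
```

Heads runs always produce a meeting, Tails runs only half the time, so about two thirds of all meetings happen on Heads.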
Here’s another thought experiment I came up with some time ago.
Suppose Alice is a thirder and doesn’t write herself a note. Now she has two awakenings on both Heads and Tails, which she can’t distinguish between, and thus agrees with halfer Alice. But why stop here? Suppose that on Tails she sets up not one extra awakening but two. Now thirder Alice will have 3⁄5 credence in favour of Tails. This way a thirder Alice can make herself arbitrarily confident in the result of the coin toss just by a precommitment! Such shenanigans, however, will not work on a halfer.
She knows that when the coin is Heads this event has 100% probability, while on Tails it’s only 50%.
I might be missing something in your argument, but I think in my setup as stated, it should be 50% in both cases. When Alice’s coin is heads, she wakes up on both days, but Bob wakes up on only one of them, depending on his own coin. So no matter if Alice is a halfer or a thirder, meeting Bob doesn’t give her any new information about her coin. While Bob, in case of meeting Alice, does update to 2⁄3 about Alice’s coin. So if the Alice he’s meeting is a halfer, they have an unresolvable disagreement about her coin.
This way a thirder Alice can make herself arbitrarily confident in the result of the coin toss just by a precommitment!
Yeah, also known as “algorithm for winning the lottery”: precommit to make many copies of yourself if you win. I guess we thirders have learned to live with it.
Think about it in terms of what happens in the experiment as a whole—that’s what halfing is about. If the coin is Heads Alice always meets Bob, either on Monday or on Tuesday, depending on when Bob is awake. If the coin is Tails Alice can meet Bob only on Monday, which happens only 50% of the time. Run the experiment multiple times and there will be iterations where Alice and Bob do not meet, and all of those happen when the coin is Tails.
That’s how halfer Alice thinks:
I’m awake. But this doesn’t give me any new information, because I was expecting to be awake in this experiment with 100% probability, regardless of the outcome of the coin toss:

P(Heads|Awake) = P(Awake|Heads)P(Heads)/P(Awake) = 1 ∗ 1⁄2 ∗ 1 = 1⁄2

Oh, hi Bob! Now this doesn’t happen in every experiment. This is new evidence in favour of Heads, so I update:

P(Heads|MeetsBob) = P(MeetsBob|Heads)P(Heads)/P(MeetsBob) = 1 ∗ 1⁄2 ∗ 4⁄3 = 2⁄3
This is the same update that Bob makes when he meets Alice. On the other hand, thirder Alice believes that the probability of Heads is 2⁄3 even before she meets Bob, which makes her overconfident in the wrong answer when the meeting doesn’t happen and the coin is Tails.
Likewise, you can remove Bob from the problem altogether and just make it so Alice has only a 50% chance to be awake on Monday, and if that didn’t happen and the coin is Heads, she will be awake on Tuesday. Here awakening is new evidence, because it doesn’t happen in every experiment.
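This Bob-less variant is easy to simulate too. A sketch under the setup just described (`awake_heads_rate` is an illustrative name):

```python
import random

def awake_heads_rate(trials=100_000, seed=1):
    """Fraction of Heads among experiments where Alice wakes up at all."""
    rng = random.Random(seed)
    awake = heads_awake = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        monday = rng.random() < 0.5   # 50% chance to be awake on Monday
        is_awake = monday or heads    # on Heads, a missed Monday is replaced by Tuesday
        if is_awake:
            awake += 1
            heads_awake += heads
    return heads_awake / awake

print(awake_heads_rate())  # ≈ 2/3
```

Alice is always awake on Heads but only half the time on Tails, which mirrors the structure of the meeting-Bob update above.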
It seems to me that the correct Bayesian updating is a bit different.
Let’s denote Alice and Bob’s coins as A and B, each taking values H or T, and denote the current day as D, taking values 1 or 2. Then, just after waking up but before learning whether Bob is awake, Halfer Alice has this prior: P(A=H∧D=1) = 1⁄4, P(A=H∧D=2) = 1⁄4, P(A=T∧D=1) = 1⁄2, and independently P(B=H) = P(B=T) = 1⁄2.
After that, meeting Bob gives her new information N = (A=H∧D=1∧B=H) ∨ (A=H∧D=2∧B=T) ∨ (A=T∧D=1∧B=H). These are three mutually exclusive clauses, and we can compute each of them according to Alice’s prior above: P(A=H∧D=1∧B=H) = 1⁄4 * 1⁄2 = 1⁄8, P(A=H∧D=2∧B=T) = 1⁄4 * 1⁄2 = 1⁄8, P(A=T∧D=1∧B=H) = 1⁄2 * 1⁄2 = 1⁄4. The probability mass of N is split equally between A=H and A=T, so observing N shouldn’t make Halfer Alice update about her coin.
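For what it’s worth, the arithmetic above checks out with exact fractions. A sketch of the same computation:

```python
from fractions import Fraction as F

# Halfer Alice's prior over (her coin, current day), plus Bob's independent coin
prior = {('H', 1): F(1, 4), ('H', 2): F(1, 4), ('T', 1): F(1, 2)}
p_bob = F(1, 2)

# the three mutually exclusive clauses of N
n = {
    ('H', 1, 'H'): prior[('H', 1)] * p_bob,
    ('H', 2, 'T'): prior[('H', 2)] * p_bob,
    ('T', 1, 'H'): prior[('T', 1)] * p_bob,
}
mass_heads = n[('H', 1, 'H')] + n[('H', 2, 'T')]
mass_tails = n[('T', 1, 'H')]
print(mass_heads, mass_tails)  # 1/4 1/4 -- equal, so no update about A
```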
You are describing the Lewisian Halfer model. It indeed produces incorrect results for Sleeping Beauty. I’m currently working on a post that dives deeper into it, but for now, suffice it to say that according to it P(Heads|Monday) = 2⁄3, which is clearly wrong. It also fails at betting.
The correct halfer position for Sleeping Beauty, also known as double halfing, claims that both P(Heads|Awake) = 1⁄2 and P(Heads|Monday) = 1⁄2. Once again, I’m planning a deep dive into how this is possible and why it’s the right answer in a separate post. For now, just note that it’s about probabilities averaged per iteration of the experiment, not per awakening, and that my comment is talking about how a Beauty who is this kind of halfer is supposed to reason.
suffice it to say that according to it P(Heads|Monday) = 2⁄3, which is clearly wrong
That sentence confuses me. The formulas from my comment imply that P(A=H|D=1) = 1⁄3 and P(A=T|D=1) = 2⁄3, which looks like an ok halfer position (modulo the fact that I accidentally swapped heads and tails in the very first comment and now am still sticking to that for coherence—sorry!)
About the double halfer position, not sure I understand it. What probabilities does it give for these three conjunctions: P(A=H∧D=1), P(A=H∧D=2), P(A=T∧D=1)? It seems to me (maybe naively) that these three numbers should be enough, any conditionals can be calculated from them by Bayes’ theorem.
Think about it. In Sleeping Beauty, the Monday awakening always happens, so the coin toss can be made after this awakening. If 2⁄3 had been a correct estimate, the Beauty would have been able to predict a future coin toss better than chance, which would’ve been… quite peculiar, to say the least. Of course, one can also just run a simulation of the experiment and check.
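A simulation of that check might look like this (a sketch; `heads_rate_on_monday` is an illustrative name, and the key point is that every run contains exactly one Monday awakening):

```python
import random

def heads_rate_on_monday(trials=100_000, seed=2):
    """Fraction of Monday awakenings that occur with Heads."""
    rng = random.Random(seed)
    mondays = heads_on_monday = 0
    for _ in range(trials):
        heads = rng.random() < 0.5  # the toss can even happen after Monday's awakening
        mondays += 1                # a Monday awakening happens in every run
        heads_on_monday += heads
        # the extra awakening on one side of the coin adds no Monday awakenings
    return heads_on_monday / mondays

print(heads_rate_on_monday())  # ≈ 1/2, not 2/3
```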
Lewisian Halfers, however, are right for the Fissure mind experiment, where a person is either left in Room1 or split into two people, one of whom goes to Room1 and the other to Room2, at random. In such an experiment you can’t be certain to be in Room1, so being in Room1 is indeed more likely if there was no splitting. And if a visitor comes to a random room and you meet there, you will indeed have a disagreement about probabilities.
About the double halfer position, not sure I understand it. What probabilities does it give for these three conjunctions: P(A=H∧D=1), P(A=H∧D=2), P(A=T∧D=1)?
Answering your direct question: 1⁄2 for all three.
That’s because (A=H∧D=1) and (A=H∧D=2) are actually the same outcome. P(D=1) = 1; P(D=2) = 1⁄2.
Here (D=1) doesn’t mean “this awakening is happening on the first day of the experiment” but rather “an awakening on the first day of the experiment has happened”. Likewise with (D=2). (D=1) and (D=2) are not mutually exclusive here, but intersecting: if (D=2) is true then (D=1) is always true as well. And if you want to talk specifically about “this awakening is happening on the first day of the experiment”, then such a probability is undefined for the Sleeping Beauty setting.
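On this per-iteration reading the numbers can be enumerated directly. A sketch, using this thread’s convention that Heads means awakenings on both days:

```python
from fractions import Fraction as F

coin = {'H': F(1, 2), 'T': F(1, 2)}
awake_days = {'H': {1, 2}, 'T': {1}}  # this thread's convention: two awakenings on Heads

def P(event):
    """Probability, per iteration of the experiment, that `event` holds."""
    return sum(p for c, p in coin.items() if event(c))

print(P(lambda c: 1 in awake_days[c]))               # P(D=1) = 1
print(P(lambda c: 2 in awake_days[c]))               # P(D=2) = 1/2
print(P(lambda c: c == 'H' and 1 in awake_days[c]))  # P(A=H ∧ D=1) = 1/2
```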
If it gave you more questions than answers—I’m hopefully going to answer them all when I finish my next couple of posts, where I attempt to rigorously justify all of it. For now, you can just notice that the double halfer position doesn’t have problems with betting, doesn’t update out of nowhere, and doesn’t allow you to arbitrarily manipulate your probabilities by precommitments.
And if you want to talk specifically about “this awakening is happening on the first day of the experiment”, then such a probability is undefined for the Sleeping Beauty setting.
Yeah, I don’t know if “undefined” is a good answer.
To be fair, there are some decision-theoretic situations where “undefined” is a good answer. For example, let’s say Alice wakes up with amnesia on 10 consecutive days, and each day she’s presented with a choice of envelope A or envelope B, one of which contains money. And she knows that whichever envelope she chooses on day 1, the experimenter will put money in the other envelope on days 2-10. This case is truly undefined: the contents of the envelopes on the desk in front of Alice are eerily dependent on how Alice will make the choice. For example, if she always chooses envelope A, then she should believe that the money is in envelope A with probability 10% and in B with probability 90%. But she can’t use that knowledge to say “oh I’ll choose B then”, because that’ll change the probabilities again.
But the Sleeping Beauty problem is not like that. Alice doesn’t make any decisions during the experiment that could feed back into her probabilities. If each day we put a sealed envelope in front of Alice, containing a note saying which day it is, then Alice really ought to have some probability distribution over what’s in the envelope. Undefined doesn’t cut it for me yet. Maybe I should just wait for your post :-)
Why “this awakening is happening during Monday” isn’t a valid event indeed requires some careful justification, and I wasn’t planning to go into more detail in this comment section. But the example of an undefined event you brought up is actually very helpful for getting the right intuition. Because yes, even though it’s less obvious, the Sleeping Beauty problem is very much like this.
Let’s look again at the envelope experiment. There are two outcomes for money placement: ABBBBBBBBB and BAAAAAAAAA. If you run the experiment and write down where the money was at each awakening, you will always notice these long runs of As or Bs. If we do the same in a different experimental setting, where at every awakening there is some non-zero chance that the money is in either envelope, we won’t always observe such a streak of identical values.
The exact same behaviour shows up with Alice’s and Bob’s awakenings. Run the experiment multiple times and write down on which day the awakenings happen. Bob will have his Monday and Tuesday awakenings going in random order, while Alice will always have her Tuesday awakening preceded by a Monday awakening. Compare experiments with more awakenings, for example where Alice wakes up on every day of the week on Heads while Bob wakes up on one random day of the week, and it will be even more obvious.
Just like money will always be placed in envelope B on day two when Alice picks envelope A on day one, when Alice wakes up on Heads&Monday her next awakening will always happen on Heads&Tuesday. In both cases, the previous awakening affects the future one.
Hmm. But in the envelope experiment, once Alice commits to a decision (e.g. choose A), her probabilities are well-defined. So in Sleeping Beauty, if we make it so the day is automatically disclosed to Alice at 5pm let’s say, it seems like her probabilities about it should be well-defined from the get go. Or at least, the envelope experiment doesn’t seem to shed light why they should be undefined. Am I missing something?
Hmm. But in the envelope experiment, once Alice commits to a decision (to e.g. choose A), the probabilities are well-defined. So in Sleeping Beauty, if the day is automatically disclosed to Alice at 5pm let’s say, it seems like her probabilities about it should be well-defined from the get go.
Do you mean that conditional probabilities should be well defined? They indeed are.
P(Heads|Monday) = 1⁄2; P(Heads|Tuesday) = 1. But as P(Monday) and P(Tuesday) are not defined, you can’t use them to arrive at P(Heads&Monday) and P(Heads&Tuesday) via Bayes’ theorem.
If you say things like “P(X|Y) is defined but P(Y) isn’t”, doesn’t that call for a reformulation of all probability theory? Like, if I take the interpretation of probability theory based on sigma-algebras (which is quite popular), then P(Y) gotta be defined, no way around it. The very definition of P(X|Y) depends on P(X∧Y) and P(Y). You can say “let’s kick out this leg from this table”, but the math tells me pretty insistently that the table can’t stand without that particular leg. Or at least, if there’s a version of probability theory where P(Y) can be undefined but P(X|Y) defined, I’d want to see more details about that theory and how it doesn’t trip over itself. Does that make sense?
This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be interpreted as “the probability of B occurring multiplied by the probability of A occurring, provided that B has occurred, is equal to the probability of the A and B occurrences together, although not necessarily occurring at the same time”. Additionally, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this “multiplication rule” can be practically useful in computing the probability of
and introduces a symmetry with the summation axiom for Poincaré Formula
Not sure I understand. My question was, what kind of probability theory can support things like “P(X|Y) is defined but P(Y) isn’t”. The snippet you give doesn’t seem relevant to that, as it assumes both values are defined.
The kind of probability theory that defines P(X|Y) axiomatically as a primitive entity and only then defines P(X&Y) as a multiplication of P(X|Y) and P(Y), instead of defining conditional probability as a ratio between P(X&Y) and P(Y).
While mathematically equivalent, the former method better resembles the way people deal with probabilities in practice—usually the conditional probability is known and the probability of an intersection isn’t—and it formally allows us to talk about conditional probabilities even when the probability of the event we condition on is not defined.
I think this highlights our difference, at least in the numerical sense, in this example. I would say Alice and Bob would disagree (provided Alice is a halfer, which is the correct answer in my opinion). The disagreement is again based on perspective-based self-identification. From Alice’s perspective, there is an inherent difference between “today’s awakening” and “the other day’s awakening” (provided there are actually two awakenings). But to Bob, either of those is “today’s awakening”; Alice cannot communicate the inherent difference from her perspective to Bob.
In other words, after waking up during the experiment, the two alternatives are “I see Bob today” and “I do not see Bob today”, both at 0.5 chance regardless of the coin toss result.
I think this highlights our difference at least in the numerical sense in this example.
Yes! This is one of the few objective disagreements we have and I’m very excited to figure it out!
You seem to treat different awakenings of Alice as if they were different people, in an attempt to preserve the similarity between memory-erasure sleeping-beauty-type problems and fissure-type problems. Whereas I notice that these problems are different.
The difference is that in Sleeping Beauty P(Heads|Monday) = 1⁄2, while in Fissure, where the non-fissured person is always in Room1 and fissured people are randomly assigned either Room1 or Room2, P(Heads|Room1) = 2⁄3. Is this our crux?
I maintain that the memory erasure and fission problems are similar because I regard the first-person identification as applying equally to both questions. Both the inherent identifications of “NOW” and “I” are based on the primitive perspective. I.e., to Alice, today’s awakening is not the other day’s awakening; she can naturally tell them apart because she is experiencing the one today.
I don’t think our difference comes from the non-fissured person always staying in Room1 while the fissured people are randomly assigned either Room1 or Room2. Even if the experiment is changed so that the non-fissured person is randomly assigned one of the two rooms, while the fissured person with the original left body always stays in Room1 and the one with the original right body always in Room2, my answer wouldn’t change.
Our difference still lies in the primitivity of perspective. In this current problem by cousin-it, I would say Alice should not update the probability after meeting Bob, because from her first-person perspective the only thing she can observe is “I see Bob (today)” vs “I don’t see Bob (today)”, and her probability shall be calculated accordingly. She is not in the vantage point to observe “I see Bob on one of the two days” vs “I don’t see Bob on either of the two days”, so she should not update that way.
to Alice, today’s awakening is not the other day’s awakening; she can naturally tell them apart because she is experiencing the one today.
Well, sure, but nothing prevents her from also realizing that both of the awakenings are happening to her, not to some other person. Both today’s and tomorrow’s awakenings are causally connected to each other even if she has her memory erased, contrary to the fissure problem, where there are actually two different people in two rooms, each with their own causal history henceforth.
I would say Alice should not update the probability after meeting Bob, because from her first-person perspective the only thing she can observe is “I see Bob (today)” vs “I don’t see Bob (today)”, and her probability shall be calculated accordingly. She is not in the vantage point to observe “I see Bob on one of the two days” vs “I don’t see Bob on either of the two days”, so she should not update that way.
Alice is indeed unable to observe the event “I didn’t see Bob at all”. Due to the memory erasure she can’t distinguish between “I don’t observe Bob today but will observe him tomorrow / observed him yesterday” and “I don’t observe Bob in this experiment at all”. So when Alice doesn’t see Bob, she keeps her credence at 50%.
But why doesn’t she also observe “I see Bob on one of the two days”, if she sees Bob on a specific day? Surely today is one of the two days. This seems like a logical necessity.
Suppose there is no Bob. Suppose:
The Beauty is awakened on Monday with 50% chance. If she wasn’t awakened, a fair coin is tossed, and on Tails the Beauty is awakened on Tuesday.
Do you also think that the Beauty isn’t supposed to update in favor of Tails when she awakes in this case?
This post highlights my problem with your approach: I just don’t see a clear logic dictating which interpretation to use in a given problem—whether it’s the specific first-person instance or any instance in some reference class.
When Alice meets Bob, you are saying she should construe it as “I meet Bob in the experiment (on any day)” instead of “I meet Bob today” because “both awakenings are happening to her, not another person”. This personhood continuity, in your opinion, is based on what? Given you have distinguished the memory erasure problem from the fission problem, I would venture to guess you identify personhood by the physical body. If that’s the case, would it be correct to say you regard anthropic problems utilizing memory erasures as fundamentally different from problems with fissures or clones? Entertain me this: what if the exact procedure is not disclosed to you, then what? E.g. there is a chance that the “memory erasure” is actually achieved by creating a clone of Alice, waking that clone on Monday, then destroying it, and waking the original on Tuesday. What would Alice’s probability calculation be then? Does anything change if fissure is used instead of cloning? What would Alice’s probability of Tails be when she sees Bob, if she is unsure of the exact procedure?
Furthermore, you are holding that if she saw Bob, Alice should interpret it as “I have met Bob (on some day) in the experiment”. But if she didn’t see Bob, she shall interpret it as “I haven’t met Bob specifically today”. In other words, whether to use “specifically today” or “some day” depends on whether or not she sees Bob. Does this not seem problematic at all to you?
I’m not sure what you mean in your example. The Beauty is awakened on Monday with 50% chance; if she is awakened, then what happens? Nothing? The experiment just ends, perhaps with an inconsequential fair coin toss anyway? If she is not awakened, then if the coin toss is Tails she wakes on Tuesday? Is that the setup? I fail to see any anthropic elements in this question at all. Of course I would update the probability to favour Tails in this case upon awakening, because that is new information for me: I wasn’t sure that I would find myself awake during the experiment at all.
This personhood continuity, in your opinion, is based on what?
Causality. Two time states of a single person are causally connected, while two clones are not. Probability theory treats independent and non-independent events differently. The fact that this fits the basic intuition for personal identity is a nice bonus.
If that’s the case, would it be correct to say you regard anthropic problems utilizing memory erasures as fundamentally different from problems with fissures or clones?
Yes, it would. I find it quite unfortunate that these problems are put in the same category of “anthropic problems”, as they have testably different probability-theoretic properties. For example, for Sleeping Beauty the correct position is double halfism, while for fissure it’s Lewisian halfism.
Entertain me this: what if the exact procedure is not disclosed to you, then what? E.g. there is a chance that the “memory erasure” is actually achieved by creating a clone of Alice, waking that clone on Monday, then destroying it, and waking the original on Tuesday. What would Alice’s probability calculation be then? Does anything change if fissure is used instead of cloning? What would Alice’s probability of Tails be when she sees Bob, if she is unsure of the exact procedure?
Okay, that sounds like an interesting problem. Let’s formulate it like this:
Alice is put to sleep, then a coin is tossed. On Heads she is awakened on Monday. On Tails another coin is tossed:
Either she is awakened both on Monday and on Tuesday with memory erasure
Or fissure happens. Alice1 is awakened on Monday, Alice2 is awakened on Tuesday
What do we have, probability-wise, on an awakening on an unknown day?
50% for Heads, 50% for Tails, 25% fissure, 25% memory erasure, 12.5% to be Alice1/Alice2
Now, suppose Alice meets Bob, who is awakened on a random day. Bob updates to 2⁄3 in favor of Tails, as he meets an Alice in the experiment with 75% probability.
But for a particular Alice the probability to meet Bob in the experiment is only 1⁄4 + 2⁄8 + 1⁄8 = 5⁄8
So her probability that the initial coin is Heads: P(Heads|MeetsBob) = P(MeetsBob|Heads)P(Heads)/P(MeetsBob) = 1⁄2 ∗ 1⁄2 ∗ 8⁄5 = 2⁄5
Now, I think in this particular case there is not much difference between fissure and cloning. There would apparently be a difference if we were talking about a person who is about to participate in the experiment, instead of a person in the middle of it. A current participant can be in a state of uncertainty about whether she is a clone, while a future participant is pretty sure that she is not going to be the clone and thus can omit this possibility from the calculations.
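A simulation of this combined setup, as I’ve formulated it, might look like this (a sketch; on fissure it picks at random which of the two resulting Alices counts as “this particular Alice”):

```python
import random

def heads_rate_when_meeting(trials=200_000, seed=3):
    """Fraction of Heads among runs where this particular Alice meets Bob."""
    rng = random.Random(seed)
    meetings = heads_meetings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        bob_day = rng.choice([1, 2])         # Bob is awakened on a random day
        if heads:
            her_days = {1}                   # Heads: awakened on Monday only
        elif rng.random() < 0.5:
            her_days = {1, 2}                # Tails, memory erasure: same Alice both days
        else:
            her_days = {rng.choice([1, 2])}  # Tails, fissure: she is Alice1 or Alice2
        if bob_day in her_days:
            meetings += 1
            heads_meetings += heads
    return heads_meetings / meetings

print(heads_rate_when_meeting())  # ≈ 2/5
```

The meeting rates per branch are 1⁄4, 1⁄4 and 1⁄8, totalling 5⁄8, of which the Heads share is 1⁄4, giving 2⁄5.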
But yeah, I should probably write a separate post about such scenarios, after I’m done with the Sleeping Beauty case.
Furthermore, you are holding that if she saw Bob, Alice should interpret it as “I have met Bob (on some day) in the experiment”. But if she didn’t see Bob, she shall interpret it as “I haven’t met Bob specifically today”. In other words, whether to use “specifically today” or “some day” depends on whether or not she sees Bob. Does this not seem problematic at all to you?
As a matter of fact, it doesn’t. You seem to be thinking that I’m switching between two different mathematical models here. But actually, we can use a single probability space.
“I see Bob in the experiment” is equal to “I see Bob on either Monday or Tuesday”; it’s an event that consists of two outcomes: “seeing Bob on Monday” and “seeing Bob on Tuesday”. When an outcome is realized, every event that this outcome is part of is realized. So when Alice sees Bob on Monday, she observes both “I see Bob on Monday” and “I see Bob in the experiment”. And likewise when Alice sees Bob on Tuesday. Just one observation of Bob on any day of the experiment is enough to be certain that Bob was observed on either Monday or Tuesday.
On the other hand, “I don’t see Bob in the experiment” happens only when Bob was observed neither on Monday nor on Tuesday. Not observing him on just one day isn’t enough; to observe this event Alice has to accumulate information across both days.
All this is true, regardless of whether there is memory erasure or not. What is different with memory erasure is that now Alice is made unable to accumulate information between days. So she can’t observe event “I don’t see Bob in the experiment”. However, she is still perfectly able to observe event “I see Bob in the experiment”. She is supposed to update her credence for Heads based on it. And until her memory is erased she can act on this information.
What is problematic, on the other hand, are the “today”, “this awakening” and similar categories, which can’t be formally mathematically specified in Sleeping Beauty. This is the reason why the probability of the event “today is Monday” is undefined: “today” is not just some variable that takes a specific value from {Monday, Tuesday}; on Tails it has to be both! It’s not a fixed thing throughout the experiment, and reasoning as if it were leads to confusion and paradoxes.
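To make the single-probability-space point concrete, here’s a toy enumeration (a sketch: an outcome is a pair of Alice’s coin and the day Bob is awake, with Alice awake on both days on Heads and only on Monday on Tails):

```python
from fractions import Fraction as F

# four equiprobable outcomes: Alice's coin x the day Bob is awake
prob = {('H', 1): F(1, 4), ('H', 2): F(1, 4), ('T', 1): F(1, 4), ('T', 2): F(1, 4)}
alice_days = {'H': {1, 2}, 'T': {1}}

sees_bob_monday = {o for o in prob if o[1] == 1 and 1 in alice_days[o[0]]}
sees_bob_tuesday = {o for o in prob if o[1] == 2 and 2 in alice_days[o[0]]}
sees_bob = sees_bob_monday | sees_bob_tuesday  # "I see Bob in the experiment": the union

p_sees = sum(prob[o] for o in sees_bob)
p_heads_given_sees = sum(prob[o] for o in sees_bob if o[0] == 'H') / p_sees
print(p_sees, p_heads_given_sees)  # 3/4 2/3
```

The union event has probability 3⁄4 and conditioning on it gives 2⁄3 for Heads, matching the update from my earlier comment.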
I fail to see any anthropic elements in this question at all. Of course I would update the probability to favour Tails in this case upon awakening, because that is new information for me: I wasn’t sure that I would find myself awake during the experiment at all.
As I keep saying, this whole “anthropic problems” category is silly to begin with. All of these are just plain probability theory problems, and these two problems are isomorphic to each other. If being awakened on Tails is twice as likely as being awakened on Heads, awakening is evidence in favor of Tails. If meeting Bob is twice as likely on Tails as on Heads, then meeting Bob is evidence in favor of Tails. The same basic principle that gives you the answer in one problem gives you the answer in the other. You don’t need to search for any “anthropic elements” in these problems. The math works the same way.
That’s actually not the case. Halfer Alice agrees with Bob that her coin came Heads with 2⁄3 probability when they meet. Meeting with Bob is a new evidence that she couldn’t be certain to expect. She knows that when the coin is Heads this event has 100% probability, while on Tails it’s only 50%. So she updates as normal. And indeed, if we repeat the experiment multiple times and write down the state of the coin everytime Alice meets Bob, about 2⁄3 of them will be Heads.
Thirder Alice, on the contrary, doesn’t update at meeting Bob at all. She has already “updated on awakening” that the coin is 2⁄3 Heads and ignores new evidence. This serves her good when she meets Bob, in the same sense that a broken clock is right twice a day. But in a general case about 1⁄2 of coin tosses are Heads.
Suppose Alice is a thirder and doesn’t write herself a note. Now she has two awakenings on both Heads and Tails which she can’t destinguish between and thus agrees with a halfer Alice. But why stop here? Suppose that on Tails she recreates not one but two awakenings. Now thirder Alice will have 3⁄5 credence in favour of Tails. This way a thirder Alice can make herself arbitrary confident in the result of the coin toss just by a precommitment! Such shenanigans, however, will not work with a halfer.
I might be missing something in your argument, but I think in my setup as stated, it should be 50% in both cases. When Alice’s coin is heads, she wakes up on both days, but Bob wakes up on only one of them, depending on his own coin. So no matter if Alice is a halfer or a thirder, meeting Bob doesn’t give her any new information about her coin. While Bob, in case of meeting Alice, does update to 2⁄3 about Alice’s coin. So if the Alice he’s meeting is a halfer, they have an unresolvable disagreement about her coin.
Yeah, also known as “algorithm for winning the lottery”: precommit to make many copies of yourself if you win. I guess we thirders have learned to live with it.
Think about it in terms of what happens in the experiment as a whole—that’s what halfing is about. If the coin is Heads Alice always meets Bob, either on Monday or on Tuesday, depending on when Bob is awake. If the coin is Tails Alice can meet Bob only on Monday, which happens only 50% of time. Run the experiment multiple times and there will be iterations of it where Alice and Bob do not meet and all of them happen when the coin is Tails.
That’s how halfer Alice thinks:
I’m awake. But this doesn’t give me any new information because I was expecting to be awake in this experiment with 100%, regardless of the outcome of the coin toss:
P(Heads|Awake)=P(Awake|Heads)P(Heads)/P(Awake)=1∗1/2∗1=1/2
Oh, hi Bob! Now this doesn’t happen every experiment. This is new evidence in favour of Heads, so I update:
P(Heads|MeetsBob)=P(MeetsBob|Heads)P(Heads)/P(MeetsBob)=1∗1/2∗4/3=2/3
This is the same update that Bob makes when he meets Alice. On the other hand, Thirder Alice belives that probability for Heads is 2⁄3 even before she meets Bob. Which makes her overconfident in the wrong answer when the meeting doesn’t happen and the coin is Tails.
Likewise, you can remove Bob from the problem alltogether and just make it so Alice has only 50% chance to be awake on Monday, and if it didn’t happen and the coin is Heads she will be Awake on Tuesday. Here awakening is new evidence because it doesn’t happen in every experiment.
It seems to me that the correct Bayesian updating is a bit different.
Let’s denote Alice and Bob’s coins as A and B, each taking values H or T, and denote the current day as D, taking values 1 or 2. Then, just after waking up but before learning whether Bob is awake, Halfer Alice has this prior: P(A=H∧D=1) = 1⁄4, P(A=H∧D=2) = 1⁄4, P(A=T∧D=1) = 1⁄2, and independently P(B=H) = P(B=T) = 1⁄2.
After that, meeting Bob gives new her information N = (A=H∧D=1∧B=H) ∨ (A=H∧D=2∧B=T) ∨ (A=T∧D=1∧B=H). These are three mutually exclusive clauses, and we can compute each of them according to Alice’s prior above: P(A=H∧D=1∧B=H) = 1⁄4 * 1⁄2 = 1⁄8, P(A=H∧D=2∧B=T) = 1⁄4 * 1⁄2 = 1⁄8, P(A=T∧D=1∧B=H) = 1⁄2 * 1⁄2 = 1⁄4. The probability mass of N is split equally between A=H and A=T, so observing N shouldn’t make Halfer Alice update about her coin.
You are describing Lewisian Halfer’s model. It indeed produces incorrect results for Sleeping Beauty, I’m currently working on a post that is going to dive deeper into it, but for now, suffice to say, that according to it P(Heads|Monday) = 3⁄2, which is clearly wrong. It also fails at betting.
Correct halfer position for Sleeping Beauty, also known as double halfing, claims that both P(Heads|Awake) = 1⁄2 and P(Heads|Monday) = 1⁄2. Once again, I’m planning to have a deep dive into how it is possible and why it’s the right answer in a separate post. For now you can just see that it’s about probabilities averaged per iteration of experiment, not per awakening and that my comment is talking about how a Beauty, who is this kind of halfer, is supposed to reason.
That sentence confuses me. The formulas from my comment imply that P(A=H|D=1) = 1⁄3 and P(A=T|D=1) = 2⁄3, which looks like an ok halfer position (modulo the fact that I accidentally swapped heads and tails in the very first comment and now am still sticking to that for coherence—sorry!)
About the double halfer position, not sure I understand it. What probabilities does it give for these three conjunctions: P(A=H∧D=1), P(A=H∧D=2), P(A=T∧D=1)? It seems to me (maybe naively) that these three numbers should be enough, any conditionals can be calculated from them by Bayes’ theorem.
Think about it. In Sleeping Beauty, the Monday awakening always happens. So the coin toss can be made after this awakening. If 2⁄3 had been a correct estimate, the Beauty would have been able to predict the future coin toss better than chance. Which would’ve been… quite peculiar, to say the least. Of course, one can also just run a simulation of the experiment and check.
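A quick simulation sketch of that check; the logic is just the sentence above: the toss happens after the Monday awakening, which occurs in every run, so conditioning on it cannot move the frequency off 1⁄2:

```python
import random

# The Monday awakening happens in every run; the coin is tossed only
# afterwards, so conditioning on "a Monday awakening happened" selects
# every run and the frequency of Heads stays at 1/2.
random.seed(0)
monday_runs = heads_on_monday = 0
for _ in range(100_000):
    monday_awakening = True           # always happens in Sleeping Beauty
    coin = random.choice(["H", "T"])  # tossed after the Monday awakening
    if monday_awakening:
        monday_runs += 1
        heads_on_monday += coin == "H"

freq = heads_on_monday / monday_runs
print(freq)  # ≈ 0.5, not 2/3
```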
Lewisian Halfers, however, are right for the Fissure mind experiment, where a person is either left in Room1 or split into two people, one of whom goes to Room1 and the other to Room2, at random. In such an experiment you can’t be certain to be in Room1, so being in Room1 is indeed more likely if there was no splitting. And if a visitor comes to a random room and you meet there, you will indeed have a disagreement about probabilities.
Answering your direct question: 1⁄2 for all three.
That’s because (A=H∧D=1) and (A=H∧D=2) are actually the same outcome. P(D=1) = 1; P(D=2) = 1⁄2.
Here (D=1) doesn’t mean “this awakening is happening on the first day of the experiment” but rather “an awakening on the first day of the experiment has happened”. Likewise with (D=2). (D=1) and (D=2) are not mutually exclusive here, but overlapping. If (D=2) is true then (D=1) is also always true.
And if you want to talk specifically about “this awakening is happening in the first day of the experiment”, then such probability is undefined for the Sleeping Beauty setting.
If this gives you more questions than answers, I’m hopefully going to answer them all when I finish my next couple of posts, where I attempt to rigorously justify all of it. For now you can just notice that the double halfer position doesn’t have problems with betting, doesn’t update out of nowhere, and doesn’t allow you to arbitrarily manipulate your probabilities by precommitments.
Yeah, I don’t know if “undefined” is a good answer.
To be fair, there are some decision-theoretic situations where “undefined” is a good answer. For example, let’s say Alice wakes up with amnesia on 10 consecutive days, and each day she’s presented with a choice of envelope A or envelope B, one of which contains money. And she knows that whichever envelope she chooses on day 1, the experimenter will put money in the other envelope on days 2-10. This case is truly undefined: the contents of the envelopes on the desk in front of Alice are eerily dependent on how Alice will make the choice. For example, if she always chooses envelope A, then she should believe that the money is in envelope A with probability 10% and in B with probability 90%. But she can’t use that knowledge to say “oh I’ll choose B then”, because that’ll change the probabilities again.
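One reading of this setup that reproduces the 10%/90% figure is that on day 1 the money sits in whichever envelope Alice actually picks, and on days 2–10 in the other one; that day-1 detail is an assumption on my part, since the description above doesn’t spell it out. A sketch:

```python
# Sketch of the 10-day envelope experiment, under the assumption that the
# day-1 money matches Alice's day-1 choice, and days 2-10 hold the other one.
def placements(day1_choice):
    other = "B" if day1_choice == "A" else "A"
    return [day1_choice] + [other] * 9

# If Alice's policy is "always choose A":
seq = placements("A")
print("".join(seq))               # ABBBBBBBBB
print(seq.count("A") / len(seq))  # 0.1 -> money in A on 10% of awakenings
```

Under this reading, committing to "always A" pins the whole placement sequence, which is exactly the eerie dependence described above.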
But the Sleeping Beauty problem is not like that. Alice doesn’t make any decisions during the experiment that could feed back into her probabilities. If each day we put a sealed envelope in front of Alice, containing a note saying which day it is, then Alice really ought to have some probability distribution over what’s in the envelope. Undefined doesn’t cut it for me yet. Maybe I should just wait for your post :-)
Why “this awakening is happening during Monday” isn’t a valid event indeed requires some careful justification, and I wasn’t planning to go into more detail in this comment section. But the example of an undefined event you brought up is actually very helpful for getting the right intuition. Because yes, even if it’s less obvious, the Sleeping Beauty problem is very much like this.
Let’s look again at the envelope experiment. There are two outcomes for money placement: ABBBBBBBBB and BAAAAAAAAA. If you run the experiment and write down where the money was at each awakening, you will always notice these long runs of As or Bs. If we do the same in a different experimental setting, where on every awakening there is some non-zero chance that the money is in either envelope A or envelope B, we won’t always be sure to observe such a streak of identical values.
The exact same behaviour shows up with Alice and Bob’s awakenings. Run the experiment multiple times and write down which day each awakening happens. Bob will have his Monday and Tuesday awakenings occurring in random order, while Alice’s Tuesday awakening will always be preceded by a Monday awakening. Compare experiments with more awakenings, for example where Alice wakes up on every day of the week on Heads and Bob wakes up on a random day of the week, and it will be even more obvious.
Just as the money will always be placed in envelope B on day two when Alice picks envelope A on day one, when Alice wakes up on Heads&Monday her next awakening will always happen on Heads&Tuesday. In both cases, the previous awakening affects the future awakening.
Hmm. But in the envelope experiment, once Alice commits to a decision (e.g. choose A), her probabilities are well-defined. So in Sleeping Beauty, if we make it so the day is automatically disclosed to Alice at 5pm let’s say, it seems like her probabilities about it should be well-defined from the get go. Or at least, the envelope experiment doesn’t seem to shed light why they should be undefined. Am I missing something?
Do you mean that conditional probabilities should be well defined? They indeed are.
P(Heads|Monday) = 1⁄2; P(Heads|Tuesday) = 1. But as P(Monday) and P(Tuesday) are not defined, you can’t use them to arrive at P(Heads&Monday) and P(Heads&Tuesday) via Bayes’ theorem.
If you say things like “P(X|Y) is defined but P(Y) isn’t”, doesn’t that call for a reformulation of all probability theory? Like, if I take the interpretation of probability theory based on sigma-algebras (which is quite popular), then P(Y) gotta be defined, no way around it. The very definition of P(X|Y) depends on P(X∧Y) and P(Y). You can say “let’s kick out this leg from this table”, but the math tells me pretty insistently that the table can’t stand without that particular leg. Or at least, if there’s a version of probability theory where P(Y) can be undefined but P(X|Y) defined, I’d want to see more details about that theory and how it doesn’t trip over itself. Does that make sense?
Sure. But this has already been done, and it took much less trouble than you might have thought. Citing Wikipedia on Conditional Probability:
Not sure I understand. My question was, what kind of probability theory can support things like “P(X|Y) is defined but P(Y) isn’t”. The snippet you give doesn’t seem relevant to that, as it assumes both values are defined.
The kind of probability theory that defines P(X|Y) axiomatically as a primitive entity, and only then defines P(X&Y) as the product of P(X|Y) and P(Y), instead of defining conditional probability as the ratio of P(X&Y) to P(Y).
While it’s mathematically equivalent, the former method more closely resembles the way people deal with probabilities in practice—usually the conditional probability is known and the probability of the intersection isn’t—and it formally allows us to talk about conditional probabilities even when the probability of the event we condition on is not defined.
I think this highlights our difference, at least in the numerical sense, in this example. I would say Alice and Bob would disagree (provided Alice is a halfer, which is the correct answer in my opinion). The disagreement is again based on perspective-based self-identification. From Alice’s perspective, there is an inherent difference between “today’s awakening” and “the other day’s awakening” (provided there are actually two awakenings). But to Bob, either of those is “today’s awakening”; Alice cannot communicate the inherent difference from her perspective to Bob.
In other words, after waking up during the experiment, the two alternatives are “I see Bob today” or “I do not see Bob today”, both with 0.5 chance regardless of the coin toss result.
Yes! This is one of the few objective disagreements we have and I’m very excited to figure it out!
You seem to treat different awakenings of Alice as if they were different people, in an attempt to preserve the similarity between memory-erasure problems of the Sleeping Beauty type and fissure-type problems. Whereas I notice that these problems are different.
The difference is that in Sleeping Beauty P(Heads|Monday) = 1⁄2 while in Fissure, where non-fissured person is always in Room1 and fissured people are randomly assigned either Room1 or Room2, P(Heads|Room1) = 2⁄3. Is it our crux?
I maintain that memory erasure and the fission problem are similar because I regard the first-person identification as applying equally to both questions. Both the inherent identifications of “NOW” and “I” are based on the primitive perspective. I.e., to Alice, today’s awakening is not the other day’s awakening; she can naturally tell them apart because she is experiencing the one today.
I don’t think our difference comes from the non-fissured person always staying in Room1 while the fissured people are randomly assigned either Room 1 or Room 2. Even if the experiment is changed, so that the non-fissured person is randomly assigned one of the two rooms, while the fissured person with the original left body always stays in Room 1 and the one with the original right body always in Room 2, my answer wouldn’t change.
Our difference still lies in the primitivity of perspective. In this current problem by cousin-it, I would say Alice should not update the probability after meeting Bob, because from her first-person perspective, the only thing she can observe is “I see Bob (today)” vs “I don’t see Bob (today)”, and her probability shall be calculated accordingly. She is not in the vantage point to observe whether “I see Bob on one of the two days” vs “I don’t see Bob on any of the two days”, so she should not update that way.
Well, sure, but nothing is preventing her from also realizing that both of the awakenings are happening to her, not some other person. Both today’s and tomorrow’s awakenings are causally connected to each other even if she has her memory erased, contrary to the fissure problem where there are actually two different people in two rooms, each with their own causal history henceforth.
Alice is indeed unable to observe the event “I didn’t see Bob at all”. Due to the memory erasure she can’t distinguish between “I don’t observe Bob today but will observe him tomorrow/observed him yesterday” and “I do not observe Bob in this experiment at all”. So when Alice doesn’t see Bob she keeps her credence at 50%.
But why doesn’t she also observe “I see Bob on one of the two days”, if she sees Bob on a specific day? Surely today is one of the two days. This seems like logical necessity.
Suppose there is no Bob. Suppose:
The Beauty is awakened on Monday with 50% chance. If she wasn’t awakened a fair coin is tossed. On Tails the Beauty is awakened on Tuesday.
Do you also think that the Beauty isn’t supposed to update in favor of Tails when she awakes in this case?
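For what it’s worth, here’s a simulation sketch of this variant, under the reading that the fair coin is tossed in every run rather than only when the Monday awakening fails to happen (that reading is an assumption on my part):

```python
import random

# Awake on Monday with 50% chance; if the Monday awakening didn't happen,
# she is awakened on Tuesday only when the coin is Tails. The coin here is
# tossed in every run (an assumption about the setup).
random.seed(0)
awake_tails = awake_total = 0
for _ in range(100_000):
    coin = random.choice(["H", "T"])
    awake = random.random() < 0.5 or coin == "T"
    if awake:
        awake_total += 1
        awake_tails += coin == "T"

ratio = awake_tails / awake_total
print(ratio)  # ≈ 2/3: awakening here is evidence in favor of Tails
```

Analytically: P(awake) = 3⁄4 and P(Tails ∧ awake) = 1⁄2, so P(Tails|awake) = 2⁄3.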
This post highlights my problem with your approach: I just don’t see a clear logic dictating which interpretation to use in a given problem—whether it’s the specific first-person instance or any instance in some reference class.
When Alice meets Bob, you are saying she should construe it as “I meet Bob in the experiment (on any day)” instead of “I meet Bob today” because “both awakenings are happening to her, not another person”. This personhood continuity, in your opinion, is based on what? Given that you have distinguished the memory erasure problem from the fission problem, I would venture to guess you identify personhood by the physical body. If that’s the case, would it be correct to say you regard anthropic problems utilizing memory erasure as fundamentally different from problems with fissures or clones? Entertain me this: what if the exact procedure is not disclosed to you, then what? E.g. there is a chance that the “memory erasure” is actually achieved by creating a clone of Alice, waking that clone on Monday, then destroying it, and waking the original on Tuesday. What would Alice’s probability calculation be then? Does anything change if fissure is used instead of cloning? What would Alice’s probability of Tails be when she sees Bob, if she is unsure of the exact procedure?
Furthermore, you are holding that if she saw Bob, Alice should interpret it as “I have met Bob (on some day) in the experiment”. But if she didn’t see Bob, she shall interpret it as “I haven’t met Bob specifically today”. In other words, whether to use “specifically today” or “someday” depends on whether or not she sees Bob. Does this not seem problematic at all to you?
I’m not sure what you mean in your example. Beauty is awakened on Monday with 50% chance; if she is awakened, then what happens? Nothing? The experiment just ends, perhaps with an inconsequential fair coin toss anyway? If she is not awakened, then if the coin toss is Tails she wakes on Tuesday? Is that the setup? I fail to see any anthropic elements in this question at all. Of course I would update the probability to favour Tails in this case upon awakening. Because that is new information for me: I wasn’t sure that I would find myself awake during the experiment at all.
Causality. Two time states of a single person are causally connected, while two clones are not. Probability theory treats independent and non-independent events differently. The fact that this fits the basic intuition for personal identity is a nice bonus.
Yes, it would. I find the fact that these problems are put in the same category of “anthropic problems” quite unfortunate, as they have testably different probability-theoretic properties. For example, for Sleeping Beauty the correct position is double halfism, while for fissure it’s Lewisian halfism.
Okay, that sounds like an interesting problem. Let’s formulate it like this:
Alice is put to sleep, then a coin is tossed. On Heads she is awakened on Monday. On Tails another coin is tossed:
Either she is awakened both on Monday and on Tuesday with memory erasure
Or fissure happens. Alice1 is awakened on Monday, Alice2 is awakened on Tuesday
What do we have, probability-wise, on an awakening on an unknown day?
50% for Heads, 50% for Tails, 25% fissure, 25% memory erasure, 12.5% to be Alice1/Alice2
Now, suppose Alice meets Bob, who is awakened on a random day. Bob updates to 2⁄3 in favor of Tails, as he meets an Alice in the experiment with 75% probability.
But for a particular Alice the probability to meet Bob in the experiment is only 1⁄4 + 2⁄8 + 1⁄8 = 5⁄8
So her probability that the initial coin is Heads:
P(H1|MeetsBob) = P(MeetsBob|H1) · P(H1) / P(MeetsBob) = 1⁄2 · 1⁄2 · 8⁄5 = 40%
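A Monte Carlo sketch of this calculation; in the fissure branch it tracks one of the two Alices chosen at random, which matches the 1⁄8 weights for Alice1 and Alice2 above:

```python
import random

# Heads: Alice is awakened on Monday only. Tails: either memory erasure
# (awake both days) or fissure (Alice1 on Monday, Alice2 on Tuesday), 50/50.
# Bob is awakened on a random day. We track one Alice per run; in the
# fissure branch the tracked Alice is Alice1 or Alice2 at random.
random.seed(0)
bob_meetings, tracked_meetings = [], []
for _ in range(200_000):
    coin = random.choice(["H", "T"])
    bob_day = random.choice(["Mon", "Tue"])
    if coin == "H":
        alice_days = {"Mon"}                   # single Monday awakening
        tracked_days = alice_days
    elif random.random() < 0.5:                # Tails + memory erasure
        alice_days = {"Mon", "Tue"}
        tracked_days = alice_days
    else:                                      # Tails + fissure
        alice_days = {"Mon", "Tue"}
        tracked_days = {random.choice(["Mon", "Tue"])}
    if bob_day in alice_days:
        bob_meetings.append(coin)              # Bob met some Alice
    if bob_day in tracked_days:
        tracked_meetings.append(coin)          # the tracked Alice met Bob

print(bob_meetings.count("T") / len(bob_meetings))          # ≈ 2/3
print(tracked_meetings.count("H") / len(tracked_meetings))  # ≈ 0.4
```

The two printed frequencies correspond to Bob’s 2⁄3 for Tails and the tracked Alice’s 40% for Heads, so the disagreement shows up in the per-run counts as well.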
Now, I think in this particular case there is not much difference between fissure and cloning. There would apparently be a difference if we were talking about a person who was about to participate in the experiment, instead of a person in the middle of it. Because the current participant can be in a state of uncertainty about whether she is a clone or not, while the future participant is pretty sure that she is not going to be a clone, and thus can omit this possibility from the calculations.
But yeah, I should probably write a separate post about such scenarios, after I’m done with the Sleeping Beauty case.
As a matter of fact, it doesn’t. You seem to be thinking that I’m switching between two different mathematical models here. But actually, we can use a single probability space.
“I see Bob in the experiment” is equal to “I see Bob on either Monday or Tuesday”; it’s an event that consists of two outcomes: “seeing Bob on Monday” and “seeing Bob on Tuesday”. When an outcome is realized, every event which this outcome is part of is realized. So when Alice sees Bob on Monday she observes both “I see Bob on Monday” and “I see Bob in the experiment”. And likewise when Alice sees Bob on Tuesday. Just one observation of Bob on any day of the experiment is enough to be certain that Bob was observed on either Monday or Tuesday.
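In code terms (with purely illustrative labels), an event is just a set of outcomes, and a single realized outcome realizes every event containing it:

```python
# Events as sets of outcomes: "I see Bob in the experiment" is the union
# of the Monday-meeting and Tuesday-meeting events.
see_bob_monday = {"meet on Monday"}
see_bob_tuesday = {"meet on Tuesday"}
see_bob_in_experiment = see_bob_monday | see_bob_tuesday

# Realizing the Monday outcome realizes both events it belongs to:
outcome = "meet on Monday"
print(outcome in see_bob_monday)         # True
print(outcome in see_bob_in_experiment)  # True
```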
On the other hand, “I don’t see Bob in the experiment” happens only when Bob was neither observed on Monday, nor on Tuesday. Not observing him only on one day isn’t enough. To observe this event Alice has to accumulate information between two days.
All this is true, regardless of whether there is memory erasure or not. What is different with memory erasure is that now Alice is made unable to accumulate information between days. So she can’t observe event “I don’t see Bob in the experiment”. However, she is still perfectly able to observe event “I see Bob in the experiment”. She is supposed to update her credence for Heads based on it. And until her memory is erased she can act on this information.
What is problematic, on the other hand, are “today”, “this awakening” and similar categories, which can’t be formally mathematically specified in Sleeping Beauty. This is the reason why the probability of the event “today is Monday” is undefined: “today” is not just some variable that takes a specific value from {Monday, Tuesday}; on Tails it has to be both! It’s not a fixed thing throughout the experiment, and reasoning as if it is leads to confusion and paradoxes.
As I keep saying, this whole “anthropic problems” category is silly to begin with. All of these are just plain probability theory problems. And these two problems are isomorphic to each other. If being awakened on Tails is twice as likely as being awakened on Heads, awakening is evidence in favor of Tails. If meeting Bob is twice as likely on Tails as on Heads, then meeting Bob is evidence in favor of Tails. The same basic principle that gives you the answer in one problem gives you the answer to the other. You don’t need to search for any “anthropic elements” in these problems. The math works the same way.