Preamble
Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:
If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.
In either case, she will be awakened on Wednesday without interview and the experiment ends.
Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: “What is your credence now for the proposition that the coin landed heads?”
Motivation
I was recently introduced to the canonical Sleeping Beauty problem and was initially a halfer, but confused. Or rather, I thought the halfer position was correct, but smart people seemed to be thirders, and I was worried that I was misunderstanding something about the problem, had confused myself, or similar.
I debated the problem extensively on the LW Discord server and with some LLMs and strongly updated towards “thirders are just engaging in gross epistemic malpractice”.
A message I sent in the LW server:
Alternatively I started out confused.
Debating this problem here and with LLMs convinced me that I’m not confused and the thirders are actually just doing epistemological nonsense.
It feels arrogant, but it’s not a poor reflection of my epistemic state?
I still have some meta level uncertainty re: the nonsense allegations.
I want to be convinced that the thirder position is not nonsense and there is a legitimate disagreement/debate to be had here.
I have read some of the LW posts on the canonical problem here. I won’t be linking them due to laziness.
I have not yet read Joe Carlsmith’s blog posts or Nick Bostrom’s book as at posting this question. I’ll probably be listening to them after posting the question.
I asked Sonnet 3.5 to distill my position/rejections from our debate and below is its summary[1]
Comprehensive Position on the Sleeping Beauty Problem
1. Core Position
The correct answer to the Sleeping Beauty problem is the Halfer position (1/2 probability for Heads).
The Thirder position (1/3 probability for Heads) is based on misapplications of both Bayesian and frequentist reasoning.
2. Bayesian Reasoning
Probability is a property of the map (agent’s beliefs), not the territory (environment).
For an observation O to be evidence for a hypothesis H, P(O|H) must be > P(O|¬H).
The wake-up event is equally likely under both Heads and Tails scenarios, thus provides no new information to update priors.
The original 50⁄50 probability should remain unchanged after waking up.
3. Frequentist Critique
The Thirder position often relies on a misapplication of frequentist probability.
Key Issues with Frequentist Approach:
- Misunderstanding Indistinguishable Events:
Thirders wrongly treat multiple indistinguishable wake-ups as distinct evidence.
Beauty’s subjective experience is identical whether woken once or a million times.
- Conflating Processes with Outcomes:
Two mutually exclusive processes (Heads: one wake-up, Tails: multiple wake-ups) are incorrectly treated as a single sample space.
Multiple Tails wake-ups collapse into one indistinguishable experience.
- Misapplying Frequentist Logic:
Standard frequentist approach increases sample size with multiple observations.
This logic fails here as wake-ups are not independent data points.
- Ignoring Problem Structure:
Each experiment (coin flip + wake-ups) is one trial.
The coin’s 50⁄50 probability remains unchanged regardless of wake-up protocol.
Counterargument to Thirder Position:
Thirder Claim: “Beauty would find herself in a Tails wake-up twice as often as a Heads wake-up.”
Rebuttal: This incorrectly treats each wake-up as a separate trial, rather than considering the entire experiment as one trial.
4. Self-Locating Beliefs
Self-locating information (which wake-up you’re experiencing) is irrelevant to the coin flip probability.
The question “What is the probability of Heads?” is about the coin, not about your location in time or possible worlds.
5. Anthropic Reasoning Rejection
Anthropic arguments that treat all possible wake-ups as equally likely samples are rejected.
This approach incorrectly combines outcomes from distinct events (coin flip and wake-up protocol).
Expanded Argument:
Anthropic reasoning in this context suggests that Beauty should consider herself as randomly selected from all possible wake-up events.
This reasoning is flawed because:
It treats the wake-up events as the primary random process, when the actual random process is the coin flip.
It conflates the sampling process (how Beauty is woken up) with the event we’re trying to determine the probability of (the coin flip).
Specific Anthropic Argument and Counterargument:
Anthropic Argument: “When Beauty wakes up, she is essentially sampling from the space of all possible wake-ups. There are twice as many Tails wake-ups as Heads wake-ups, so the probability of Heads is 1⁄3.”
Counterargument:
This incorrectly assumes that each wake-up is an independent event, when they are actually dependent on a single coin flip.
It ignores the fact that the probability we’re interested in is that of the coin flip, not the wake-up event.
This reasoning would lead to absurd conclusions if we changed the wake-up protocol (e.g., waking Beauty a million times for Tails would make Heads virtually impossible, which is clearly wrong).
6. Distinguishability vs. Probability
Subjective indistinguishability of events doesn’t imply equal probability of the underlying states.
However, indistinguishability means the events can’t provide evidence for updating probabilities.
7. Betting Strategies vs. Probabilities
Optimal betting strategies (e.g., always bet on Tails) don’t necessarily reflect true probabilities.
Asymmetric payoffs can justify betting on Tails without changing the underlying 50⁄50 probability.
Expanded Argument:
The Sleeping Beauty problem presents a scenario where the optimal betting strategy (always betting on Tails) seems to contradict the claimed 50⁄50 probability. This apparent contradiction is resolved by recognizing that:
Betting strategies can be influenced by factors other than pure probability, such as payoff structures.
The expected value of a bet is not solely determined by the probability of an event, but also by the payoff for each outcome.
In this case, the Tails outcome provides more opportunities to bet, creating an asymmetry in the payoff structure.
Specific Example:
Consider a simplified version of the problem where:
If the coin lands Heads, Beauty is woken once and can bet $1.
If the coin lands Tails, Beauty is woken twice and can bet $1 each time.
A correct bet doubles your money (a winning $1 bet returns $2, i.e. a net gain of $1).
The optimal strategy is to always bet on Tails, because:
Betting on Heads: 50% chance of winning $1 (one bet), 50% chance of losing $2 (two bets) = $0.5 - $1 = -$0.5 expected value
Betting on Tails: 50% chance of winning $2 (betting twice) vs 50% chance of losing $1 = $1 - $0.5 = $0.5 expected value
However, this doesn’t mean the probability of Tails is higher. It’s still 50%, but the payoff structure makes betting on Tails more profitable.
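A quick sanity check of this arithmetic (a minimal sketch added for illustration, assuming the simplified rules above: fair coin, $1 staked at every awakening, and a net gain of $1 per correct bet):

```python
# Expected value per experiment of always betting Heads vs always betting Tails,
# under the simplified setup: Heads -> one awakening/bet, Tails -> two awakenings/bets,
# each correct $1 bet nets +$1 and each incorrect $1 bet nets -$1.

def expected_value(bet_on: str) -> float:
    ev = 0.0
    for outcome, prob, n_bets in [("Heads", 0.5, 1), ("Tails", 0.5, 2)]:
        gain_per_bet = 1 if bet_on == outcome else -1
        ev += prob * gain_per_bet * n_bets
    return ev

print(expected_value("Heads"))  # -0.5
print(expected_value("Tails"))  # +0.5
```

The coin itself stays 50⁄50; the asymmetry comes entirely from Tails worlds containing two bets.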
Analogy to Clarify:
Imagine a fair coin flip where you’re offered the following bet:
If you bet on Heads and win, you get $1.
If you bet on Tails and win, you get $K (where K >> 1, i.e., K is much larger than 1).
The optimal strategy is to bet on Tails every time, even though the coin is fair (50/50).
If you repeat this experiment many times, always betting on Tails will be a winning strategy in the long run.
Despite this, the probability of the coin landing Heads remains 0.5 (50%).
Counterargument to Thirder Position:
Thirders might argue: “The optimal betting strategy aligns with the 1⁄3 probability for Heads.”
Rebuttal: This confuses expected value with probability. The betting strategy is optimal due to the asymmetric nature of the payoffs (betting twice on Tails vs. once on Heads), not because Tails is more likely. The underlying probability of the coin flip remains 50⁄50, regardless of the betting structure.
8. Counterfactuals and Different Problems
Arguments involving additional information change the problem fundamentally.
“X & Y is evidence for H, therefore X is evidence for H” is invalid reasoning.
9. Information Relevance
Not all information about the experimental setup is relevant for probability calculations.
The wake-up protocol, while part of the setup, doesn’t provide discriminatory evidence for Heads vs. Tails.
10. Epistemological Stance
Adheres to strict Bayesian principles for updating beliefs.
Rejects arguments that conflate distinct problems or misapply probabilistic concepts.
11. Common Thirder Arguments Addressed
Frequency of wake-ups: Irrelevant due to subjective indistinguishability.
Anthropic reasoning: Incorrectly combines distinct events.
Betting strategies: Don’t necessarily reflect true probabilities.
Self-locating beliefs: Irrelevant to the coin flip probability.
12. Meta-level Considerations
Many arguments for the Thirder position stem from subtle misapplications of otherwise valid probabilistic principles.
13. Openness to Counter-Arguments
Willing to consider counter-arguments that adhere to rigorous Bayesian principles.
Rejects arguments based on frequentist interpretations, anthropic reasoning, or conflation of distinct problems.
This position maintains that the Sleeping Beauty problem, when correctly analyzed using Bayesian principles, does not provide any new information that would justify updating the prior 50⁄50 probability of the coin flip. It challenges readers to present counter-arguments that do not rely on commonly rejected reasoning patterns and that strictly adhere to Bayesian updating based on genuinely new, discriminatory evidence.
Closing Remarks
I am probably unjustified in my arrogance.
Some people who I strongly respect (e.g. Nick Bostrom) are apparently thirders.
This is IMO very strong evidence that I am actually just massively misunderstanding something or somehow mistaken here (especially as I have not yet engaged with Nick Bostrom’s arguments as at the time of writing this post).
On priors I don’t really expect to occupy an (on reflection endorsed) epistemic state where I think Nick Bostrom is making a basic epistemology mistake.
So I expect this is a position I can be easily convinced out of/I myself am misunderstanding something fundamental about the problem.
[1] I made some very light edits to the probability/odds treatment in point 7 to resolve factual inaccuracies.
It ultimately depends on how you define probabilities, and it is possible to define them such that the answer is 1/2.
I personally think that the only “good” definition (I’ll specify this more at the end) is that a probability of 1/4 should occur one in four times in the relevant reference class. I’ve previously called this view “generalized frequentism”, where we use the idea of repeated experiments to define probabilities, but generalize the notion of “experiment” to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence). So when you flip a coin, the experiment is not the mathematical coin with two equally likely outcomes, but the situation where you as an agent are flipping a physical coin, which may include a 0.01% probability of landing on its side, or a 10^-15 probability of breaking in two halves mid-air or whatever. But the probability for it coming up heads should be about 1/2 because in about 1/2 of cases where you as an agent are about to flip a physical coin, you subsequently observe it coming up heads.
There are difficulties here with defining the reference class, but I think they can be adequately addressed, and anyway, those don’t matter for the Sleeping Beauty experiment because there, the reference class is actually really straightforward. Among the times that you as an agent are participating in the experiment and are woken up and interviewed (and are called Sleeping Beauty, if you want to include this in the reference class), one third will have the coin heads, so the probability is 1/3. This is true regardless of whether the experiment is run repeatedly throughout history, or repeatedly because of Many Worlds, or an infinite universe, etc. (And I think the very few cases in which there is genuinely not a repeated experiment are in fact qualitatively different since now we’re talking logical uncertainty rather than probability, and this distinction is how you can answer 1/3 in Sleeping Beauty without being forced to answer 1/1,000,000 on the Presumptuous Philosopher problem.)
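A minimal Monte Carlo sketch of this counting argument (my own illustration, not from the comment; it assumes a fair coin and the standard protocol of one awakening on Heads and two on Tails):

```python
import random

def count_frequencies(trials: int = 100_000, seed: int = 0) -> None:
    rng = random.Random(seed)
    heads_experiments = heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5       # fair coin
        awakenings = 1 if heads else 2   # one awakening on Heads, two on Tails
        total_awakenings += awakenings
        if heads:
            heads_experiments += 1
            heads_awakenings += 1
    print("fraction of experiments with Heads:", heads_experiments / trials)          # ~1/2
    print("fraction of awakenings with Heads: ", heads_awakenings / total_awakenings)  # ~1/3

count_frequencies()
```

Counting per experiment gives roughly 1/2; counting per awakening (the reference class of times you are woken and interviewed) gives roughly 1/3.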
So RE this being the only “good” definition, well one thing is that it fits betting odds, but I also suspect that most smart people would eventually converge on an interpretation with these properties if they thought long enough about the nature of probability and implications of having a different definition, though obviously I can’t prove this. I’m not aware of any case where I want to define probability differently, anyway.
So in this case, I agree that like if this experiment is repeated multiple times and every Sleeping Beauty version created answered tails, the reference class of Sleeping Beauty agents would have many more correct answers than if the experiment is repeated many times and every sleeping Beauty created answered heads.
I think there’s something tangible here and I should reflect on it.
I separately think though that if the actual outcome of each coin flip was recorded, there would be a roughly equal distribution between heads and tails.
And when I was thinking through the question before it was always about trying to answer a question regarding the actual outcome of the coin flip and not what strategy maximises monetary payoffs under even bets.
While I do think that like betting odds isn’t convincing re: actual probabilities because you can just have asymmetric payoffs on equally probable mutually exclusive and jointly exhaustive events, the “reference class of agents being asked this question” seems like a more robust rebuttal.
I want to take some time to think on this.
Strong upvoted because this argument actually/genuinely makes me think I might be wrong here.
Much less confident now, and mostly confused.
Importantly, this is counting each coinflip as the “experiment”, whereas the above counts each awakening as the “experiment”. It’s okay that different experiments would see different outcome frequencies.
Yes.
If you record the moments when the outside observer sees the coin landing, you will get 1⁄2.
If you record the moments when the Sleeping Beauty, right after making her bet, is told the actual outcome, you will get 1⁄3.
So we get 1⁄2 by identifying with the outside observer, but he is not the one who was asked in this experiment.
Unless you change the rules so that the Sleeping Beauty is only rewarded for the correct bet at the end of the week, and will only get one reward even if she made two (presumably identical) bets. In that case, recording the moment when the Sleeping Beauty gets the reward or not, you will again get 1⁄2.
What I’d say is that this corresponds to the question, “someone tells you they’re running the Sleeping Beauty experiment and just flipped a coin; what’s the probability that it’s heads?”. Different reference class, different distribution; the probability now is 0.5. But this is different from the original question, where we are Sleeping Beauty.
My current position now is basically:
I’m curious how your conception of probability accounts for logical uncertainty?
I count references within each logical possibility and then multiply by their “probability”.
Here’s a super contrived example to explain this. Suppose that if the last digit of pi is between 0 and 3, Sleeping Beauty experiments work as we know them, whereas if it’s between 4 and 9, everyone in the universe is miraculously compelled to interview Sleeping Beauty 100 times if the coin is tails. In this case, I think P(coin heads|interviewed) is 0.4·(1/3) + 0.6·(1/101). So it doesn’t matter how many more instances of the reference class there are in one logical possibility; they don’t get “outside” their branch of the calculation. So in particular, the presumptuous philosopher problem doesn’t care about number of classes at all.
In practice, it seems super hard to find genuine examples of logical uncertainty and almost everything is repeated anyway. I think the presumptuous philosopher problem is so unintuitive precisely because it’s a rare case of actual logical uncertainty where you genuinely cannot count classes.
Why do you suddenly substitute the notion of “probability experiment” with the notion of “reference class”? What do you achieve by this?
From my perspective, this is where the source of confusion lingers. A probability experiment can be precisely specified: the description of any probability theory problem is supposed to be that. But “reference class” is misleading and up for interpretation.
And indeed, because of this “reference class” business you suddenly started treating individual awakenings of Sleeping Beauty as mutually exclusive outcomes, even though that’s absolutely not the case in the experiment as stated. I don’t see how you would make such a mistake if you kept using the term “probability experiment” without switching to speculation about “reference classes”.
Among the iterations of the Sleeping Beauty probability experiment in which a participant awakens, half the time the coin is Heads, so the probability is 1⁄2.
Here there are no difficulties to address—everything is crystal clear. You just need to calm the instinctive urge to weight the probability by the number of awakenings, which would be talking about a different mathematical concept.
EDIT: @ProgramCrafter the description of the experiment clearly states that when the coin is Tails the Beauty is to be awakened twice in the same iteration of the experiment. Therefore, individual awakenings are not mutually exclusive with each other: more than one can happen in the same iteration of the experiment.
Just to be clear, the reference class here is the set of all instances across all of space and time where an agent is in the same “situation” as you (where the thing you can argue about is how precisely one has to specify the situation). So in the case of the coinflip, it’s all instances across space and time where you flip a physical coin (plus, if you want to specify further, any number of other details about the current situation).
So with that said, to answer your question: why define probabilities in terms of this concept? Because I don’t think I want a definition of probability that doesn’t align with this view, when it’s applicable. If we can discretely count the number of instances across the history of the universe that fit the current situation, and we know some event happens in one third of those instances, then I think the probability has to be one third. This seems very self-evident to me; it seems exactly what the concept of probability is supposed to do.
I guess one analogy—suppose two thirds of all houses are painted blue from the outside and one third red, and you’re in one house but have no idea which one. What’s the probability that it’s blue? I think it’s 2⁄3, and I think this situation is precisely analogous to the reference class construction. Like I actually think there is no relevant difference; you’re in one of the situations that fit the current situation (trivially so), and you can’t tell which one (by construction; if you could, that would be included in the definition of the reference class, which would make it different from the others). Again, this just seems to get at precisely the core of what a probability should do.
So I think that answers it? Like I said, I think you can define “probability” differently, but if the probability doesn’t align with reference class counting, then it seems to me that the point of the concept has been lost. (And if you do agree with that, the question is just whether or not reference class counting is applicable, which I haven’t really justified in my reply, but for Sleeping Beauty it seems straight-forward.)
Suppose I want matrix multiplication to be commutative. Surely it would be so convenient if it was! I can define some operator * over matrices so that A*B = B*A. I can even call this operator “matrix multiplication”.
But did I just make matrix multiplication, as it’s conventionally defined, commutative? Of course not. I logically pinpointed a new function and called it the same way as the previous function is being called, but it didn’t change anything about how the previous function is logically pinpointed.
My new function may have some interesting applications and therefore deserve to be talked about in its own right. But calling it “matrix multiplication” is very misleading. And if I were to participate in a conversation about matrix multiplication while talking about my function, I’d be confusing everyone.
This is basically the situation that we have here.
Initially, the probability function is defined over iterations of a probability experiment. You define a different function over all space and time, which you still call “probability”. It surely has properties that you like, but it’s a different function! Please use another name; this one is already taken. Or add a disclaimer. Preferably do both. You know how easy it is to confuse people with such things! Definitely do not start participating in conversations about probability while talking about your function.
As long as these instances are independent of each other—sure. Like with your houses analogy. When we are dealing with simple, central cases there is no disagreement between probability and weighted probability and so nothing to argue about.
But as soon as we are dealing with a more complicated scenario where there is no independence and it’s possible to be inside multiple houses in the same instance… Surely you see how demanding a coherent P(Red xor Blue) becomes unfeasible?
The problem is, our intuitions are too eager to assume that everything is independent. We are used to thinking in terms of physical time, using our memory as something that allows us to orient in it. This is why amnesia scenarios are so mind-boggling to us!
And that’s why the notion of probability experiment where every single trial is independent and the outcomes in any single trial are mutually exclusive is so important. We strictly define what the “situation” means and therefore do not allow ourselves to be tricked. We can clearly see that individual awakenings can’t be treated as outcomes of the Sleeping Beauty experiment.
But when you are thinking in terms of “reference classes” your definition of “situation” is too vague. And so you allow yourself to count the same house multiple times. Treat yourself not as a person participating in the experiment but as an “awakening state of the person”, even though one awakening state necessarily follows the other.
The “point of probability” is lost when it doesn’t align with reasoning about instances of probability experiments. Namely, we are starting to talk about something else, instead of what was logically pinpointed as probability in the first place. Most of the time reasoning about reference classes does align with it, so you do not notice the difference. But once in a while it doesn’t, and so you end up having “probability” that contradicts conservation of expected evidence and “utility” shifting back and forth.
So what’s the point of these reference classes? What’s so valuable in them? As far as I can see they do not bring anything to the table except extra confusion.
Upon rereading your posts, I retract disagreement on “mutually exclusive outcomes”. Instead...
An obvious way to do so is put a hazard sign on “probability” and just not use it, not putting resources into figuring out what “probability” SB should name, isn’t it? For instance, suppose Sleeping Beauty claims “my credence for Tails is 1/π”; any specific objection would be based on what you expected to hear.
(And now I realize a possible point of why you’re arguing to keep the “probability” term for such scenarios well-defined: so that people in ~anthropic settings can tell you their probability estimates and you, being an observer, could update on that information.)
As for why I believe probability theory to be useful in life despite the fact that sometimes different tools need to be used: I believe disappearing as a Boltzmann brain or simulated person is balanced out by appearing the same way, dissolving into different quantum branches is balanced out by branches reassembling, and likewise for most processes.
It’s an obvious thing to do when dealing with similarity clusters poorly defined in natural language. Not so much when we are talking about a logically pinpointed mathematical concept which we know is crucial for epistemology.
It’s not just about anthropic scenarios and not just about me being able to understand other people. It’s about the general truth-preserving mechanism of logical and mathematical reasoning. When people just use different definitions—this is annoying but fine. But when they use different definitions without realizing that these definitions are different and, moreover, insist that it’s you who is making a mistake—then we have an actual disagreement about math which will provide more confusion along the way. Anthropic scenarios are just the ones where this confusion is noticeable.
What exactly do you mean by “different tools need to be used”? Can you give me an example?
I mean that Beauty should maintain a full model of the experiment, and use decision theory as well as probability theory (if the latter is even useful, which it admittedly seems to be). If she didn’t keep track of the full setup but only “a fair coin was flipped, so the odds are 1:1”, she would predictably lose when betting on the coin outcome.
Also, I’ve minted another “paradox” version. I can predict you’ll take issue with one of the formulations in it, but what do you think about it?
I suppose the participant is just supposed to lie about their credence here in order to “win”.
On Tuesday your credence in Heads is supposed to be 0, but saying the true value would go against the experimental protocol unless you also said that your credence is 0 on Monday, which would also be a lie.
I don’t understand this formulation. If Beauty always says that the probability of Heads is 1⁄7, does she win? Whatever “win” means...
She certainly gets a reward for following experimental protocol, but beyond that… I concur there’s the problem, and I have the same issue with standard formulation asking for probability.
In particular, pushing the problem out to morality (“what should Sleeping Beauty answer so that she doesn’t feel as if she’s lying?”) doesn’t solve anything either; rather, it feels like asking the question “is the continuum hypothesis true?” while providing only the options ‘true’ and ‘false’, when it’s actually independent of the ZFC axioms (claims of it or of its negation produce different models, neither proven to self-contradict).
P.S. One more analogue: there’s a field, and some people (experimenters) are asking whether it rained recently with clear intent to walk through if it didn’t; you know it didn’t rain but there are mines all over the field.
I argue you should mention the mines first (“that probability—which by the way will be 1⁄2—can be found out, conforms to epistemology, but isn’t directly usable anywhere”) before saying if there was rain.
If you can demonstrate how, in the reference class setting, there is a relevant criterion by which several instances should be grouped together, then I think you could have an argument.
If you look at space-time from above, there’s two blue houses for every red house. Sorry, I meant there’s two SB(=Sleeping Beauty)-tails instances for every SB-heads instance. The two instances you want to group together (tails-Monday & tails-Tuesday) aren’t actually at the same time (not that I think it matters). If the universe is very large or Many Worlds is true, then there are in fact many instances of Monday-heads, Monday-tails, and Tuesday-tails occurring at the same time, and I don’t think you want to group those together.
In any case, from the PoV of SB, all instances look identical to you. So by what criterion should we group some of them together? That’s the thing I think your position requires (just because you accept reference classes are a priori valid and then become invalid in some cases), and I don’t see the criterion.
What is going to be done with these numbers? If Sleeping Beauty is to gamble her money, she should accept the same betting odds as a thirder. If she has to decide which coinflip result kills her, she should be ambivalent like a halfer.
Halfer makes sense if you pre-commit to a single answer before the coin-flip, but not if you are making the decisions independently after each wake-up event. If you say heads, you have a 50% chance of surviving when asked on Monday, and a 0% chance of surviving when asked on Tuesday. If you say tails, you have a 50% chance of surviving Monday and a 100% chance of surviving Tuesday.
If you say heads every time, half of all futures contain you; likewise with tails.
I’ve updated my comment. You are correct as long as you pre-commit to a single answer beforehand, not if you are making the decision after waking up. The only reason pre-committing to heads works, though, is because it completely removes the Tuesday interview from the experiment. She will no longer be awoken on Tuesday, even if the result is tails. So, this doesn’t really seem to be in the spirit of the experiment in my opinion. I suppose the same pre-commit logic holds if you say the correct response gets (1/coin-side-wake-up-count) * value per response though.
Betting arguments are tangential here.
https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets
The disagreement is about how to factorise the expected utility function into probability and utility, not about which bets to make. This disagreement is still tangible, because the way you define your functions has meaningful consequences for your mathematical reasoning.
I mean, I think the “gamble her money” interpretation is just a different question. It doesn’t feel to me like a different notion of what probability means, but just like betting on a fair coin with asymmetric payoffs.
The second question feels closer to actually an accurate interpretation of what probability means.
https://www.lesswrong.com/posts/Mc6QcrsbH5NRXbCRX/dissolving-the-question
Probability is not some vaguely defined similarity cluster like “sound”. It’s a mathematical function that has specific properties. Not all of them are solely about betting.
We can dissolve the semantic disagreement between halfers and thirders and figure out that they are talking about two different functions p and p’ with subtly different properties while producing the same betting odds.
This in itself, however, doesn’t resolve the actual question: which of these functions fits the strict mathematical notion of probability for the Sleeping Beauty experiment and which doesn’t. This question has an answer.
I would frame the question as “What is the probability that you are in heads-space?”, not “What is the probability of heads?”. The probability of heads is 1⁄2, but the probability that I am in heads-space, given I’ve just experiences a wake-up event, is 1⁄3.
The wake-up event is only equally likely on Monday. On Tuesday, the wake-up event is 0%/100%. We don’t know whether it is Tuesday or not, but we know there is some chance of it being Tuesday, because 1⁄3 of wake-up events happen on Tuesday, and we’ve just experienced a wake-up event:
P(Monday|wake-up) = 2⁄3
P(Tuesday|wake-up) = 1⁄3
P(Heads|Tuesday) = 0⁄1
P(Heads|Monday) = 1⁄2
P(Heads|wake-up) = P(Heads|Monday) * P(Monday|wake-up) + P(Heads|Tuesday) * P(Tuesday|wake-up) = 1⁄3
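The same decomposition can be written as a short weighted enumeration over the three possible awakenings (a sketch of the calculation above; it weights each awakening by how often it occurs per experiment, which is the assumption the thirder decomposition rests on):

```python
# Three possible awakenings; each occurs in half of all experiments
# (Heads-Monday with prob 1/2, Tails-Monday and Tails-Tuesday each with prob 1/2).
weights = {("Heads", "Monday"): 0.5, ("Tails", "Monday"): 0.5, ("Tails", "Tuesday"): 0.5}
total = sum(weights.values())  # expected number of awakenings per experiment = 1.5

p_monday = sum(w for (_, day), w in weights.items() if day == "Monday") / total   # 2/3
p_tuesday = 1 - p_monday                                                          # 1/3
p_heads_given_monday, p_heads_given_tuesday = 0.5, 0.0
p_heads = p_heads_given_monday * p_monday + p_heads_given_tuesday * p_tuesday     # 1/3

print(round(p_monday, 3), round(p_tuesday, 3), round(p_heads, 3))  # 0.667 0.333 0.333
```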
Thirder here (with acknowledgement that the real answer is to taboo ‘probability’ and figure out why we actually care)
The subjective indistinguishability of the two Tails wakeups is not a counterargument - it’s part of the basic premise of the problem. If the two wakeups were distinguishable, being a halfer would be the right answer (for the first wakeup).
Your simplified example/analogies really depend on that fact of distinguishability. Since you didn’t specify whether or not you have it in your examples, it would change the payoff structure.
I’ll also note you are being a little loose with your notion of ‘payoff’. You are calculating the payoff for the entire experiment, whereas I define the ‘payoff’ as being the odds being offered at each wakeup. (since there’s no rule saying that Beauty has to bet the same each time!)
To be concise, here’s my overall rationale:
Upon each (indistinguishable) wakeup, you are given the following offer:
If you bet H and win, you get N dollars.
If you bet T and win, you get 1+ϵ dollars.
If you believe T yields a higher EV, then you have a credence P(T) ≥ N/(N+1).
You get a positive EV for all N up to 2, so P(T) = 2/3. Thus you should be a thirder.
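A small numeric check of the indifference point (my own sketch; it assumes a fair coin, one awakening on Heads and two on Tails, that a losing bet pays nothing, and ε = 0.01, none of which are spelled out beyond what the offer above states):

```python
def ev_per_experiment(strategy: str, N: float, eps: float = 0.01) -> float:
    # A winning bet on H pays N dollars; a winning bet on T pays 1 + eps dollars.
    heads_world = (N if strategy == "H" else 0.0) * 1          # one awakening on Heads
    tails_world = ((1 + eps) if strategy == "T" else 0.0) * 2  # two awakenings on Tails
    return 0.5 * heads_world + 0.5 * tails_world

for N in (1.5, 2.0, 2.5):
    print(N, ev_per_experiment("H", N), ev_per_experiment("T", N))
# Betting T beats betting H until N reaches roughly 2, i.e. the indifference credence
# P(T) = N/(N+1) evaluated at N = 2 is 2/3.
```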
Here’s a clarifying example where this interpretation becomes more useful than yours:
The experimenter flips a second coin. If the second coin is Heads (H2), then N= 1.50 on Monday and 2.50 on Tuesday. If the second coin is Tails, then the order is reversed.
I’ll maximize my EV if I bet T when N=1.5, and H when N=2.5. Both of these fall cleanly out of ‘thirder’ logic.
What’s the ‘halfer’ story here? Your earlier logic doesn’t allow for separate bets on each awakening.
This is, I think, the key thing that those smart people disagree with you about.
Suppose Alice and Bob are sitting in different rooms. Alice flips a coin and looks at it—it’s Heads. What is the probability that the coin is Tails? Obviously, it’s 0% right? That’s just a fact about the coin. So I go to Bob in the other room and ask Bob what’s the probability the coin is Tails, and Bob tells me it’s 50%, and I say “Wrong, you’ve failed to know a basic fact about the coin. Since it was already flipped the probability was already either 0% or 100%, and maybe if you didn’t know which it was you should just say you can’t assign a probability or something.”
Now, suppose there are two universes that differ only by the polarization of a photon coming from a distant star, due to hit Earth in a few hours. And I go into the universe where that polarization is left-handed (rather than right-handed), and in that universe the probability that the photon is right-handed is 0% - it’s just a fact about the photon. So I go to the copy of Carol that lives in this universe and ask Carol what’s the probability the photon has right-handed polarization, and Carol tells me it’s 50%, and I say “Wrong, you’ve failed to know a basic fact about the photon. Since it’s already on its way the probability was already either 0% or 100%, and maybe if you don’t know which it was you should just say you can’t assign a probability or something.”
Now, suppose there are two universes that differ outside of the room that Dave is currently in, but are the same within Dave’s room. Say, in one universe all the stuff outside the room is arranged as it is today in our universe, while in the other universe all the stuff outside the room is arranged as it was ten years ago. And I go into the universe where all the stuff outside the room is arranged as it was ten years ago, which I will shorthand as it being 2014 (just a fact about calendars, memories, the positions of galaxies, etc.), and ask Dave what’s the probability that the year outside is 2024, and Dave tells me it’s 50%...
I mean I am not convinced by the claim that Bob is wrong.
Bob’s prior probability is 50%. Bob sees no new evidence to update this prior so the probability remains at 50%.
I don’t favour an objective notion of probabilities. From my OP:
So I am unconvinced by your thought experiments? Observing nothing new, I think the observer’s priors should remain unchanged.
I feel like I’m not getting the distinction you’re trying to draw out with your analogy.
Yes, Bob is right. Because the probability is not a property of the coin. It’s ‘about’ the coin in a sense, but it also depends on Bob’s knowledge, including knowledge about location in time (Dave) or possible worlds (Carol).
You need to start by clearly understanding that the Sleeping Beauty Problem is almost realistic—it is close to being actually doable. We often forget things. We know of circumstances (eg, head injury) that cause us to forget things. It would not be at all surprising if the amnesia drug needed for the scenario to actually be carried out were discovered tomorrow. So the problem is about a real person. Any answer that starts with “Suppose that Sleeping Beauty is a computer program...” or otherwise tries to divert you away from regarding Sleeping Beauty as a real person is at best answering some other question.
Second, the problem asks what probability of Heads Sleeping Beauty should have on being interviewed after waking. This of course means what probability she should rationally have. This question makes no sense if you think of probabilities as some sort of personal preference, like whether you like chocolate ice cream or not. Probabilities exist in the framework of probability theory and decision theory. Probabilities are supposed to be useful for making decisions. Personal beliefs come into probabilities through prior probabilities, but for this problem, the relevant prior beliefs are supposed to be explicitly stated (eg, the coin is fair). Any answer that says “It depends on how you define probabilities”, or “It depends on what reference class you use”, or “Probabilities can’t be assigned in this problem” is just dodging the question. In real life, you can’t just not decide what to do on the basis that it would depend on your reference class or whatever. Real life consists of taking actions, based on probabilities (usually not explicitly considered, of course). You don’t have the option of not acting (since no action is itself an action).
Third, in the standard framework of probability and decision theory, your probabilities for different states of the world do not depend on what decisions (if any) you are going to make. The same probabilities can be used for any decision. That is one of the great strengths of the framework—we can form beliefs about the world, and use them for many decisions, rather than having to separately learn how to act on the basis of evidence for each decision context. (Instincts like pulling our hand back from a hot object are this sort of direct evidence->action connection, but such instincts are very limited.) Any answer that says the probabilities depend on what bets you can make is not using probabilities correctly, unless the setup is such that the fact that a bet is offered is actual evidence for Heads versus Tails.
Of course, in the standard presentation, Sleeping Beauty does not make any decisions (other than to report her probability of Heads). But for the problem to be meaningful, we have to assume that Beauty might make a decision for which her probability of Heads is relevant.
So, now the answer… It’s a simple Bayesian problem. On Sunday, Beauty thinks the probability of Heads is 1⁄2 (ie, 1-to-1 odds), since it’s a fair coin. On being woken, Beauty knows that Beauty experiences an awakening in which she has a slight itch in her right big toe, two flies are crawling towards each other on the wall in front of her, a Beatles song is running through her head, the pillow she slept on is half off the bed, the shadow of the sun shining on the shade over the window is changing as the leaves in the tree outside rustle due to a slight breeze, and so forth. Immediately on wakening, she receives numerous sensory inputs. To update her probability of Heads in Bayesian fashion, she should multiply her prior odds of Heads by the ratio of the probability of her sensory experience given Heads to the probability of her experience given Tails.
The chances of receiving any particular set of such sensory inputs on any single wakening are very small. So the probability that Beauty has this particular experience when there are two independent wakenings is very close to twice that small probability. The ratio of the probability of experiencing what she knows she is experiencing given Heads to that probability given Tails is therefore 1⁄2, so she updates her odds in favour of Heads from 1-to-1 to 1-to-2. That is, Heads now has probability 1⁄3.
(Not all of Beauty’s experiences will be independent between awakenings—eg, the colour of the wallpaper may be the same—but this calculation goes through as long as there are many independent aspects, as will be true for any real person.)
The 1⁄3 answer works. Other answers, such as 1⁄2, do not work. One can see this by looking at how probabilities should change and at how decisions (eg, bets) should be made.
For example, suppose that after wakening, Beauty says that her probability of Heads is 1⁄2. It also happens that, in an inexcusable breach of experimental protocol, the experimenter interviewing her drops her phone in front of Beauty, and the phone display reveals that it is Monday. How should Beauty update her probability of Heads? If the coin landed Heads, it is certain to be Monday. But if the coin landed Tails, there was only a probability 1⁄2 of it being Monday. So Beauty should multiply her odds of Heads by 2, giving a 2⁄3 probability of Heads.
But this is clearly wrong. Knowing that it is Monday eliminates any relevance of the whole wakening/forgetting scheme. The probability of Heads is just 1⁄2, since it’s a fair coin. Note that if Beauty had instead thought the probability of Heads was 1⁄3 before seeing the phone, she would correctly update to a probability of 1⁄2.
Some Halfers, when confronted with this argument, maintain that Beauty should not update her probability of Heads when seeing the phone, leaving it at 1⁄2. But as the phone was dropping, before she saw the display, Beauty would certainly not think that it was guaranteed to show that it is Monday (Tuesday would seem possible). So not updating is unreasonable.
We also see that 1⁄2 does not work in betting scenarios. I’ll just mention the simplest of these. Suppose that when Beauty is woken, she is offered a bet in which she will win $12 if the coin landed Heads, and lose $10 if the coin landed Tails. She knows that she will always be offered such a bet after being woken, so the offer does not provide any evidence for Heads versus Tails. If she is woken twice, she is given two opportunities to bet, and could take either, both, or neither. Should she take the offered bet?
If Beauty thinks that the probability of Heads is 1⁄2, she will take such bets, since she thinks that the expected payoff of such a bet is (1/2)*12-(1/2)*10=1. But she shouldn’t take these bets, since following the strategy of taking these bets has an expected payoff of (1/2)*12 - (1/2)*2*10 = −4. In contrast, if Beauty thinks the probability of Heads is 1⁄3, she will think the expected payoff from a bet is (1/3)*12-(2/3)*10=-2.666… and not take it.
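A quick simulation of this betting setup (a sketch, not from the answer itself; it assumes the bet is offered at every awakening and that Beauty follows a fixed take/refuse policy):

```python
import random

def average_payoff_per_experiment(take_bet: bool, trials: int = 200_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        if take_bet:
            # +$12 on Heads, -$10 on Tails, once per bet taken
            total += awakenings * (12 if heads else -10)
    return total / trials

print(average_payoff_per_experiment(True))   # about -4: taking the bet every time loses money
print(average_payoff_per_experiment(False))  # 0.0: refusing, as the 1/3 credence recommends
```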
Note that Beauty is a real person. She is not a computer program that is guaranteed to make the same decision in all situations where the “relevant” information is the same. It is possible that if the coin lands Tails, and Beauty is woken twice, she will take the bet on one awakening, and refuse the bet on the other awakening. Her decision when woken is for that awakening alone. She makes the right decisions if she correctly applies decision theory based on the probability of Heads being 1⁄3. She makes the wrong decision if she correctly applies decision theory with the wrong probability of 1⁄2 for Heads.
She can also make the right decision by incorrectly applying decision theory with an incorrect probability for Heads, but that isn’t a good argument for that incorrect probability.
If the experiment instead was constructed such that:
If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
If the coin comes up tails, Sleeping Beauty’s twin sister will be awakened and interviewed on Monday and Sleeping Beauty will be awakened and interviewed on Tuesday.
In this case it is “obvious” that the halfer position is the right choice. So why would it be any different if Sleeping Beauty in the case of tails is awakened on Monday too, since she in this experiment has zero recollection of that event? It does not matter how many other people they have woken up before the day she is woken up; she has NO new information that could update her beliefs.
Or say that the experiment instead was constructed such that, for tails, she would be woken up and interviewed 999,999 days in a row; would she then say, upon being woken up, that the probability that the coin landed heads is 1/1,000,000?
If the first sister’s experience is equivalent to the original Sleeping Beauty problem, then wouldn’t the second sister’s experience also have to be equivalent by the same logic? And, of course, the second sister will give 100% odds to it being Monday.
Suppose we run the sister experiment, but somehow suppress their memories of which sister they are. If they each reason that there’s a two-thirds chance that they’re the first sister, since their current experience is certain for her but only 50% likely for the second sister, then their odds of it being Monday are the same as in the thirder position- a one-third chance of the odds being 100%, plus a two-thirds chance of the odds being 50%.
If instead they reason that there’s a one-half chance that they’re the first sister, since they have no information to update on, then their odds of it being Monday should be one half of 100% plus one half of 50%, for 75%. Which is a really odd result.
Maybe I was a bit vague. I was trying to say that waking up SB’s twin sister on Monday was a way of saying that SB would be equally aware of that as if she herself had been awakened on Monday under the conditions stipulated in the original experiment, i.e. with zero recollection of the event. Or the other way around: SB is awakened on Monday but her twin sister on Tuesday. SB will not be aware that her twin sister will be awakened on Tuesday. For that reason she is only awakened ONE time no matter if it is heads or tails. She will only experience ONE awakening per path. There is no cumulative effect of her being awakened 2 or a million times; every time is the “first” time and the “last” time. If she is awake, it is an equal chance that it is day 1 on the heads path as it would be day 56670395873966 (or any other day) on the tails path, as far as she knows.
Or like this. Imagine that I flip a coin that I can see but you can not. I give you the rule that if it is heads I show you a picture of a dog. If it is tails, I show you the same picture of a dog but I might have shown this picture to thousands of people before you and maybe thousands of people after you, which you have no information about. You might be the first one to see it but you might also be the last one to see it or somewhere in the middle, i.e. you are not aware of the other observers. When I show you the picture of the dog, what chance do you give that the coin flip was heads?
But I am curious to know how a person with a thirder position argues in the case where she is awakened 999 or 8490584095805 times on the tails path: what probability should SB give heads in that case?
If you look over all possible worlds, then asking “did the coin come up Heads or Tails” as if there’s only one answer is incoherent. If you look over all possible worlds, there’s a ~100% chance the coin comes up as Heads in at least one world, and a ~100% chance the coin comes up as Tails in at least one world.
But from the perspective of a particular observer, the question they’re trying to answer is a question of indexical uncertainty—out of all the observers in their situation, how many of them are in Heads-worlds, and how many of them are in Tails-worlds? It’s true that there are equally as many Heads-worlds as Tails-worlds—but 2⁄3 of observers are in the latter worlds.
Or to put it another way—suppose you put 10 people in one house, and 20 people in another house. A given person should estimate a 1⁄3 chance that they’re in the first house—and the fact that 1 house is half of 2 houses is completely irrelevant. Why should this reasoning be any different just because we’re talking about possible universes rather than houses?
“What is your credence now for the proposition that the coin landed heads?”
There are three doors. Two are labeled Monday, and one is labeled Tuesday. Behind each door is a Sleeping Beauty. In a waiting room, many (finite) more Beauties are waiting; every time a Beauty is anesthetized, a coin is flipped and taped to their forehead with clear tape. You open all three doors, the Beauties wake up, and you ask the three Beauties The Question. Then they are anesthetized, the doors are shut, and any Beauties with a Heads showing on their foreheads or behind a Tuesday door are wheeled away after the coin is removed from their forehead. The Beauty with a Tails on their forehead behind the Monday door is wheeled behind the Tuesday door. Two new Beauties are wheeled behind the two Monday doors, one with Heads and one with Tails. The experiment repeats.
You observe that Tuesday Beauties always have a Tails taped to their forehead. You always observe that one Monday Beauty has a Tails showing, and one has a Heads showing. You also observe that every Beauty says 1⁄3, matching the ratio of Heads to Tails showing, and it is apparent that they can’t see the coins taped to their own or each other’s foreheads or the door they are behind. Every Tails Beauty is questioned twice. Every Heads Beauty is questioned once. You can see all the steps as they happen, there is no trick, every coin flip has 1⁄2 probability for Heads.
There is eventually a queue of Waiting Sleeping Beauties with all-Heads or all-Tails showing and a new Beauty must be anesthetized with a new coin; the queue length changes over time and sometimes switches face. You can stop the experiment when the queue is empty, as a random walk guarantees to happen eventually, if you like tying up loose ends.
I prefer to just think about utility, rather than probabilities. Then you can have 2 different “incentivized sleeping beauty problems”:
Each time you are awakened, you bet on the coin toss, with $ payout. You get to spend this money on that day or save it for later or whatever.
At the end of the experiment, you are paid money equal to what you would have made betting at the average of the probabilities you stated when awoken.
In the first case, 1⁄3 maximizes your money, in the second case 1⁄2 maximizes it.
To me this implies that in real world analogues to the Sleeping Beauty problem, you need to ask whether your reward is per-awakening or per-world, and answer accordingly
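A sketch of both incentive schemes (my own illustration; the specific scoring rule, a quadratic Brier-style reward for the reported credence, is an assumption just to make the comparison concrete, since the comment leaves the payout details open):

```python
def expected_reward(p: float, per_awakening: bool) -> float:
    # Brier-style reward -(p - outcome)^2 for reporting credence p in Heads,
    # paid once per awakening (case 1) or once per experiment (case 2).
    heads_payments = 1
    tails_payments = 2 if per_awakening else 1
    return 0.5 * heads_payments * -((p - 1) ** 2) + 0.5 * tails_payments * -((p - 0) ** 2)

grid = [i / 1000 for i in range(1001)]
print(max(grid, key=lambda p: expected_reward(p, per_awakening=True)))   # ~0.333
print(max(grid, key=lambda p: expected_reward(p, per_awakening=False)))  # 0.5
```

Under per-awakening rewards the best report is about 1/3; under per-world rewards it is 1/2, matching the two cases above.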
That argument just shows that, in the second betting scenario, Beauty should say that her probability of Heads is 1⁄2. It doesn’t show that Beauty’s actual internal probability of Heads should be 1⁄2. She’s incentivized to lie.
EDIT: Actually, on considering further, Beauty probably should not say that her probability of Heads is 1⁄2. She should probably use a randomized strategy, picking what she says from some distribution (independently for each wakening). The distribution to use would depend on the details of what the bet/bets is/are.
Welcome to the club.
I suppose my posts are among the ones that you are talking about here?
Hijacking this thread, has anybody worked through Ape in the coat’s anthropic posts and understood / gotten stuff out of them? It’s something I might want to do sometime in my copious free time but haven’t worked up to it yet.
I propose to sic o1 on them to distill it all into something readable/concise. (I tried to comprehend it and failed / got distracted).
I think some people pointed out in comments that their model doesn’t represent prob of “what day it is NOW” btw
I’m actually talking about it in the post here. But yes this is additionally explored in the comments pretty well.
Here is the core part that allows to understand why “Today” is ill-defined from the perspective of the Beauty:
Let’s say there is an accurate mechanical calendar in the closed box in the room. She can open it but wouldn’t. Should she have no expectation about like in what state this calendar is?
What state the calendar is when?
On Monday it’s Monday. On Tuesday it’s Tuesday. And “Today” is ill-defined, there is no coherent state for it.
Well, now! She looks at the box and thinks there is definitely a calendar in some state. What state? What would happen if I open it?
Please specify this “now” thingy you are talking about, using formal logic. If this is a meaningful event for the setting, surely there wouldn’t be any problems.
Are you talking about Monday xor Tuesday? Monday or Tuesday? Monday and Tuesday? Something else?
Well, idk. My opinion here is that you bite some weird bullet, which I’m very ambivalent about. I think the “now” question makes total sense, and you factor it out into some parts separate from your model.
Like, can you add to the Sleeping Beauty setup some additional decision problems involving the calendar? Will it work seamlessly?
The counter-intuitiveness comes from us not being accustomed to reasoning under amnesia and repetition of the same experience. It’s understandable that initially we would think that the question about “now”/”today” makes sense, as we are used to situations where it indeed does. But then we can clearly see that in such situations there is no problem with formally defining what event we mean by it. Contrary to SB, where such an event is ill-defined.
Oh absolutely.
Suppose that on every awakening the Beauty is offered a bet that “Today is Monday”. What odds is she supposed to take?
“Today is Monday” is ill-defined, but she can construct a corresponding betting scheme using events “Monday awakening happens” and “Tuesday awakening happens” like this:
E(Monday) = P(Monday)·U(Monday) - P(Tuesday)·U(Tuesday)
P(Monday) = 1; P(Tuesday) = 1⁄2, therefore
E(Monday) = U(Monday) - (1/2)·U(Tuesday)
Solving E(Monday) = 0 for U(Monday):
U(Monday) = (1/2)·U(Tuesday)
Which means 2:1 betting odds
As you see everything is quite seamless.
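A tiny numeric check of this scheme (my own sketch; it treats a bet placed at every awakening that wins U(Monday) at the Monday awakening, which always happens, and loses U(Tuesday) at the Tuesday awakening, which happens only on Tails):

```python
def expected_profit_per_experiment(u_monday: float, u_tuesday: float, p_tails: float = 0.5) -> float:
    # The Monday awakening happens every experiment (the "Monday" bet wins u_monday);
    # the Tuesday awakening happens only on Tails (the bet loses u_tuesday).
    return 1.0 * u_monday - p_tails * u_tuesday

print(expected_profit_per_experiment(1.0, 2.0))  # 0.0  -> break-even at exactly 2:1 odds
print(expected_profit_per_experiment(1.0, 1.9))  # 0.05 -> anything better than 2:1 is worth taking
```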
So, she shakes the box contemplatively. There is a mechanical calendar. She knows the betting odds of it displaying “Monday” but not the credence. She thinks it’s really, really weird.
I’m very available to answer questions about my posts as soon as people actually engage with the reasoning, so feel free to ask if you feel confused about anything.
If I am to highlight the core principle it would be: Thinking in terms of what happens in the probability experiment as a whole, to the best of your knowledge and from your perspective as a participant.
Suppose this experiment happened to you multiple times. If in an iteration of the experiment something happens 2⁄3 of the time, then the probability of such an event is 2⁄3. If something happens 100% of the time, then its probability is 1 and the realization of such an event doesn’t give you any evidence.
All the rest is commentary.
I have not read all of them!