The Beauty and the Prince
This post will address a problem proposed by Radford Neal in his paper Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning. In particular, he defined this problem—The Beauty and the Prince—to argue against the halver solution to the Sleeping Beauty Problem. I don’t think that it is ultimately a counter-example, but I decided to dedicate a post to it because I found it quite persuasive when I first saw it. I’ll limit the scope of this post to arguing that his analysis of the halver solution is incorrect and to providing a correct analysis instead. I won’t try to justify the halver solution as philosophically correct, since I plan to write another post on the Anthropic Principle later; here I will just show how it applies.
The Beauty and the Prince is just like the Sleeping Beauty Problem, but with a Prince who is also interviewed and memory-wiped. However, he is always interviewed on both Monday and Tuesday regardless of what the coin shows, and he is told whether or not Sleeping Beauty is awake. If he is told that she is awake, what is the probability that the coin came up heads? The argument is that 3⁄4 of the time she will be awake and 1⁄4 of the time she will be asleep, so in only 1⁄3 of the cases where he is told she is awake will the coin be heads. Further, it seems that Sleeping Beauty should adopt the same odds as him. They both have the same information, so if he tells her the odds are 1⁄3, on what basis can she disagree? Further, she knows what he will say before he even says it.
I want to propose that the Prince’s probability estimate above is correct, but that it is different from Sleeping Beauty’s. I think the key here is to realise that indexicals aren’t part of standard probability, so we need to de-indexicalise the situation. We’ll de-indexicalise the original problem first, by ensuring that only one interview ever “counts”, by which we mean that we will calculate the probability of events over the interviews that count. Concretely, we flip a second coin if the first comes up tails: if it is heads, only the first interview counts, whilst for tails only the second interview counts. We then get the probabilities: 1⁄2 heads + Monday counts; 1⁄4 tails + Monday counts; 1⁄4 tails + Tuesday counts.
We similarly de-indexicalise the Prince, though we flip the second coin in the heads case too. As before, if it is heads we count the interview on Monday and if it is tails we count the interview on Tuesday, so the four possibilities become mutually exclusive and each has a probability of 25%.
If we look at the case where the first coin is heads, we notice that the Prince’s interview on Monday only counts 50% of the time, whilst Sleeping Beauty’s counts 100% of the time. This means that Sleeping Beauty is calculating her probability over a different event space, so we should actually expect her answer to differ from the Prince’s. Suppose we expand the Prince’s probability to include Sleeping Beauty’s Monday interviews (which all count). Then the chance of heads moves from 1:2 = 1⁄3 to 2:2 = 1⁄2.
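The two answers can be checked with a quick Monte Carlo sketch of the de-indexicalised setup above (the variable names and the trial count are mine; the model follows the description: the second coin picks which day "counts", and the Prince conditions on being told Beauty is awake at his counting interview):

```python
import random

random.seed(0)

N = 100_000
prince_heads = prince_awake = 0
beauty_heads = 0

for _ in range(N):
    coin1 = random.choice("HT")    # the experiment's coin
    coin2 = random.choice("HT")    # auxiliary coin: picks which day "counts"
    counting_day = "Mon" if coin2 == "H" else "Tue"

    # Beauty is awake on Monday always, and on Tuesday only if coin1 is tails.
    awake = {"Mon": True, "Tue": coin1 == "T"}

    # Prince: condition on being told Beauty is awake at his counting interview.
    if awake[counting_day]:
        prince_awake += 1
        prince_heads += coin1 == "H"

    # Beauty: on heads only the Monday interview exists (and counts); on tails
    # coin2 picks the counting day. Either way she is awake at her counting
    # interview, so every trial contributes exactly one counting interview.
    beauty_heads += coin1 == "H"

print(prince_heads / prince_awake)   # ≈ 1/3
print(beauty_heads / N)              # ≈ 1/2
```

The Prince's counting interviews include the asleep Tuesday-heads cell (which gets filtered out by conditioning), while every one of Beauty's counting interviews survives, which is exactly why the two ratios differ.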
As we’ve seen, The Beauty and the Prince is not a problem for the halver solution. This does not mean that the halver solution is the correct solution to the Sleeping Beauty Problem, just that The Beauty and The Prince doesn’t provide a counter-example.
Update: I’ve been reading more of the literature. It seems that the technique that I’m using here is actually closer to what Bostrom calls the Hybrid Model than to David Lewis’ Halver Solution. The difference is that if you are told it is Monday, Bostrom gets heads being 1⁄2, while Lewis gets heads being 2⁄3.
I don’t understand your argument. What does it mean for a situation to “count”?
I’m Beauty. I’m a real person. I’ve woken up. I can see the Prince sitting over there, though if you like, you can suppose that we can’t talk. The Prince is also a real person. I’m interested in the probability that the result of a flip of an actual, real coin is Heads.
How does whether something “counts” or not have anything to do with this question?
Agreed. Everyone is talking about probability, when it feels like they mean betting odds. If there are some situations which “count” twice or some which exist, but don’t “count”, then you’re asking about decision theory, not probability. The distinction isn’t in what happened or how likely something is, but in what the payout/cost of a correct/incorrect prediction is.
Imagine that you are considering whether or not to make a bet on a horse, but there is a bug in the system where any bet you place on one particular horse is submitted twice. This will affect how you bet on the horse, but it won’t change the probability. The halver argument is that Sleeping Beauty should be treated similarly.
But nothing in the specification of the Sleeping Beauty problem justifies treating it that way. Beauty is an ordinary human being who happens to have forgotten some things. If Beauty makes two betting decisions at different times, they are separate decisions, which are not necessarily the same—though it’s likely they will be the same if Beauty has no rational basis for making different decisions at those two times. There is a standard way of using probabilities to make decisions, which produces the decisions that everyone seems to agree are correct only if Beauty assigns probability 1⁄3 to the coin landing Heads.
You could say that you’re not going to use standard decision theory, and therefore are going to assign a different probability to Heads, but that’s just playing with words—like saying the sky is green, not blue, because you personally have a different scheme of colour names from everyone else.
“But nothing in the specification of the Sleeping Beauty problem justifies treating it that way”—apart from the fact that you’re being asked twice.
“There is a standard way of using probabilities to make decisions, which produces the decisions that everyone seems to agree are correct only if Beauty assigns probability 1⁄3 to the coin landing Heads” − 1⁄2 gives the correct decisions as well, you just need a trivial modification to your decision theory.
So in every situation in which someone asks you the same question twice, standard probability and decision theory don’t apply? Seems rather sweeping to me. Or is it only a problem if you don’t happen to remember that they asked that question before? Still seems like it would rule out numerous real-life situations where in fact nobody thinks there is any problem whatever in using standard probability and decision theory.
There is one standard form of probability theory and one standard form of decision theory. If you need a “trivial modification” of your decision theory to justify assigning a probability of 1⁄2 rather than 1⁄3 to some event, then you are not using standard probability and decision theory. I need only a “trivial modification” of the standard mapping from colour names to wavelengths to justify saying the sky is green.
Ok, I could have been clearer: simply being asked twice isn’t enough; the answer also needs to be scored twice. Further, if we are always asked the same question N times in each possible state and they are all included in the “score”, it’ll all just cancel out.
The memory wipe is only relevant in so far as it actually allows asking the same question twice; otherwise you can deduce that it is tails when you’re interviewed the second time.
What do you mean by “scoring it twice”? You seem to have some sort of betting/payoff scheme in mind, but you haven’t said what it is. I suspect that as soon as you specify some scheme, it will be clear that assigning probability 1⁄3 to Heads gives the right decision when you apply standard decision theory, and that you don’t get the right decision if you assign probability 1⁄2 to Heads and use standard decision theory.
And remember, Beauty is a normal human being. When a human being makes a decision, they are just making one decision. They are not simultaneously making that decision for all situations that they will ever find themselves in where the rational decision to make happens to be the same (even if the rationale for making that decision is also the same). That is not the way standard decision theory works. It is not the way normal human thought works.
“I suspect that as soon as you specify some scheme, it will be clear that assigning probability 1⁄3 to Heads gives the right decision when you apply standard decision theory, and that you don’t get the right decision if you assign probability 1⁄2 to Heads and use standard decision theory.”—I’ve already explained that you make a slight modification to account for the number of times you are asked. Obviously, if you don’t make this modification you’ll get the incorrect betting odds.
But I’m not looking to set things on a solid foundation in this post; that will have to wait for the future. The purpose of this post is just to demonstrate how a halver should analyse The Beauty and the Prince given those foundations.
Unstated in the problem (which is the main point of confusion IMO) is what decision Beauty is making, and what are the payouts/costs if she’s right or wrong. What difference does it make to her if she says 1/pi as the probability, because that’s prettier than 1⁄2 or 1/3?
If the payout is +1 utility for being correct and −1 for being incorrect, calculated each time the question’s asked, then 1⁄3 is the correct answer, because she’ll lose twice if wrong, but only win once if right. If the payout is calculated only once on Wednesday, with a 0 payout if she manages to answer differently Monday and Tuesday, then 1⁄2 is the right answer.
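The arithmetic of the first scheme can be sketched directly (assuming, as seems intended, that the same answer is given at every awakening; the function name is mine):

```python
# Per-awakening scoring: +1 for a correct answer, -1 for an incorrect one,
# settled at every interview. On heads there is 1 interview, on tails 2.
def per_awakening_ev(guess):
    ev = 0.0
    for coin, p, interviews in (("H", 0.5, 1), ("T", 0.5, 2)):
        ev += p * interviews * (1 if guess == coin else -1)
    return ev

print(per_awakening_ev("H"))   # -0.5: lose twice when wrong, win once when right
print(per_awakening_ev("T"))   #  0.5
```

Under this scoring, always guessing tails strictly beats always guessing heads, which matches betting at 1:2 odds on heads, i.e. a probability of 1⁄3. (Whether the second, Wednesday-resolved scheme really favours 1⁄2 is contested in the replies below.)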
Aren’t they? If there is zero detectable change in cognition or evidence between the two decisions, how could the second one be different?
No, you get the wrong answer in your second scenario (with −1, 0, or +1 payoff) if you assign a probability of 1⁄2 to Heads, and you get the right answer if you assign a probability of 1⁄3.
In this scenario, guessing right is always better than guessing wrong. Being right rather than wrong either (A) gives a payoff of +1 rather than −1, if you guess only once, or (B) gives a payoff of +1 rather than 0, if you guess correctly another day, or (C) gives a payoff of 0 rather than −1, if you guess incorrectly another day. Since the change in payoff for (B) and (C) is the same, one can summarize this by saying that the advantage of guessing right is +2 if you guess only once (ie, the coin landed Heads), and +1 if you guess twice (ie, the coin landed Tails).
A Halfer will compute the difference in payoff from guessing Heads rather than Tails as (1/2)*(+2) + (1/2)*(-1) = 1⁄2, and so they will guess Heads (both days, presumably, if the coin lands Tails). A Thirder will compute the difference in payoff from guessing Heads rather than Tails as (1/3)*(+2) + (2/3)*(-1) = 0, so they will be indifferent between guessing Heads or Tails. If we change the problem slightly so that there is a small cost (say 1⁄100) to guessing Heads (regardless of whether this guess is right or wrong), then a Halfer will still prefer Heads, but a Thirder will now definitely prefer Tails.
What will actually happen without the small penalty is that both the Halfer and the Thirder will get an average payoff of zero, which is what the Thirder expects, but not what the Halfer expects. If we include the 1⁄100 penalty for guessing Heads, the Halfer has an expected payoff of −1/100, while the Thirder still has an expected payoff of zero, so the Thirder does better.
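The arithmetic above can be checked mechanically; a minimal sketch of the Wednesday-resolved scheme, charging the 1⁄100 Heads penalty once per experiment (as assumed above), for a Beauty who guesses consistently:

```python
# One settlement: +1 if the (consistent) guess matches the coin, -1 otherwise,
# plus a 1/100 cost for guessing Heads, charged once per experiment.
def wednesday_ev(guess):
    ev = 0.0
    for coin, p in (("H", 0.5), ("T", 0.5)):
        payoff = 1 if guess == coin else -1
        penalty = 0.01 if guess == "H" else 0.0
        ev += p * (payoff - penalty)
    return ev

print(wednesday_ev("H"))   # -0.01: the Halfer's preferred guess loses on average
print(wednesday_ev("T"))   #  0.0:  the Thirder's guess breaks even
```

Without the penalty both strategies have expected payoff zero; the penalty merely breaks the tie, exposing the Halfer’s preference for Heads as costly.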
---
If today you choose chocolate over vanilla ice cream, and yesterday you did the same, and you’re pretty sure that you will always choose chocolate over vanilla, is your decision today really a decision not for one ice cream cone but for thousands of cones? Not by any normal idea of what it means to “decide”.
Huh? Maybe I wasn’t clear in my second scenario. This is the situation where the bet is resolved only once, on Wednesday, with payouts being: a) +1 if it was heads, and she said “heads” on Tuesday (not being woken up on Monday); b) −1 if it was heads but she said “tails” on Tuesday; c) +1 if it was tails and she said “tails” on both Monday and Tuesday; d) −1 if it was tails and she said “heads” on both Monday and Tuesday; e) (for completeness, I don’t believe it’s possible) 0 if it was tails and she gave different answers on Monday and Tuesday.
1⁄2 is the right answer here—it’s a literal coinflip and we’ve removed the double-counting of the mindwipe.
You’ve swapped Monday and Tuesday compared to the usual description of the problem, but other than that, your description is what I am working with. You just have a mistaken intuition regarding how the probabilities relate to decisions—it’s slightly non-obvious (but maybe not obvious that it’s non-obvious). Note that this is all using completely standard probability and decision theory—I’m not doing anything strange here.
In this situation, as explained in detail in my reply above, Beauty gets the right answer regarding how to bet only if she gives probability 1⁄3 to Heads whenever she is woken, in which case she is indifferent to guessing Heads versus Tails (as she should be—as you say, it’s just a coin flip), whereas if she gives probability 1⁄2 to Heads, she will have a definite preference for guessing Heads. If we give guessing Heads a small penalty (say on Monday only, to resolve how this works if her guesses differ on the two days), in order to tip the scales away from indifference, the Thirder Beauty correctly guesses Tails, which does indeed maximize her expected reward, whereas the Halfer Beauty does the wrong thing by still guessing Heads.
Knowing that your bet on this horse will be counted twice does not help you win by betting on him or against him. An analogy to Sleeping Beauty would be that the bet is counted twice only if this horse wins.
You’re right that it doesn’t really affect single bets, but it becomes important if you try to do things such as arbitrage where you need to arrange bets in the correct ratios.
“Analogy to sleeping beauty would be, that bet is counted twice only if this horse wins”—but yeah, I should have said that instead
And counting the bet twice only in the case of the horse winning is equivalent to betting at 2:1 odds. A bookmaker will only give such odds if the probability of that horse winning is 1⁄3. Hence the 1⁄3 probability.
You can model it as having 2:1 odds, or as 1:1 odds with the bet counted twice. The latter requires a trivial change to your betting algorithm. It also has the advantage of not changing your probability of a horse winning due to a mistake in a bookmaking system.
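The claimed equivalence is a one-line check: a $1 even-money bet counted twice exactly when the horse wins pays +2 on a win and −1 on a loss, the same profile as a single $1 bet at 2:1 odds, and both are fair only at p = 1⁄3. A sketch (function names are mine):

```python
def ev_at_odds(p, odds):
    # expected value of a $1 bet paying `odds`:1 on a horse winning with probability p
    return p * odds - (1 - p)

def ev_double_counted(p):
    # $1 even-money bet, counted twice only if the horse wins: win -> +2, lose -> -1
    return p * 2 - (1 - p)

for p in (1/3, 1/2):
    print(p, ev_at_odds(p, 2), ev_double_counted(p))   # identical in each row
```

At p = 1⁄3 both expected values are zero (a fair bet); at any other p they move together, so the two models never recommend different bets.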
But that is not an actual analogy to Sleeping Beauty. The real analogy would be that you are a “counted bet”: which horse are you more likely to be on?
My impression is that they have in mind something more than just looking at betting odds (for some unmentioned bet) rather than probabilities. Probability as normally conceived provides a basis for betting choices, using ordinary decision theory, with any sort of betting setup. In Sleeping Beauty with real people, there are no issues that could possibly make ordinary decision theory inapplicable. But it seems the OP has something strange in mind...
Standard probability works over a set of possibilities that are exclusive. If the possibilities are non-exclusive, then we can either: a) decide on a way to map the problem to a set of exclusive possibilities, or b) work with a non-standard probability, in this case one that handles indexicals. Diving too deep into trying to justify the halver solution is outside the scope of this post, which merely attempts to demonstrate that if the halver solution is valid for Sleeping Beauty, it can also be extended to The Beauty and the Prince. As I said, I’ll write another post on the anthropic principle when I’ve had time to do more research, but I thought that this objection was persuasive enough that it deserved to be handled in its own post.
In terms of defining the term “count”: If we want to use the term “you” then in addition to information about the state of the world, we also need to know which person-component “you” refers to. So the version which “counts” is basically just the indexical information.
“I’m Beauty. I’m a real person. I’ve woken up. I can see the Prince sitting over there, though if you like, you can suppose that we can’t talk. The Prince is also a real person. I’m interested in the probability that the result of a flip of a actual, real coin is Heads”—Yes, but regardless of whether you go the halver or thirder route you need a notion of probability that extends standard probability to cover indexicals. You seem to be assuming that going the thirder route doesn’t require extending standard probability?
Right. I see no need to extend standard probability, because the mildly fantastic aspect of Sleeping Beauty does not take it outside the realm of standard probability theory and its applications.
Note that all actual applications of probability and decision theory involve “indexicals”, since whenever I make a decision (often based on probabilities) I am concerned with the effect this decision will have on me, or on things I value. Note all the uses of “I” and “me”. They occur in every application of probability and decision theory that I actually care about. If the occurrence of such indexicals was generally problematic, probability theory would be of no use to me (or anyone).
“If the occurrence of such indexicals was generally problematic, probability theory would be of no use to me (or anyone)”—Except that de-indexicalising is often trivial—“If I eat ice-cream, what is the chance that I will enjoy it” → “If Chris Leong eats ice-cream, what is the probability that Chris Leong will enjoy it”.
What makes you think that you are “Chris Leong”?
Anyway, to the extent that this approach works, it works just as well for Beauty. Beauty has unique experiences all the time. You (or more importantly, Beauty herself) can identify Beauty-at-any-moment by what her recent thoughts and experiences have been, which are of course different on Monday and Tuesday (if she is awake then). There is no difficulty in applying standard probability and decision theory.
At least there’s no problem if you are solving the usual Sleeping Beauty problem. I suspect that you are simply refusing to solve this problem, and instead are insisting on solving only a different problem. You’re not saying exactly what that problem is, but it seems to involve something like Beauty having exactly the same experiences on Monday as on Tuesday, which is of course impossible for any real human.
This seems wrong to me. Bayes’ rule works just fine if events are things like “The marble in the box in front of me is blue”. Bayes would barely be useful if you couldn’t apply it to events like these.
Any model of the world an agent learns is going to be a centered one, e.g. it will be able to talk about “the thing in front of me” and “the city of New York in the Earth that I grew up in”, but will have no need to model a New York not in causal relation to the agent.
In general I think anything you can coherently refer to is in some causal relation to you, i.e. all references are indexical. (A detailed explanation of this can be found in Brian Cantwell Smith’s On the Origin of Objects). One thing that might be an exception is mathematics, but that’s still in causal relation to me in the sense that mathematics affects what my computer outputs, so I can indexically refer to “the mathematical computation that is determining the outputs of the computer in front of me”.
“The marble in the box in front of me is blue”—We don’t need to provide absolute time or space co-ordinates to de-indexicalise, we just need unique co-ordinates. If “here” only refers to one possible location, we can set it to (0,0,0), or if “now” only refers to one possible time, we can set it to t=0. On the other hand, if there are things such as memory loss or copies at different points of space or time, this de-indexicalisation strategy won’t work.
(To clarify this further, there’s no reason why the box couldn’t be at (0,0,0). But let’s suppose we found out it was at (0,100,97) instead, would that change the problem? If not, we can just solve the problem where the box is specified to be at (0,0,0))
Agree that absolute coordinates are unnecessary. But de-indexicalizing can destroy information about your location in the world, depending on how you do it.
The way I would de-indexicalize Sleeping Beauty is to say there are 3 possible centered worlds when Beauty wakes up: heads/Monday, tails/Monday, and tails/Tuesday. There isn’t any need to say only one interview counts.
A possible reason for including this indexical information: Beauty is a real person, she might be curious what day it is, and what day it is might affect her plans for that day (e.g. maybe she is allowed to write letters that are read after the experiment is over, and which day it is affects which letter she wants to write). She should be able to update on local information (e.g. overhearing people talk about which day it is) to learn which day it is.
By de-indexicalise I meant to remove indexicals. The centered possible worlds approach uses indexicals, so it would be unusual to call that de-indexicalisation. It’s the other approach instead—choosing a version of probability theory that supports indexicals. So you can either remove the indexicals or use a theory that supports them.
Yeah. I came up with a similar argument last year and thought it proved thirdism for about a day, until people set me right. Halfism is internally consistent (you can make a video game where the player’s sequence of observations is generated with SSA), so these arguments can’t defeat it fully, though they are suggestive.
Your argument at that link is interesting, but I can see why Halfers would just say it’s a different problem.
For Beauty and the Prince, I start with a version where Beauty and the Prince can talk to each other, which obviously isn’t the same as the usual Sleeping Beauty problem. Supposing that it’s agreed that they should both assess the probability of Heads as 1⁄3 in this version, we then go on to a version where Beauty can see the Prince, but not talk with him. But she knows perfectly well what he would say anyway, so does that matter? And if we then put a curtain between Beauty and the Prince, so she can’t see him, though she knows he is there, does that change anything? If we move the Prince to a different room a thousand miles away, would that change things? Finally, does getting rid of the Prince altogether matter?
If none of these steps from Beauty and the Prince to the usual Sleeping Beauty matters, then the answers should be the same. So Halfers would have to claim that one or more steps does matter (or that the answer is 1⁄2 for the full Beauty and the Prince problem, but I see that as less likely). Perhaps they will claim one of these steps matters, but I see problems with this. For instance, if a Halfer thinks getting rid of the Prince altogether is different from him being in a room a thousand miles away, it seems that they would be committed to the 1⁄2 answer being sensitive to all sorts of details of the world that one would normally consider irrelevant (and which are assumed irrelevant in the usual problem statement).
I suppose the question becomes, “Why can’t Sleeping Beauty copy the Prince’s answer when he doesn’t count on Monday given that he still exists?”. And indeed, the Prince gives an answer of 1⁄3 regardless of whether or not his answer counts at that point.
I guess the answer is that halvers don’t believe that you can answer “If Sleeping Beauty is awake, what is the chance that the coin came up heads?” without de-indexicalising the situation first. After de-indexicalising, it becomes “If the Prince counts and Sleeping Beauty is awake, what are the odds that the coin came up heads?” (which is true 1⁄3 of the time).
Now that the statement has been de-indexicalised, it’s clear that including possibilities where the Prince doesn’t count or Sleeping Beauty isn’t awake doesn’t change the probability as they are filtered out by the “if” clause.
Next we de-indexicalise the question that Sleeping Beauty asks: “If Sleeping Beauty is awake and she counts, what are the odds that the coin came up heads?” It’s now clear that this includes a different set of possibilities than what the Prince asks, so they reach different answers. So even though the original question is the same, it becomes different once it is de-indexicalised. She can’t just go and ask the Prince, as he’s answering a different question.
I’ve noticed something that may explain some of the confusion. You say above:
...halvers don’t believe that you can answer: “If Sleeping Beauty is awake, what is the chance that the coin came up heads?” without de-indexicalising the situation first.
But in the Sleeping Beauty problem as usually specified, the question is what probability Beauty should assign to Heads, not what some external observer should think she should be doing. Beauty is in no doubt about who she is (eg, she’s the person who just stubbed her toe on this bedpost here) even though she doesn’t know what day of the week it is.
Speaking very roughly:
Well, there are two kinds of probability that we could calculate: we could roughly call them subjective probability (which gives an answer of 1⁄3) and objective probability (which gives an answer of 1⁄2).
The confusing part is that you can ask an “objective” observer about the subjective probability relative to beauty and they’d say 1⁄3 or you can ask a “subjective” observer like beauty about the objective probability and they’d say 1⁄2.
This is further obscured by subjective probability already having a definition, so I really need to find a different name.
Anyway, I hope that most of the questions will be cleared up when I write up a more comprehensive post dealing with the various issues that have already been raised, though I’ll probably wait at least a week (quite possibly two) because I think people need time to digest all the conversation that has already occurred.
I agree that we need to remove any dependence on the indexical “today,” but what you propose doesn’t do that. Determining whether “today counts” still depends on it. But there is a way to unequivocally remove this dependence. Use four volunteers, and wake each either once or twice as in the original Sleeping Beauty Problem (OSBP). But change the day and/or coin result that determines the circumstances where each is left asleep.
So one volunteer (call her SB1) will be left asleep on Tuesday, if Heads is flipped, as in the OSBP. Another (SB2) will be left asleep on Monday, also if Heads is flipped. This is essentially the same as the OSBP, since in a non-indexical version the day can’t matter. The other two (SB3 and SB4) will be left asleep if Tails is flipped, one on Monday and one on Tuesday. And if we ask them about their confidence in Tails instead of Heads, the correct answer should be the same as the OSBP, whatever that turns out to be.
The “de-indexicalization” is accomplished by changing the question to an equivalent one. For SB1 and SB2, the truth value of the statement “I am awake now, and it is my only awakening” is always the same as “Heads was flipped.” For SB3 and SB4, it is the same as “Tails was flipped.”
Note that it can’t matter if you know which of these volunteers you are, or if you are allowed to discuss the question “Is this my only awakening?” as long as you can’t reveal which one you are to the others. On each day of the experiment, exactly three will be awake. For exactly one of those three, it will be her only awakening.
The non-indexical answer is 1⁄3.
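The four-volunteer setup can be checked by brute force; a sketch that, on each day, counts how many of the awake volunteers are at their only awakening of the experiment (the dictionary encoding of the sleep schedule is mine):

```python
import random

random.seed(1)

# Each volunteer is left asleep on exactly one (day, coin-result) combination.
sleep_rule = {            # volunteer -> (day she skips, coin result that causes it)
    "SB1": ("Tue", "H"),
    "SB2": ("Mon", "H"),
    "SB3": ("Mon", "T"),
    "SB4": ("Tue", "T"),
}

only_awakening = total_awake = 0
for _ in range(10_000):
    coin = random.choice("HT")
    for day in ("Mon", "Tue"):
        awake_today = 0
        for sb, (skip_day, skip_coin) in sleep_rule.items():
            if day == skip_day and coin == skip_coin:
                continue                      # she is left asleep on this day
            awake_today += 1
            total_awake += 1
            # "This is my only awakening": the coin result is the one that
            # puts her to sleep on the other day of this experiment.
            only_awakening += coin == skip_coin
        assert awake_today == 3               # exactly three are awake each day

print(only_awakening / total_awake)   # exactly 1/3
```

Because the symmetry is exact (three awake each day, one of whom is at her only awakening), the ratio is 1⁄3 in every trial, not just on average.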
“Determining whether “today counts” still depends on it”—No, you just ask about the (second) coin which determines which day counts and whether it shows heads or tails (for consistency assume that we flip a heads-only coin if the first coin comes up heads). So the question becomes: “What is the chance of the (first) coin being heads given Sleeping Beauty’s non-indexical state of knowledge on Monday if the second coin is heads, or Sleeping Beauty’s non-indexical state of knowledge on Tuesday if the second coin is tails?”
If an interview on one day “counts,” while an interview on another day doesn’t, you are using an indexical to discriminate those days. Adding another coin to help pick which day does not count is just obfuscating how you indexed it.
This is why betting (or frequency) arguments will never work. Essentially, the number of bets (or the number of trials in the frequency experiment) is dependent on the answer, so the argument is circular. If you decide ahead of time that you want to get 1⁄3, you will use three bets (or “trials”) that each have a 1⁄2 *prior* probability of happening to Beauty on a single *indexed* day in the experiment. If you want to get 1⁄2, you use two. So, that you get 1⁄2 by your method is not surprising in the slightest. It was pre-ordained.
You need to find a way to justify one or the other that is not a non sequitur, and that isn’t possible. You can’t justify why “ensuring that only one interview ever ‘counts’” solves an issue in the debate. You never tried, you just asserted that it was the thing to do.
Wait for my next post on this topic. Unfortunately, I chose a narrow scope for this post (only explaining the halfer response to the specific Beauty and the Prince objection, not justifying this approach in general) and everyone is posting objections that would require a whole post to answer. But basically, I will argue that there are valid reasons for adopting this formalism that aren’t merely trivial.
And in my reply I will show how you are addressing the conclusion you want to reach, and not the problem itself. No matter how you convolute choosing the sample point you ignore, you will still be ignoring one. All you will be doing is creating a complicated algorithm for picking a day that “doesn’t count,” and it will be probabilistically equivalent to saying “Tuesday doesn’t count” (since you already ignore Tue-H). That isn’t the Sleeping Beauty Problem.
But you haven’t responded to my proof, which actually does eliminate the indexing issue. Its answer is unequivocally 1⁄3. I think there is an interesting lesson to be learned from the problem, but it can’t be approached until people stop trying to make the lesson fit the answer they want.
+++++
The cogent difference between halfers and thirders is between looking at the experiment from the outside, or from the inside.
From the outside, most halfers consider Beauty’s awakenings on Mon-T and Tue-T to be the same outcome. They cannot be separated from each other. The justification for this outlook is that, over the course of the experiment, one necessitates the other. The answer from this viewpoint is clearly 1⁄2.
But it has an obvious flaw. If the plan is to tell Beauty what day it is after she answers, that can’t affect her answer but it clearly invalidates the viewpoint. The sample space that considers Mon-T and Tue-T to be the same outcome is inadequate to describe Beauty’s situation after she is told that it is Monday, so it can’t be adequate before. You want to get around this by saying that one interview “doesn’t count.” In my four-volunteer proof, this is equivalent to saying that one of the three awake volunteers “doesn’t count.” Try to convince her of that. Or, ironically, ask her for her confidence that her confidence “doesn’t count.”
But a sample space that includes Tue-T must also include Tue-H. The fact that Beauty sleeps through it does not make it “unhappen,” which is what halfers (and even some thirders) seem to think.
To illustrate, let me propose a slight change to the drugs we assume are being used. Drug A is the “go to sleep” drug, but it lasts only about 12 hours and the subject wakes up groggy. So each morning, Beauty must be administered either drug B that wakes her up and overrides the grogginess, or another dose of drug A. The only point of this change, since it cannot affect Beauty’s thought processes, is to make Tue-H a more concrete outcome.
What Beauty sees, from the inside, is a one-day experiment. Not a two-day one. At the start of this one-day experiment, there was a 3⁄4 chance that drug B was chosen, and a 1⁄4 chance that it was drug A. Beauty’s evidence is that it was drug B, and there is a 1⁄3 chance of Heads, given drug B.
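The one-day framing is ordinary conditioning over four equally likely (coin, day) cells; a minimal sketch:

```python
# Four equally likely (coin, day) cells; Beauty gets drug B (is woken properly)
# in every cell except (Heads, Tuesday), where she gets another dose of drug A.
cells = [("H", "Mon"), ("H", "Tue"), ("T", "Mon"), ("T", "Tue")]
drug_b = [c for c in cells if c != ("H", "Tue")]

p_drug_b = len(drug_b) / len(cells)                                # 3/4
p_heads_given_b = sum(c[0] == "H" for c in drug_b) / len(drug_b)   # 1/3

print(p_drug_b, p_heads_given_b)   # 0.75 0.333...
```

The evidence "I got drug B" rules out one of the two heads cells, taking heads from 2-in-4 beforehand to 1-in-3 afterwards.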
Suppose there is a roulette table. The host throws the ball. If red, Beauty is woken up once; if black, twice.
When woken, Beauty is asked to bet 1 dollar on either red or black. Roulette betting rules apply. Now there are two Beauties—red and black. Red always bets red, black always bets black. Both undergo the experiment 100 times.
In roulette, a red number comes up ~50% of the time. So the Red queen wins ~$50 and loses ~$100, as for every black number she bets and loses $1 twice.
The Black queen comes out ~$50 ahead. In the halfer world, both should end up at 0.
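A simulation bears the asymmetry out; a sketch (scaled to 10,000 spins rather than 100 to reduce noise, and ignoring the zero pocket):

```python
import random

random.seed(2)

red_queen = black_queen = 0          # running bankrolls, in dollars
spins = 10_000
for _ in range(spins):
    colour = random.choice(["red", "black"])   # ignoring the zero pocket
    awakenings = 1 if colour == "red" else 2   # black -> two awakenings, two bets
    for _ in range(awakenings):
        # each queen bets $1 at even money on her own colour at every awakening
        red_queen += 1 if colour == "red" else -1
        black_queen += 1 if colour == "black" else -1

print(red_queen, black_queen)   # roughly -5000 and +5000
```

Per experiment the Red queen wins $1 half the time and loses $2 the other half (about −$0.50 on average), while the Black queen's numbers are mirrored, matching the ~$50 swing described above at the 100-experiment scale.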
I didn’t address the betting odds argument as it’s been covered extensively in other posts, but instead of just calculating the odds based on the probability, you need to add an extra parameter for the number of repeats.
But if we’re talking about an ordinary Sleeping Beauty problem, there are no repeats—no multiple instances of Beauty with exactly the same memories. Whatever betting scheme may have been defined, when Beauty decides what to bet, her decision is made at a single moment in time, and applies only for that time, affecting her payoff according to whatever the rules of the betting scheme may be. She is allowed to make a different decision at a different time (though of course she may in fact make the same decision), and again that will affect her payoff (or not) according to the rules of the scheme. There is no scope for any unusual relationship between probability and betting odds.