This is one of those cases where we need to disentangle the dispute over definitions (1), set aside the notion of subjective anticipation (2), list the well-defined questions, and ask which one we mean.
If by the probability we mean the fraction of waking moments, the answer is 1⁄3.
If by the probability we mean the fraction of branches, the answer is 1⁄2.
(1) http://lesswrong.com/lw/np/disputing_definitions/
(2) http://lesswrong.com/lw/208/the_iless_eye/
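The two readings above can be checked with a quick Monte Carlo sketch (illustrative only; the sampling model, one waking under heads and two under tails, is the standard Sleeping Beauty setup):

```python
import random

random.seed(0)
N = 100_000
heads_branches = 0   # branches (experiments) in which the coin came up heads
heads_wakings = 0    # waking moments preceded by heads
total_wakings = 0

for _ in range(N):
    heads = random.random() < 0.5
    wakings = 1 if heads else 2   # heads: Monday only; tails: Monday and Tuesday
    heads_branches += heads
    heads_wakings += wakings if heads else 0
    total_wakings += wakings

print(heads_branches / N)             # fraction of branches with heads, ~0.5
print(heads_wakings / total_wakings)  # fraction of waking moments with heads, ~1/3
```

The same process yields both numbers; the disagreement is only over which ratio to call "the probability".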
It’s hard to make a sensible notion of probability out of “fraction of waking moments”. Two subsequent states of a given dynamical system make for poor distinct elements of a sample space: when we’ve observed that the first moment of a given dynamical trajectory is not the second, what are we going to do when we encounter the second one? It’s already ruled “impossible”! Thus, Monday and Tuesday under the same circumstances shouldn’t be modeled as two different elements of a sample space.
As Wei Dai and Roko have observed, that depends on why you’re asking in the first place. Probability estimates should pay rent in correct decisions. If you’re making a bet that will pay off once at the end of the experiment, you should count the fraction of branches. If you’re making a bet that will pay off once per wake-up call, you should count the fraction of wake-up calls.
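A minimal worked example of the two bet types (the $1 even-odds bet on heads is an illustrative choice, not from the original discussion):

```python
# Expected profit of a $1 even-odds bet on heads, under the two settlement rules.
p_heads = 0.5

# Settled once at the end of the experiment:
ev_per_experiment = p_heads * (+1) + (1 - p_heads) * (-1)

# Settled at every wake-up (under tails the bet is collected twice):
ev_per_wakeup = p_heads * (+1) + (1 - p_heads) * (-2)

print(ev_per_experiment)  # 0.0  -> even odds are fair, matching P(heads) = 1/2
print(ev_per_wakeup)      # -0.5 -> even odds lose; break-even needs 2:1, matching 1/3
```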
That’s the wrong way to look at it. A certain bet may be the “correct” action to perform, or even a certain ritual of cognition may pay its rent, but it won’t be about the concept of probability. Circumstances may make it preferable to do or say anything, but that won’t influence the meaning of fixed concepts. You can’t argue that 2+2 is in fact 5 on the grounds that saying that saves puppies. You may say that 2+2 is 5, or think that “probability of Tuesday” is 1⁄3 or 1⁄4 in order to win, but that won’t make it so, it will merely make you win.
Subjective probability is not a well-defined concept in the general case. Fractions are well-defined, but only after you’ve decided where you are getting the numerator and denominator from.
That fractions are well-defined doesn’t make them probabilities.
Let us not sacrifice the effectiveness of our concepts in order to make them mathematically elegant. If reality gives you problems where you win by reasoning anthropically, but ordinary probability theory is not up to the job, then invent UDT and use that instead.
The winning thing might be better than the probability thing, but it won’t be a probability thing just because it’s winning. Also, UDT weakly relies on the same framework of expected utility and probability spaces, defined exactly as I discuss them in the comments to this post.
Not all of the waking moments have the same probability of occurring. If you estimate the probability of heads by the proportion of waking moments that were preceded by heads, you’d be throwing out information. Again, on a random waking moment, Monday preceded by heads is more likely than Monday preceded by tails.
On a random waking moment, Monday preceded by heads is equally likely as Monday preceded by tails.
I think you’re thinking of a similar problem that we discussed last year, which involves a forgetful driver who is driving past 1 to n intersections, and needs to turn left at at least one of them. That problem is different, because it’s asking about the probability of turning left at least once over the course of his drive.
The absent-minded driver is essentially the same problem, but it’s easier to analyze because its explicit payoff specification prompts you to estimate the expected value of possible strategies. In estimating those strategies, we use the same probability model that would say “1/2” in the Beauty problem.
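For concreteness, here is a sketch of that expected-value calculation, assuming the standard Piccione–Rubinstein payoffs (0 for exiting at the first junction, 4 for exiting at the second, 1 for driving past both); note the expectation is taken over branches, the same model that answers “1/2” in the Beauty problem:

```python
# Expected value of the randomized strategy "continue with probability p"
# in the absent-minded driver problem (standard payoffs assumed: exit at
# first junction -> 0, exit at second -> 4, drive past both -> 1).
def expected_value(p):
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Grid search over p; the analytic optimum is p = 2/3 with EV = 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_value)
print(best_p, expected_value(best_p))  # ~0.667, ~1.333
```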
Nope. P(monday and heads)=1/2. P(monday and tails)=1/4. P(tuesday and tails)=1/4. Remember, these have to add to 1.
How come P(monday and heads) and P(monday and tails) are not the same? This is an ordinary unbiased coin, yes?
How come P(monday and tails) and P(tuesday and tails) are not the same? Nothing happens in the interim, yes?
Before she wakes, the probabilities SB would assign if she were conscious are P(monday and heads) = P(monday and tails) = P(tuesday and heads) = P(tuesday and tails) = 1⁄4.
After waking, she would update to P(tuesday and heads) = 0 and P(monday and heads) = P(monday and tails) = P(tuesday and tails) = 1⁄3, since P(wakes up | tuesday and heads) = 0 and P(wakes up | monday and heads) = P(wakes up | monday and tails) = P(wakes up | tuesday and tails) = 1.
Ugh. That makes no sense. Can you explain why she would update in such a manner?
SB starts out with four equally likely possibilities. On observing that she wakes up, she eliminates one of them, but does not distinguish between the remaining possibilities. Renormalizing the probabilities gives probability 1⁄3 to each remaining possibility. In odds form:
Odds(monday and heads : monday and tails : tuesday and heads : tuesday and tails | wakes up)
= Odds(monday and heads : monday and tails : tuesday and heads : tuesday and tails) * Likelihood(wakes up | monday and heads : monday and tails : tuesday and heads : tuesday and tails)
= (1 : 1 : 1 : 1) * (1 : 1 : 0 : 1)
= (1 : 1 : 0 : 1)
= (1/3 : 1/3 : 0 : 1/3)
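That renormalization can be checked by simulation (a sketch assuming the commenter’s model: a uniform prior over the four day–coin cells, with Beauty asleep only on Tuesday-after-heads):

```python
import random

random.seed(0)

# Sample (day, coin) uniformly over the four cells, then condition on waking.
counts = {}
trials = 0
for _ in range(400_000):
    day = random.choice(["monday", "tuesday"])
    coin = random.choice(["heads", "tails"])
    if day == "tuesday" and coin == "heads":
        continue  # ruled out by the observation "I am awake"
    counts[(day, coin)] = counts.get((day, coin), 0) + 1
    trials += 1

for cell, n in sorted(counts.items()):
    print(cell, round(n / trials, 3))  # each remaining cell ~1/3
```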
I agree, but don’t see how this works as a reply to Phil’s comment.