Personally, I think saying there’s “no particular asymmetry” is dangerous to the point of being flat-out wrong. The three possibilities don’t look the least bit symmetric to me; they’re all qualitatively quite different. There’s no “relevant” asymmetry, but how exactly do we know what’s relevant and what’s not? Applying symmetry in places it shouldn’t be applied is the key way in which people get these things wrong. The fact that it gives the right answer this time is no excuse.
So my challenge to you is, explain why the answer is 2⁄3 without using the word “symmetry”.
Here’s my attempt:
Start with a genuinely symmetric (prior) problem, then add the information. In this case, the genuinely symmetric problem is “It’s morning. What day is it and will/did the coin come up heads?”, while the information is “She just woke up, and the last thing she remembers is starting this particular bizarre coin/sleep game”. In the genuinely symmetric initial problem all days are equally likely and so are both coin flips. The process for applying this sort of additional information is to eliminate all scenarios it’s inconsistent with, and renormalise what’s left. The information eliminates all possibilities except (Monday, heads), (Monday, tails), (Tuesday, tails), plus some more obscure possibilities with (for the sake of argument) negligible weight. These main three had equal weight before and are equally consistent with the new information, so they have equal weight now.
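The eliminate-and-renormalise step can be sketched directly. This is just an illustration of the process described above; the scenario names and the uniform prior weights are my own framing of the “genuinely symmetric” starting problem:

```python
# Genuinely symmetric prior: each (day, flip) pair equally likely.
prior = {
    ("Monday", "heads"): 0.25,
    ("Monday", "tails"): 0.25,
    ("Tuesday", "heads"): 0.25,
    ("Tuesday", "tails"): 0.25,
}

# The anthropic information: she just woke up remembering the start of
# the game. She is never woken on (Tuesday, heads), so that scenario is
# inconsistent with the information and gets eliminated.
consistent = {s: w for s, w in prior.items() if s != ("Tuesday", "heads")}

# Renormalise what's left.
total = sum(consistent.values())
posterior = {s: w / total for s, w in consistent.items()}

# Each surviving scenario now has weight 1/3.
p_tails = sum(w for (day, flip), w in posterior.items() if flip == "tails")
print(p_tails)  # 2/3
```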
Ok, I did use the word symmetry in there but only describing a different problem where it was safe. It’s still not the best construction because my initial problem isn’t all that well framed, but you get the idea.
Note that more generally you should ask for p(new information | scenario) and apply Bayes’ rule, but anthropic-style information is a special case where the value of this is always either 0 or 1. Either it’s completely inconsistent with the scenario or guaranteed by it. That’s what leads to the simpler process I describe above of eliminating the impossible and simply renormalising what remains.
The good thing about doing it this way is that you can also get the exact answer for the case where she knows the coin is biased to land heads 52% of the time, where any idea that the scenario is symmetric is out the window.
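The biased-coin version falls out of the same machinery. Here is a sketch using the full Bayes form, where the likelihood p(information | scenario) is 0 or 1 as described above; the 52% figure is from the text, and the scenario bookkeeping is my own:

```python
# Biased coin: p(heads) = 0.52, so the prior over (day, flip) is no
# longer symmetric -- the day is uniform but the flip is not.
p_heads = 0.52

prior = {
    ("Monday", "heads"): 0.5 * p_heads,
    ("Monday", "tails"): 0.5 * (1 - p_heads),
    ("Tuesday", "heads"): 0.5 * p_heads,
    ("Tuesday", "tails"): 0.5 * (1 - p_heads),
}

# Anthropic-style likelihood: 1 in every scenario where she is woken,
# 0 in the one scenario where she isn't.
likelihood = {s: (0.0 if s == ("Tuesday", "heads") else 1.0) for s in prior}

unnormalised = {s: prior[s] * likelihood[s] for s in prior}
total = sum(unnormalised.values())
posterior = {s: w / total for s, w in unnormalised.items()}

p_tails = sum(w for (day, flip), w in posterior.items() if flip == "tails")
print(round(p_tails, 4))  # ~0.6486 -- close to, but not exactly, 2/3
```

With a fair coin the same code reproduces 2⁄3; with the bias, the answer shifts to 0.48⁄0.74 ≈ 0.649, which no symmetry argument would hand you directly.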
Note that more generally you should ask for p(new information | scenario) and apply Bayes Rule, but anthropic-style information is a special case where the value of this is always either 0 or 1
But it’s not entirely special, which is interesting. For example, say it’s 8:00 and you have two buckets and there’s one ball in one of the buckets. You have a 1⁄2 chance of getting the ball if you pick a bucket. Then, at exactly 8:05, you add another bucket and mix up the ball. Now you have a 1⁄3 chance of getting the ball if you pick a bucket.
But what does Bayes’ rule say? Well, P(get the ball | you add a third bucket) = P(get the ball) * P(you add a third bucket | get the ball) / P(you add a third bucket). Since you always add a third bucket whether you get the ball or not, it seems the update is just 1/1=1, so adding a third bucket doesn’t change anything. I would claim that this apparent failure of Bayes’ rule (failure of interpreting it, more likely) is analogous to the apparent failure of Bayes’ rule in the sleeping beauty problem. But I’m not sure why either happens, or how you’d go about fixing the problem.
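The direct probabilities in the bucket story can at least be checked by simulation. This doesn’t resolve the puzzle about the naive conditioning; it only confirms that the chance really does move from 1⁄2 to 1⁄3 even though “a third bucket is added” happens with probability 1:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
trials = 100_000

# Before 8:05: ball hidden in one of 2 buckets, you pick one of 2.
hits_before = sum(
    random.randrange(2) == random.randrange(2) for _ in range(trials)
)

# After 8:05: a third bucket is added and the ball is reshuffled
# among all 3; you pick one of 3.
hits_after = sum(
    random.randrange(3) == random.randrange(3) for _ in range(trials)
)

print(hits_before / trials)  # close to 0.5
print(hits_after / trials)   # close to 0.333
```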