More marbles and Sleeping Beauty
I
Previously I talked about an entirely uncontroversial marble game: I flip a coin, and if Tails I give you a black marble, if Heads I flip another coin to either give you a white or a black marble.
The probabilities of seeing the two marble colors are 3⁄4 and 1⁄4, and the probabilities of Heads and Tails are 1⁄2 each.
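To make the setup concrete, here's a minimal simulation sketch of the marble game (the function name, number of rounds, and so on are just illustrative choices, not anything from the original setup):

```python
import random

def marble_game():
    """One round: Tails gives a black marble; Heads flips a second coin
    to choose black or white."""
    first = random.choice(["Heads", "Tails"])
    if first == "Tails":
        marble = "black"
    else:
        marble = random.choice(["black", "white"])
    return first, marble

rounds = [marble_game() for _ in range(100_000)]
print(sum(m == "black" for _, m in rounds) / len(rounds))   # ~3/4 black
print(sum(f == "Heads" for f, _ in rounds) / len(rounds))   # ~1/2 Heads
```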
The marble game is analogous to how a ‘halfer’ would think of the Sleeping Beauty problem—the claim that Sleeping Beauty should assign probability 1⁄2 to Heads relies on the claim that your information for the Sleeping Beauty problem is the same as your information for the marble game—same possible events, same causal information, same mutual exclusivity and exhaustiveness relations.
So what’s analogous to the ‘thirder’ position, after we take into account that we have this causal information? Is it some difference in causal structure, or some non-causal anthropic modification, or something even stranger?
As it turns out, nope, it’s the same exact game, just re-labeled.
In the re-labeled marble game you still have two unknown variables (represented by flipping coins), and you still have a 1⁄2 chance of black and Tails, a 1⁄4 chance of black and Heads, and a 1⁄4 chance of white and Heads.
And then to get the thirds, you ask the question “If I get a black marble, what is the probability of the faces of the first coin?” Now you update to P(Heads|black)=1/3 and P(Tails|black)=2/3.
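The update itself is just Bayes' rule applied to the joint outcomes above; here's a quick exact check using Python's Fraction, purely for illustration:

```python
from fractions import Fraction

# Joint probabilities from the marble game.
p_black_and_tails = Fraction(1, 2)
p_black_and_heads = Fraction(1, 4)
p_white_and_heads = Fraction(1, 4)

p_black = p_black_and_tails + p_black_and_heads   # 3/4
print(p_black_and_heads / p_black)                # 1/3 = P(Heads|black)
print(p_black_and_tails / p_black)                # 2/3 = P(Tails|black)
```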
II
Okay, enough analogies. What’s going on with these two positions in the Sleeping Beauty problem?
[Diagram 1 and Diagram 2: figures not shown]

Here are two different diagrams, which are really re-labelings of the same diagram. The first labeling is the problem where P(Heads|Wake) = 1⁄2. The second labeling is the problem where P(Heads|Wake) = 1⁄3. The question at hand is really: which of these two math problems corresponds to the word problem / real world situation?
As a refresher, here’s the text of the Sleeping Beauty problem that I’ll use: Sleeping Beauty goes to sleep in a special room on Sunday, having signed up for an experiment. A coin is flipped—if the coin lands Heads, she will only be woken up on Monday. If the coin lands Tails, she will be woken up on both Monday and Tuesday, but with memories erased in between. Upon waking up, she then assigns some probability to the coin landing Heads, P(Heads|Wake).
Diagram 1: First a coin is flipped to get Heads or Tails. There are two possible things that could be happening to her, Wake on Monday or Wake on Tuesday. If the coin landed Heads, then she gets Wake on Monday. If the coin landed Tails, then she could either get Wake on Monday or Wake on Tuesday (in the marble game, this was mediated by flipping a second coin, but in this case it’s some unspecified process, so I’ve labeled it [???]). Because all the events already assume she Wakes, P(Heads|Wake) evaluates to P(Heads), which just as in the marble game is 1⁄2.
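Here is the same kind of exact bookkeeping for diagram 1. Since every outcome already includes waking, conditioning on Wake changes nothing. This is only a sketch: I've stood in for the [???] node with a fair coin, which is one arbitrary choice, exactly in analogy to the marble game.

```python
from fractions import Fraction

# Diagram 1 outcomes: (coin, awakening); all of them involve waking.
outcomes = {
    ("Heads", "Wake Monday"): Fraction(1, 2),
    ("Tails", "Wake Monday"): Fraction(1, 4),   # [???] resolved one way
    ("Tails", "Wake Tuesday"): Fraction(1, 4),  # [???] resolved the other way
}

p_wake = sum(outcomes.values())   # 1: waking is certain in this diagram
p_heads_given_wake = sum(p for (coin, _), p in outcomes.items()
                         if coin == "Heads") / p_wake
print(p_heads_given_wake)         # 1/2
```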
This [???] node here is odd, can we identify it as something natural? Well, it’s not Monday/Tuesday, like in diagram 2 - there’s no option that even corresponds to Heads & Tuesday. I’m leaning towards the opinion that this node is somewhat magical / acausal, just hanging around because of analogy to the marble game. So I think we can take it out. A better causal diagram with the halfer answer, then, might merely be Coin → (Wake on Monday / Wake on Tuesday), where Monday versus Tuesday is not determined at all by a causal node, merely informed probabilistically to be mutually exclusive and exhaustive.
Diagram 2: A coin is flipped, Heads or Tails, and also it could be either Monday or Tuesday. Together, these have a causal effect on her waking or not waking—if Heads and Monday, she Wakes, but if Heads and Tuesday, she Doesn’t wake. If Tails, she Wakes. Her pre-Waking prior for Heads is 1⁄2, but upon waking, the event Heads, Tuesday, Don’t Wake gets eliminated, and after updating P(Heads|Wake)=1/3.
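And here is the corresponding bookkeeping for diagram 2, where the coin and the day are independent and one joint outcome involves not waking. Again, this is just a sketch of the arithmetic the diagram implies:

```python
from fractions import Fraction

# Diagram 2: coin and day are independent, each 1/2.
joint = {}
for coin in ("Heads", "Tails"):
    for day in ("Monday", "Tuesday"):
        wake = not (coin == "Heads" and day == "Tuesday")
        joint[(coin, day, wake)] = Fraction(1, 4)

# Condition on waking: drop (Heads, Tuesday, don't wake) and renormalize.
p_wake = sum(p for (_, _, wake), p in joint.items() if wake)   # 3/4
p_heads_given_wake = sum(p for (coin, _, wake), p in joint.items()
                         if wake and coin == "Heads") / p_wake
print(p_heads_given_wake)                                      # 1/3
```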
There’s a neat asymmetry here. In diagram 1, when the coin was Heads she got the same outcome no matter the value of [???], and only when the coin was Tails were there really two options. In Diagram 2, when the coin is Heads, two different things happen for different values of the day, while if the coin is Tails the same thing happens no matter the day.
Do these seem like accurate depictions of what’s going on in these two different math problems? If so, I’ll probably move on to looking closer at what makes the math problem correspond to the word problem.
A better “outside perspective” causal diagram for the real world would be something like “coin flip on Sunday causes memory of coin flip on Monday, which causes waking or not waking on Monday and memory of the coin flip on Tuesday, which causes waking or not waking on Tuesday.” If the memories are perfectly correlated with the coin flip, they can be collapsed into one node with no losses. But now our diagram is just the coin flip causing waking or not on Monday, and waking or not on Tuesday.
Switching to a perspective in the middle of the experiment then requires the different days to be mutually exclusive and exhaustive, and conditions on waking rather than not waking.
This gives P(Heads|Wake)=1/3, but doesn’t have the day as a causal node.
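One way to operationalize this collapsed picture is to run the experiment many times, collect all the awakenings it produces, and ask what fraction of them follow a Heads flip. A small frequency sketch, with illustrative names, assuming that conditioning on waking means sampling uniformly among awakenings:

```python
import random

def run_experiment():
    """One run: Heads -> wake only Monday; Tails -> wake Monday and Tuesday."""
    coin = random.choice(["Heads", "Tails"])
    days_awake = ["Monday"] if coin == "Heads" else ["Monday", "Tuesday"]
    return [(coin, day) for day in days_awake]

awakenings = [a for _ in range(100_000) for a in run_experiment()]
print(sum(coin == "Heads" for coin, _ in awakenings) / len(awakenings))  # ~1/3
```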
So, there are a few differences between these two diagrams, but they are all closely related. The two main ones are whether the day is a causal node independent of the coin flip, and whether not waking up on Tuesday counts as an event. The first difference implies the second but not vice versa, so the first is the stronger condition and the second the weaker.
I think it’s not hard to answer these questions. But I also want to develop guidelines for what to do in other problems.
Here is what I consider a slam dunk against diagram 1: Before the experiment, what is Sleeping Beauty’s probability that on Tuesday, she doesn’t wake up? 1⁄2.
If she has a probability for this thing, she must be representing it as an event. This is that second difference I mentioned. In fact, the whole reason the probability changes upon waking in diagram 2 is because this non-waking option gets eliminated and its probability mass evenly redistributed.
If diagrams 1 and 2 were the only two options, this would sort of be the end—but they’re not, they’re just two diagrams that we promoted to attention by some other process. We want to be able to help out the intuitive process that fits these causal models to this story problem.
The categories to keep track of are the possible events, the causal structure, and the constraints on the various nodes (at least for simple problems like this where those things don’t change based on observations).
When you update on observations in these simple problems, that doesn't mean editing the causal diagram to remove the outcomes the observation rules out. Instead, you leave your information about causation unchanged and update just by conditionalization.
Hmm, maybe one can’t eliminate the [???] node. This is because non-causal-diagram information, such as observations, is conditioned on for all nodes, in the ordinary non-causal-diagram way. So the information encoded in diagram 1 really is causal, even if it’s unphysical. One could interpret this as a heuristic argument against diagram 1 - no unphysical causal nodes.
In general, you can remove the [???] node. Causal information can just be about how the wake on (day) node's value is determined by the coin flip; it doesn't necessitate another node. And even though this doesn't fully determine the value of the wake on (day) node, our information still determines our probabilities.
Though if the causal picture of the universe is also true, the different undetermined choices are caused by either an extra causal factor ([???]) or un-tracked differences in the coin flip—just like how the different outcomes of the coin flip are caused by small differences in initial conditions.
If all coin flips are treated the same, this forces the decision to be made by some other sort of initial conditions. And it is a peculiarity of the Sleeping Beauty problem that this would be unphysical for diagram 1 - if we go back to the marble game, there’s any number of physical processes that work.