I don’t understand your question. Are you saying that Beauty flips a coin whenever she wakes up? And she then wonders whether the coin she just flipped is the same as another coin she has flipped or will flip? But she may not wake up on Tuesday, in which case there aren’t two flips, so I don’t understand....
I’m referring to an example from here: https://users.cs.duke.edu/~conitzer/devastatingPHILSTUD.pdf where you do wake up both days.
Your argument seemed similar, but I may be misunderstanding:
“Treating these and other differences as random, the probability of Beauty having at some time the exact memories and experiences she has after being woken this time is twice as great if the coin lands Tails than if the coin lands Heads, since with Tails there are two chances for these experiences to occur rather than only one.”
It sounds like you are conditioning on “at least once such experiences occur”. That is, if Beauty wakes up and flips a coin, getting heads, and that’s the only experience she has so far, she will condition on “at least one heads.” This doesn’t seem generally correct, as the linked example covers. Doesn’t it also mean that, even before the coin flip, she would know exactly how she was going to update her probability afterward, regardless of result?
Perhaps the issue here is that if you wake up and flip heads, that isn’t the same thing as if, on Sunday, you asked “will I flip at least one heads?” and got an affirmative answer. The latter is relevant to the number of wakings. The former is not.
The crucial point is that Beauty’s experiences on wakening will not be confined to whatever coin flips may have been added to the experiment, but will also include many other things, such as whether or not her nose itches, and how much the fluorescent light in her room is buzzing. The probability of having a specific set of ALL her experiences is twice as great if she is woken twice (more precisely, approximately twice as great, if this probability is small, as it will be in any not-very-fantastical version of the problem).
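To spell out the “approximately twice as great” step (a sketch of the arithmetic, assuming the experiences on the two wakings are independent and that the exact experience set has some small probability ε on any single waking): with Tails there are two chances, so the probability of at least one occurrence is 1 − (1 − ε)² = 2ε − ε², which is essentially 2ε when ε is small.

```python
# Sketch: chance of a specific, rare experience set occurring at least once
# in two independent wakings (Tails) versus in a single waking (Heads).
eps = 1e-6  # assumed small per-waking probability of the exact experience set

p_heads = eps                   # one waking, one chance
p_tails = 1 - (1 - eps) ** 2    # two wakings, at least one occurrence

print(p_tails / p_heads)        # ~1.999999, i.e. approximately 2
```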
Arguing that whether or not her nose itches is irrelevant, and so should not be conditioned on, is contrary to the ordinary rules of probability, in which any dispute over whether some information is relevant is settled by simply including it, since doing so makes no difference if it is actually irrelevant. Refusing to condition on such information is like someone solving a physics problem who says that air resistance can be ignored as negligible, and then continues to insist on ignoring it after being shown a calculation demonstrating that including it substantially changes the answer.
The question is about what information you actually have.
In the linked example, it may seem that you have precisely the information “there is at least one heads.” But if you condition on that you get the wrong answer. The explanation is that, in this type of memory-loss situation, waking up and experiencing y is not equivalent to “I experience y at least once.” When you wake up and experience y you do know that you must experience y on either Monday or Tuesday, but your information is not equivalent to that statement.
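To make “the wrong answer” concrete (a quick sketch of the naive calculation, using the two-flip setup of the linked example): treating your information as just “there is at least one heads” conditions on three of the four equally likely pairs and gives 1⁄3 for the flips matching, rather than 1⁄2.

```python
from itertools import product

# The four equally likely pairs of flips (one flip per waking, or per child).
pairs = list(product("HT", repeat=2))

# Naive conditioning: keep every pair containing at least one heads.
at_least_one_h = [p for p in pairs if "H" in p]
same = [p for p in at_least_one_h if p[0] == p[1]]

print(len(same) / len(at_least_one_h))   # 0.333... -- the wrong answer described above
```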
If you asked on Sunday “will I experience y at least once?” then the answer would be relevant. But if we nailed down the precise information gained from waking up and experiencing y, it would be irrelevant.
Beauty’s information isn’t “there is at least one head”, but rather, “there is a head seen by Beauty on a day when her nose itches, the fly on the wall is crawling upwards, her sock is scrunched up uncomfortably in her left shoe, the sharp end of a feather is sticking out of one of the pillows, a funny tune she played in high school band is running through her head, and so on, and so on.”
I’m talking about the method you’re using. It looks like when you wake up and experience y you are treating that as equivalent to “I experience y at least once.”
This method is generally incorrect, as shown in the example. Waking up and experiencing y is not necessarily equivalent to “I experience y at least once.”
If you yourself believe the method is incorrect when y is “flip heads”, why should we believe it is correct when y is something else?
After my other response to this, I thought a bit more about the scenario described by Conitzer. A completely non-fantastical version of this would be as follows (somewhat analogous to my Sailor’s Child problem, though the whole child bit is not really necessary here):
You have two children. At age 10, you tell both of them that their uncle has flipped two coins, one associated with each child, though the children are told nothing that would let them tell which is “their” coin. When they turn 20, they will each be told how “their” coin landed, in two separate rooms so they will not be able to communicate with each other. They will then be asked what the probability is that the two coin flips were the same. (The two children correspond to two awakenings of Beauty.)
If you are one of these children, and are told that “your” coin landed heads, what should you give for the probability that the two flips are the same? It’s obvious that the correct answer is 1⁄2. But you might argue that there are four equally-likely possibilities for the two flips—HH, HT, TH, and TT—and that observing a head eliminates TT, giving a 1⁄3 probability that the two flips are the same.
This is of course an elementary mistake in probabilistic reasoning, caused by not using the right space of outcomes. Suppose that one of the children is left-handed and one is right-handed. Then there are actually eight equally-likely possibilities—RHH, LHH, RHT, LHT, RTH, LTH, RTT, LTT—where the initial R or L indicates whether the first coin is for the right-handed child or the left-handed child. Suppose you are the right-handed child. Observing heads eliminates LHT, RTH, RTT, and LTT, with the remaining possibilities being RHH, LHH, RHT, and LTH, in half of which the flips are the same. So the answer is now seen to be 1⁄2.
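This enumeration can be checked mechanically (a short sketch of the outcome space just described):

```python
from itertools import product

# Outcomes: (owner of the first coin, first flip, second flip), all equally likely.
# 'R' means the first coin is the right-handed child's, 'L' the left-handed child's.
outcomes = list(product("RL", "HT", "HT"))

def flip_seen_by_right_handed_child(owner, first, second):
    return first if owner == "R" else second

# Condition on the right-handed child being told "heads".
kept = [o for o in outcomes if flip_seen_by_right_handed_child(*o) == "H"]  # RHH, LHH, RHT, LTH
same = [o for o in kept if o[1] == o[2]]                                    # RHH, LHH

print(len(same) / len(kept))   # 0.5
```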
But why is this the right answer to this non-fantastical problem? (I take it that it is correct, and that this is not controversial.) The reason is that we know how probability works in ordinary situations, in which personal identities are clear, because everyone has different experiences. If instead we make a fantastic assumption that the two children are identical twins raised apart in absolutely identical environments, and therefore have exactly the same thoughts and experiences, up until the point at age 20 when they are told possibly-different things about how their coins landed, it may not be so clear that 1⁄2 is the right answer. It’s also not so clear that this fantastic scenario is possible, or of interest. It certainly would not be a good idea to treat it as being just the same as the non-fantastical scenario, apart from a little simplifying assumption about identical experiences...
In any not-completely-fantastical scenario, Beauty’s experiences on Monday are very unlikely to be repeated exactly on Tuesday, so “experiences y” and “experiences y at least once” are effectively equivalent. Any argument that relies on her sensory input being so restricted that there is a substantial probability of identical experiences on Monday and Tuesday applies only to a fantastical version of the problem. Maybe that’s an interesting version of the problem (though maybe instead it’s simply an impossible version), but it’s not the same as the usual, only-mildly-fantastical version.
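One way to see how much the richness of the experiences matters (an illustrative toy model, not anything claimed above): suppose each waking produces one experience drawn uniformly and independently from N distinguishable possibilities. Then a specific experience y occurs at least once with probability 1 − (1 − 1/N)² under Tails versus 1/N under Heads, a likelihood ratio of 2 − 1/N. This is close to 2 unless the possible experiences are so restricted that exact repetition is likely, which is the fantastical case.

```python
# Toy model: N equally likely, independent possible experiences per waking.
# Likelihood ratio Tails:Heads for a specific experience y occurring at least once.
for n in (1, 2, 10, 1_000_000):
    ratio = (1 - (1 - 1 / n) ** 2) / (1 / n)
    print(n, ratio)   # 1.0, 1.5, 1.9, ~2.0 -- near 2 except when experiences are very restricted
```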