> you keep misunderstanding how my example is very different from yours.
I understand that they are different; that’s the whole point. They are different in such a way that we can agree that the answer to my problem is clearly 1⁄2, while we can’t agree on the answer to your problem.
But none of their differences actually affect the mathematical argument you have constructed, so the reasoning by which you arrive at the answer 1⁄3 in your problem would arrive at the same answer in mine.
> Amnesia is irrelevant
What amnesia does in Sleeping Beauty is ensure that the Beauty can’t order the outcomes. So when she is awakened she doesn’t know whether it’s the first awakening or the second. She is unable to observe the event “I’ve been awakened twice in this experiment”. A similar effect is achieved by the fact that she is given a random ball from the box: she doesn’t know whether it’s the first ball or the second, and she can’t directly observe whether there are two balls in the box or only one.
> In mine, there are two “samplings.”
Which is completely irrelevant to your mathematical argument about four equiprobable states, because you’ve constructed it in such a manner that the same probabilities are assigned to all of them regardless of whether the Beauty is awake or not. Your whole argument is based on “there are four equiprobable states and one of them is incompatible with the observations”; it does not depend on the number of observations.
Now, there is a different argument that you could have constructed that would take advantage of the two awakenings. You could have said that when the first coin is Tails there are twice as many awakenings as when it’s Heads, and claim that we should interpret this as P(Awake|T1) = 2P(Awake|H1). But that is very much not the argument you were talking about. In a couple of days, in my next post, I’m explicitly exploring both of them.
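For what it’s worth, that alternative reading does produce the thirder answer mechanically; a quick Bayes sketch (the likelihood value here is a placeholder — only the 2:1 ratio, which is the disputed claim, matters):

```python
# Priors for the first coin and the disputed "Awake" likelihoods
p_h1, p_t1 = 0.5, 0.5
p_awake_h1 = 0.4                 # placeholder value; only the ratio below matters
p_awake_t1 = 2 * p_awake_h1      # the disputed claim: P(Awake|T1) = 2P(Awake|H1)

# Bayes' rule then yields the thirder answer
p_h1_given_awake = (p_h1 * p_awake_h1) / (p_h1 * p_awake_h1 + p_t1 * p_awake_t1)
print(p_h1_given_awake)          # ≈ 0.333..., i.e. 1/3
```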
> The probability in each is completely independent of the other.
This is wrong, and it’s very easy to check. You may simulate your experiment a large number of times, writing down the states of the coins on every awakening, and notice that there is a clear way to predict the next token better than chance:
```python
def predict_next(tokens, i):
    # a first-pass TH is always followed by TT, and a first-pass TT by TH
    if tokens[i] == "TH" and tokens[i-1] != "TT":
        return "TT"
    if tokens[i] == "TT" and tokens[i-1] != "TH":
        return "TH"
    return None  # otherwise, no better-than-chance prediction
```
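To be concrete, here is a minimal simulation sketch. I’m assuming the two-coin design we’ve been discussing: toss both coins, awaken the Beauty unless the state is HH, then turn the second coin over and awaken her again unless the state is HH.

```python
import random

def simulate_awakenings(n, seed=0):
    """Record the two-coin state at every awakening across n experiments."""
    rng = random.Random(seed)
    tokens = []
    for _ in range(n):
        c1, c2 = rng.choice("HT"), rng.choice("HT")
        if (c1, c2) != ("H", "H"):          # first pass: awaken unless HH
            tokens.append(c1 + c2)
        c2 = "T" if c2 == "H" else "H"      # the second coin is turned over
        if (c1, c2) != ("H", "H"):          # second pass: awaken unless HH
            tokens.append(c1 + c2)
    return tokens

def rule_accuracy(tokens):
    """Apply the prediction rule above; return (hits, attempts)."""
    hits = attempts = 0
    for i in range(1, len(tokens) - 1):
        guess = None
        if tokens[i] == "TH" and tokens[i-1] != "TT":
            guess = "TT"
        elif tokens[i] == "TT" and tokens[i-1] != "TH":
            guess = "TH"
        if guess is not None:
            attempts += 1
            hits += guess == tokens[i+1]
    return hits, attempts

hits, attempts = rule_accuracy(simulate_awakenings(10_000, seed=1))
print(hits, attempts)
```

On this design the rule never misses: every prediction it makes comes true, which would be impossible if the coin states at successive awakenings were independent.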
Which is absolutely not the case in a situation where your argument actually works:
Two coins are tossed; on Heads-Heads the event doesn’t happen, on every other outcome it does. The event has happened; what is the probability that the first coin came up Heads?
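And indeed, for this single-toss version a plain enumeration gives 1⁄3 with no complications:

```python
from itertools import product

# The four equiprobable outcomes of two coin tosses
outcomes = ["".join(p) for p in product("HT", repeat=2)]  # HH, HT, TH, TT
event = [o for o in outcomes if o != "HH"]                # the event has happened

p_first_heads = sum(o[0] == "H" for o in event) / len(event)
print(p_first_heads)  # → 0.333... (one outcome, HT, out of three)
```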
> What you seem to ignore is that the method used to arrange the coins is different in the first pass through these three steps
Ignore? On the contrary, this is the exact reason why your argument doesn’t work: you treat correlated events as independent. That’s what I’ve been trying to explain to you the whole time, and why I brought up the problem with balls being put in a box, because this kind of mistake is more obvious there.
> But BECAUSE OF AMNESIA, this modification does not, in any way, affect SB’s assessment that the sample space is {HH, HT, TH, TT}, or that each has a 25% chance to be the outcome
I suppose this is our crux.
I see two possible avenues for disagreement: about the territory and about the map.
The first is whether having amnesia actually modifies the statistical properties of the experiment you are participating in. Do we agree that it doesn’t?
The second, a statement about the map which, correct me if I’m wrong, you actually hold, is that the Beauty should reason about her awakenings as independent because this represents her new state of knowledge due to amnesia?
This would be a correct statement if the Beauty had been made to forget that the events are correlated — if on her awakenings she had different information about which experiment she is participating in than she had before she was put to sleep.
But in our case the Beauty remembers the design of the experiment. She is aware that the states of the coins are not independent between the two passes, and if she reasons as if they were, she makes a mistake.
> Her answer is unambiguously 1⁄3 anytime she is asked.
Her answer is 1⁄3 to the question “What is the probability that the coin is Heads in a random awakening throughout multiple iterations of such an experiment?”
Her answer is 1⁄2 to the question “What is the probability that the coin is Heads in this particular experiment?”
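These two questions can be answered by the same simulation, counted two different ways. A sketch under the same assumed design (awaken unless HH, turn the second coin over, awaken unless HH again), tracking the first coin:

```python
import random

def heads_frequencies(n, seed=0):
    """Frequency of first-coin Heads per awakening vs. per experiment."""
    rng = random.Random(seed)
    awakening_c1, experiment_c1 = [], []
    for _ in range(n):
        c1, c2 = rng.choice("HT"), rng.choice("HT")
        experiment_c1.append(c1)
        for second in (c2, "T" if c2 == "H" else "H"):  # two passes
            if (c1, second) != ("H", "H"):              # awaken unless HH
                awakening_c1.append(c1)
    per_awakening = awakening_c1.count("H") / len(awakening_c1)
    per_experiment = experiment_c1.count("H") / len(experiment_c1)
    return per_awakening, per_experiment

print(heads_frequencies(100_000))  # ≈ (1/3, 1/2)
```

Same coin tosses, two different denominators — which is exactly why the two questions get two different answers.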
But I don’t think that ambiguity is really the problem here.