I think that 1⁄2 is an acceptable answer, and in fact the only correct answer. Basically 1⁄2 corresponds to SSA, and 1⁄3 to SIA; and in my opinion SSA is right, and SIA is wrong.
We can convert the situation to an equivalent Incubator situation to see how SSA applies. We have two cells. We generate a person and put them in the first cell. Then we flip a coin. If the coin lands heads, we generate no one else. If the coin lands tails, we generate a new person and put them in the second cell.
Then we question all of the persons: “Do you think you are in the first cell, or the second?” “Do you think the coin landed heads, or tails?” To make things equivalent to your description we could question the person in the first cell before the coin is flipped, and the person in the second only if they exist, after it is flipped.
Estimates based on SSA:
P(H) = .5
P(T) = .5
P(1st cell) = .75 [if the coin lands heads (50%), I am certainly in the first cell; if it lands tails (50%), SSA makes me equally likely to be either person, so .5 + .5 × .5 = .75]
P(2nd cell) = .25 [likewise]
P(H | 1st cell) = 2⁄3 [from above]
P(T | 1st cell) = 1⁄3 [likewise]
P(H | 2nd cell) = 0
P(T | 2nd cell) = 1
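The SSA numbers above can be checked with a few lines of arithmetic. This is just the total-probability and Bayes computation described in the brackets, written out in plain Python (exact fractions, so no rounding):

```python
from fractions import Fraction

half = Fraction(1, 2)

# SSA: pick a world (H or T, each 1/2), then pick uniformly among
# the people who exist in that world.
# Heads world: only the first-cell person exists.
# Tails world: both persons exist; I am each with probability 1/2.
p_h_and_cell1 = half * 1       # heads => certainly in the first cell
p_t_and_cell1 = half * half    # tails => 50% chance of being the first person
p_t_and_cell2 = half * half

p_cell1 = p_h_and_cell1 + p_t_and_cell1   # 3/4
p_cell2 = p_t_and_cell2                   # 1/4

p_h_given_cell1 = p_h_and_cell1 / p_cell1 # 2/3
p_t_given_cell1 = p_t_and_cell1 / p_cell1 # 1/3

print(p_cell1, p_cell2, p_h_given_cell1, p_t_given_cell1)
```

Running it reproduces exactly the list above: 3/4, 1/4, 2/3, 1/3.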
Your mistake is in the assumption that “P(Heads | Monday) = P(Tails | Monday) = 1⁄2, since if it’s Monday then a fair coin is about to be flipped.” The Doomsday-style conclusion that I fully embrace is that if it is Monday, then it is more likely that the coin will land heads.
I’m curious: is this grounded in anything beyond your intuition in these cases?
SIA is grounded in frequency. In the Incubator situation, the SIA probabilities are:
P(1st cell) = 2⁄3
P(2nd cell) = 1⁄3
P(H | 1st cell) = 1⁄2
P(H | 2nd cell) = 0
(FYI, I find this intuitive, and find SSA in this situation unintuitive.)
These agree with the actual frequencies, in terms of expected number of people in different circumstances, if you repeat this experiment. And frequencies seem very important to me, because if you’re a utilitarian that’s what you care about. If we consider torturing anyone in the first cell vs. torturing anyone in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find this very surprising).
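The frequency claim is easy to check by simulation. A minimal sketch (Python, with `random.random` standing in for the fair coin) repeats the incubator experiment many times and counts person-instances:

```python
import random

random.seed(0)
trials = 200_000

cell1_people = 0   # a first-cell person exists on every run
cell2_people = 0   # a second-cell person exists only on tails
cell1_heads = 0    # first-cell people who live in a heads world

for _ in range(trials):
    heads = random.random() < 0.5
    cell1_people += 1
    if heads:
        cell1_heads += 1
    else:
        cell2_people += 1

total_people = cell1_people + cell2_people
print(cell1_people / total_people)  # ~2/3 of all people are in the 1st cell
print(cell2_people / total_people)  # ~1/3 are in the 2nd cell
print(cell1_heads / cell1_people)   # ~1/2 of 1st-cell people saw heads
print(cell2_people / trials)        # ~0.5 expected 2nd-cell occupants per run
```

The first three outputs match the SIA list above, and the last line is the expected-occupancy fact behind the torture comparison: the first cell is always occupied, the second only half the time.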
So your probabilities aren’t grounded in frequency and utility. Is there something else they’re grounded in that you care about? Or do you choose them only because they feel intuitive?
> These agree with the actual frequencies, in terms of expected number of people in different circumstances, if you repeat this experiment. And frequencies seem very important to me, because if you’re a utilitarian that’s what you care about.
In a previous thread on Sleeping Beauty, I showed that if there are multiple experiments, SSA will assign intermediate probabilities, closer to the SIA probabilities. And if you run an infinite number of experiments, they converge to the SIA probabilities. So you will partially get this benefit in any case; but apart from this, there is nothing to prevent a person from taking the whole situation into account when deciding whether to make a bet.
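The convergence claim can be illustrated by brute-force enumeration (my own formalization, not taken from the earlier thread). With n independent incubator experiments, a world is a string of n coin results; SSA picks a world, then picks uniformly among everyone in it. The sketch below computes P(experiment 1's coin = H | I am its first-cell person) and shows it falling from the SSA value of 2/3 toward the SIA value of 1/2 as n grows:

```python
from fractions import Fraction
from itertools import product

def p_heads_given_first_cell(n):
    """SSA probability that experiment 1's coin landed heads, given that
    I am the first-cell person of experiment 1, with n experiments total."""
    num = Fraction(0)
    den = Fraction(0)
    for world in product("HT", repeat=n):
        people = n + world.count("T")  # one guaranteed person + one per tails
        # P(world) * P(being this particular person under SSA)
        w = Fraction(1, 2**n) * Fraction(1, people)
        den += w
        if world[0] == "H":
            num += w
    return num / den

for n in (1, 2, 5, 10):
    print(n, float(p_heads_given_first_cell(n)))
```

For n = 1 this gives exactly 2/3; for n = 2 it is 10/17 (about 0.588); and the sequence keeps decreasing toward 1/2.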
> If we consider torturing anyone in the first cell vs. torturing anyone in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find this very surprising).
I agree with this, since there will always be someone in the first cell, and someone in the second cell only 50% of the time.
> So your probabilities aren’t grounded in frequency and utility. Is there something else they’re grounded in that you care about? Or do you choose them only because they feel intuitive?
I care about truth, and I care about honestly reporting my beliefs. SIA requires me to assign a probability of 1 to the hypothesis that there are an infinite number of observers. I am not in fact certain of that, so it would be a falsehood to say that I am.
Likewise, if there is nothing inclining me to believe one of two mutually exclusive alternatives, saying “these seem equally likely to me” is a matter of truth. I would be falsely reporting my beliefs if I said that I believed one more than the other. In the Sleeping Beauty experiment, or in the incubator experiment, nothing leads me to believe that the coin will land one way or the other. So I have to assign a probability of 50% to heads, and a probability of 50% to tails. Nor can I change this when I am questioned, because I have no new evidence. As I stated in my other reply, the fact that I just woke up proves nothing; I knew that was going to happen anyway, even if, e.g. in the incubator case, there is only one person, since I cannot distinguish “I exist” from “someone else exists.”
In contrast, take the incubator case, where a thousand people are generated if the coin lands tails. SIA implies that you are virtually certain a priori that the coin will land tails, or that when you wake up, you have some way to notice that it is you rather than someone else. Both things are false—you have no way of knowing that the coin will land tails or is in any way more likely to land tails, nor do you have a way to distinguish your existence from the existence of someone else.
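The “virtual certainty” here is easy to make precise. SIA weighs each world by the number of observers it contains, so with 1000 people on tails and 1 on heads the arithmetic is:

```python
from fractions import Fraction

# SIA: weight each world by the number of observers it contains.
p_heads_world = Fraction(1, 2) * 1      # heads world has 1 observer
p_tails_world = Fraction(1, 2) * 1000   # tails world has 1000 observers

p_tails_sia = p_tails_world / (p_heads_world + p_tails_world)
print(p_tails_sia)   # 1000/1001, about 0.999
```

So SIA tells the person, before learning anything beyond their own existence, to be 1000/1001 confident that a fair coin landed (or will land) tails.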