I’m curious: is this grounded on anything beyond your intuition in these cases?
SIA is grounded in frequency. In the Incubator situation, the SIA probabilities are:
P(1st cell) = 2⁄3
P(2nd cell) = 1⁄3
P(H | 1st cell) = 1⁄2
P(H | 2nd cell) = 0
(FYI, I find this intuitive, and find SSA in this situation unintuitive.)
These agree with the actual frequencies: the expected number of people in each circumstance if you repeat this experiment. And frequencies seem very important to me, because if you’re a utilitarian that’s what you care about. If we consider torturing whoever is in the first cell vs. torturing whoever is in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find this very surprising).
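Here is a quick simulation sketch of what I mean by "agree with the actual frequencies" (my own rough check, assuming the usual Incubator setup: heads creates one person in the first cell, tails creates a person in each cell):

```python
import random

# Minimal sketch of the frequency claim, assuming the usual Incubator setup:
# heads -> one person is created (1st cell); tails -> two people (1st and 2nd cells).
# I count person-instances rather than runs, since that is what a utilitarian cares about.
N = 1_000_000
first_cell = 0        # people who find themselves in the 1st cell
first_cell_heads = 0  # of those, how many exist because the coin landed heads
second_cell = 0       # people who find themselves in the 2nd cell

for _ in range(N):
    heads = random.random() < 0.5
    first_cell += 1          # the 1st cell is occupied either way
    if heads:
        first_cell_heads += 1
    else:
        second_cell += 1     # the 2nd cell is occupied only on tails

people = first_cell + second_cell
print("P(1st cell)     ~", first_cell / people)            # about 2/3
print("P(2nd cell)     ~", second_cell / people)           # about 1/3
print("P(H | 1st cell) ~", first_cell_heads / first_cell)  # about 1/2; it is 0 for the 2nd cell
print("people tortured per run, 1st vs 2nd cell:",
      first_cell / N, "vs", second_cell / N)               # about 1.0 vs 0.5
```

The person-frequencies match the numbers above, and the last line is the "twice as bad in expectation" claim.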
So your probabilities aren’t grounded in frequency and utility. Is there something else they’re grounded in that you care about? Or do you choose them only because they feel intuitive?
These agree with the actual frequencies: the expected number of people in each circumstance if you repeat this experiment. And frequencies seem very important to me, because if you’re a utilitarian that’s what you care about.
In a previous thread on Sleeping Beauty, I showed that if there are multiple experiments, SSA will assign intermediate probabilities, closer to the SIA probabilities. And if you run an infinite number of experiments, the SSA probabilities converge to the SIA probabilities. So you will partially get this benefit in any case; but apart from this, there is nothing to prevent a person from taking the whole situation into account when deciding whether to make a bet.
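For concreteness, here is a sketch of that convergence, under the assumption (mine, for the incubator version) that SSA's reference class is every observer produced across n independent runs, each with its own fair coin:

```python
from math import comb

def ssa_prob_heads(n):
    """P(the coin in my own run landed heads), for a randomly sampled observer,
    when n independent incubator runs are pooled into one reference class."""
    total = 0.0
    for t in range(n + 1):                  # t = number of runs whose coin landed tails
        p_world = comb(n, t) * 0.5 ** n     # probability of exactly t tails among n fair coins
        heads_observers = n - t             # one observer per heads-run
        all_observers = n + t               # (n - t) * 1 + t * 2
        total += p_world * heads_observers / all_observers
    return total

for n in (1, 2, 10, 100, 1000):
    print(n, round(ssa_prob_heads(n), 4))
# n = 1 gives 0.5, the single-run SSA answer; as n grows the value
# falls toward 1/3, which is what SIA says for a single run.
```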
If we consider torturing whoever is in the first cell vs. torturing whoever is in the second cell, the former is twice as bad in expectation (please tell me if you disagree, because I would find this very surprising).
I agree with this, since there will always be someone in the first cell, and someone in the second cell only 50% of the time.
So your probabilities aren’t grounded in frequency and utility. Is there something else they’re grounded in that you care about? Or do you choose them only because they feel intuitive?
I care about truth, and I care about honestly reporting my beliefs. SIA requires me to assign a probability of 1 to the hypothesis that there are an infinite number of observers. I am not in fact certain of that, so it would be a falsehood to say that I am.
Likewise, if there is nothing inclining me to believe one of two mutually exclusive alternatives, saying “these seem equally likely to me” is a matter of truth. I would be falsely reporting my beliefs if I said that I believed one more than the other. In the Sleeping Beauty experiment, or in the incubator experiment, nothing leads me to believe that the coin will land one way or the other. So I have to assign a probability of 50% to heads, and a probability of 50% to tails. Nor can I change this when I am questioned, because I have no new evidence. As I stated in my other reply, the fact that I just woke up proves nothing; I knew that was going to happen anyway, even if, e.g. in the incubator case, there is only one person, since I cannot distinguish “I exist” from “someone else exists.”
In contrast, take the incubator case, where a thousand people are generated if the coin lands tails. SIA implies that you are virtually certain a priori that the coin will land tails, or that when you wake up, you have some way to notice that it is you rather than someone else. Both things are false—you have no way of knowing that the coin will land tails or is in any way more likely to land tails, nor do you have a way to distinguish your existence from the existence of someone else.
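To put a number on "virtually certain": on the standard SIA counting, which weights each possibility by how many observers it contains, P(tails | I exist) = (1/2 × 1000) / (1/2 × 1 + 1/2 × 1000) = 1000/1001, roughly 99.9%, before any evidence at all.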