He proposes that the coin toss could happen after the first awakening. Beauty’s answer ought to remain the same regardless of the timing of the toss. A simple calculation tells us his credence in H must be 1⁄3. As SSA dictates, this is also Beauty’s answer. Now Beauty is predicting that a fair coin toss yet to happen will most likely land on T. This supernatural predictive power is conclusive evidence against SSA.
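The "simple calculation" can be sketched as a Monte Carlo simulation (my own illustration, not from the original post): toss a fair coin each run, give Beauty one awakening on Heads and two on Tails, and count what fraction of all awakenings belong to Heads runs.

```python
import random

def awakening_heads_fraction(runs=100_000, seed=0):
    """Fraction of awakenings that occur in Heads runs.

    Heads -> one awakening (Monday); Tails -> two (Monday and Tuesday).
    An observer who samples awakenings uniformly finds Heads on ~1/3 of them.
    """
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(runs):
        heads = rng.random() < 0.5
        n = 1 if heads else 2          # awakenings in this run
        total_awakenings += n
        heads_awakenings += n if heads else 0
    return heads_awakenings / total_awakenings

print(awakening_heads_fraction())  # ~0.333
```

Half the runs are Heads but Tails runs contribute twice as many awakenings, so the per-awakening frequency of Heads converges to 1⁄3, matching the credence described above.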
So how do you get Beauty’s prediction? If at the end of the first day you ask for a prediction on the coin, but you don’t ask on the second day, then now Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50⁄50. She only deviates from 50⁄50 when she thinks there’s some chance that the coin flip has already happened.
Sometimes people absolutely will come to different conclusions. And I think you’re part of the way there with the idea of letting people talk to see if they converge. But I think you’ll get the right answer even more often if you set up specific thought-experiment processes, had the imaginary people in those thought experiments bet against each other, and then said that the person (or group of people all with identical information) who made money on average (where “average” means over many re-runs of this specific thought experiment) had good probabilities, and the people who lost money had bad probabilities.
I don’t think this is what probabilities mean, or that it’s the most elegant way to find probabilities, but I think it’s a pretty solid and non-confusing way. And there’s a quite nice discussion article about it somewhere on this site that I can’t find, sadly.
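The betting test described above can be operationalized with a short sketch (the stakes and midpoint pricing are my own choices, not from the comment or the missing article): two bettors assign different probabilities to a repeated event and trade a $1 contract at the midpoint of their probabilities; over many re-runs, the bettor whose probability matches the true frequency makes money on average.

```python
import random

def bet_profits(p_true=0.5, p_a=0.5, p_b=1/3, runs=100_000, seed=0):
    """Average per-bet profit for bettors A and B (zero-sum).

    A thinks the event has probability p_a, B thinks p_b; since p_a > p_b,
    A buys a $1 contract on the event from B at the midpoint price.
    """
    rng = random.Random(seed)
    price = (p_a + p_b) / 2
    profit_a = 0.0
    for _ in range(runs):
        happened = rng.random() < p_true
        profit_a += (1.0 if happened else 0.0) - price
    return profit_a / runs, -profit_a / runs

avg_a, avg_b = bet_profits()
print(avg_a > 0 and avg_b < 0)   # the accurate bettor profits
```

With a fair coin (true frequency 1⁄2), the bettor holding 1⁄2 gains roughly 0.08 per bet against the bettor holding 1⁄3, illustrating the "made money on average" criterion.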
Thank you for the reply. I really appreciate it since it reminds me that I have made a mistake in my argument. I didn’t say that SSA means reasoning as if an observer is randomly selected from all actually existent observers (past, present, and *future*).
I think Elga’s argument is that Beauty’s credence should not depend on the exact time of the coin toss. That seems reasonable to me, since the experiment can be carried out in exactly the same way whether the coin is tossed on Sunday or Monday night. According to SSA, Beauty should update her credence in H to 2⁄3 after learning it is Monday. If you think Beauty should give 1⁄2 when she finds out the coin is tossed on Monday night, then her answer would depend on the time of the coin toss, which seems to me a rather weak position.
Regarding a betting odds argument: I have given a frequentist model in part I which uses betting odds as part of the argument. In essence, Beauty’s break-even odds are at 1⁄2 while the selector’s are at 1⁄3, which agrees with their respective credences.
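The break-even claim can be checked numerically. This is my own reconstruction of the frequentist betting model, not the author's exact part-I setup: a bet on Heads at probability p pays 1/p − 1 per unit staked. Betting once per experiment (Beauty's situation) breaks even at p = 1⁄2; betting at every awakening (the selector's situation) breaks even at p = 1⁄3.

```python
import random

def avg_profit(p, per_awakening, runs=200_000, seed=0):
    """Average profit per bet on Heads at assumed probability p.

    per_awakening=False: one bet per experiment (per coin toss).
    per_awakening=True: one bet at each awakening (Heads -> 1, Tails -> 2).
    A winning bet pays 1/p - 1; a losing bet costs the unit stake.
    """
    rng = random.Random(seed)
    profit, bets = 0.0, 0
    for _ in range(runs):
        heads = rng.random() < 0.5
        n = (1 if heads else 2) if per_awakening else 1
        bets += n
        profit += n * ((1 / p - 1) if heads else -1.0)
    return profit / bets

print(round(avg_profit(0.5, per_awakening=False), 2))  # ~0.0
print(round(avg_profit(1/3, per_awakening=True), 2))   # ~0.0
```

Per toss, Heads and Tails are equally frequent, so even odds break even; per awakening, every Tails run doubles the losing stakes, pushing the break-even probability down to 1⁄3.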
According to SSA beauty should update credence of H to 2⁄3 after learning it is Monday.
I always forget what the acronyms are. But the probability of H is 1⁄2 after learning it’s Monday, and any method that says otherwise is wrong, exactly by the argument that you can flip the coin on Monday right in front of SB, and if she knows it’s Monday and thinks it’s not a 50⁄50 flip, her probability assignment is bad.
Yes, that’s why I think to this day Elga’s counter argument is still the best.
I don’t see any argument there.
To spell it out:
Beauty knows that the limiting frequency (which, when known, is equal to the probability) of the coin flips that she sees right in front of her will be equal to one-half. That is, if you repeat the experiment many times (plus a little noise to determine coin flips), then you get equal numbers of the event “Beauty sees a fair coin flip and it lands Heads” and the event “Beauty sees a fair coin flip and it lands Tails.” Therefore Beauty assigns 50⁄50 odds to any coin flips she actually gets to see.
You can make an analogous argument from symmetry of information rather than limiting frequency, but it’s less accessible and I don’t expect people to think of it on their own. Basically, the only reason to assign thirder probabilities is if you’re treating states of the world given your information as the basic mutually-exclusive-and-exhaustive building block of probability assignment. And the states look like Mon+Heads, Mon+Tails, and Tues+Tails. If you eliminate one of the possibilities, then the remaining two are symmetrical.
If it seems paradoxical that, upon waking up, she thinks the Monday coin is more likely to have landed tails, just remember that half of the time that coin landed tails, it’s Tuesday and she never gets to see the Monday coin being flipped—as soon as she actually expects to see it flipped, that’s a new piece of information that causes her to update her probabilities.
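The limiting-frequency argument above can be checked with a short simulation (my own sketch, using the variant where the coin is flipped Monday night in front of Beauty): among the awakenings at which she actually watches the flip, Heads and Tails occur equally often, even though only about a third of all her awakenings belong to Heads runs.

```python
import random

def frequencies(runs=100_000, seed=0):
    """Return (Heads fraction among flips Beauty sees, among all awakenings).

    The coin is flipped Monday night in front of Beauty, so she watches it
    only on Monday awakenings; on a Tuesday awakening (Tails only) the flip
    already happened out of her sight.
    """
    rng = random.Random(seed)
    seen_heads = seen = all_heads = total = 0
    for _ in range(runs):
        heads = rng.random() < 0.5
        awakenings = [("Mon", True)] + ([] if heads else [("Tue", False)])
        for _day, sees_flip in awakenings:
            total += 1
            all_heads += heads
            if sees_flip:
                seen += 1
                seen_heads += heads
    return seen_heads / seen, all_heads / total

seen_frac, overall_frac = frequencies()
print(round(seen_frac, 2), round(overall_frac, 2))  # ~0.5 vs ~0.33
```

The flips she watches land 50⁄50, while the per-awakening Heads frequency is 1⁄3, which is exactly the resolution of the apparent paradox in the comment above: expecting to see the flip is itself information.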