The point was to check whether this is a fair restatement of the problem, by attempting to up the stakes a bit. For example, if you believe that, quite obviously, the odds against heads are a billion to one, then the thirder position in the original problem should be equally obvious, unless I have failed at my mission.
Ah. I don’t think it quite works for me—it’s very different from Sleeping Beauty, because without the memory erasure there’s actual information in receiving the postcard: it eliminates all the universes where the coin came up heads and you did NOT win the random draw. You can update on that, unlike SB, who cannot update on being awakened.
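To make that update explicit (on my reading of the setup, which is an assumption on my part: a billion candidates, exactly one postcard sent on heads, all of them sent on tails), Bayes gives

$$\frac{P(\text{heads}\mid\text{postcard})}{P(\text{tails}\mid\text{postcard})}=\frac{P(\text{postcard}\mid\text{heads})}{P(\text{postcard}\mid\text{tails})}\cdot\frac{P(\text{heads})}{P(\text{tails})}=\frac{10^{-9}}{1}\cdot\frac{1/2}{1/2}=10^{-9},$$

i.e. exactly the billion-to-one posterior against heads.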
I agree that it’s different, but I would phrase my objection differently regarding whether SB can update: I think it’s ambiguous whether she can.
In this problem it’s clearly “fair” to have a bet, because no one is having their memory wiped and everyone’s epistemic state matters, so you can set the odds at rational betting odds (which, assuming away complications, can be expected to favour betting at long odds on tails, because in the universe where tails occurred far more people would be in the epistemic state to make such bets).
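A scaled-down sketch of that population count (the structure here, N candidates instead of a billion with one postcard on heads and all N on tails, is my assumption about the variant, not the original statement):

```python
import random

# Scaled-down sketch of the postcard variant (my reading of it):
# N candidates instead of a billion, for tractability.
# On heads, one candidate is picked at random and sent a postcard;
# on tails, every candidate gets one.
N = 1_000
TRIALS = 100_000

recipients_heads = 0   # postcard recipients living in a heads-world
recipients_tails = 0   # postcard recipients living in a tails-world

for _ in range(TRIALS):
    if random.random() < 0.5:      # heads
        recipients_heads += 1      # exactly one postcard goes out
    else:                          # tails
        recipients_tails += N      # all N candidates get postcards

total = recipients_heads + recipients_tails
print(f"fraction of recipients in tails-worlds: {recipients_tails / total:.6f}")
# -> about N / (N + 1), i.e. ~0.999 here and ~0.999999999 with a billion,
# so a recipient's rational betting odds are roughly N-to-1 on tails.
```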
In the Sleeping Beauty problem, there’s a genuine issue as to whether the epistemic state of extra wakings that get reset “matters” beyond how one single waking matters. If someone arranges a bet at every waking of Sleeping Beauty, and the winnings or losses at each waking accrue to her future self, she should clearly bet as if the probability were 1/3; but a halfer could object that arranging twice as many bets with her in the one case rather than the other is “unfair”, and that the thirder bet only pays off because the stakes were higher in the tails case. Alternatively, the bookie could pay off using the average of the two bets in the tails case, and the thirder could object that this is unfair because the stakes per waking were lower in that case. I don’t think either is objectively wrong—it’s genuinely ambiguous to me.
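A quick Monte Carlo rendering of those two settlement schemes (the ticket framing, prices, and names below are mine, chosen to make the two break-even points visible):

```python
import random

# Each waking, Beauty buys a ticket for `price` that pays $1 if the coin
# was tails. Heads -> one waking; tails -> two wakings.
TRIALS = 200_000

def expected_profit(price, average_in_tails):
    total = 0.0
    for _ in range(TRIALS):
        if random.random() < 0.5:      # heads: one waking, ticket loses
            total -= price
        elif average_in_tails:         # tails, halfer-friendly settlement:
            total += 1 - price         # average the two bets (one net ticket)
        else:                          # tails, thirder-friendly settlement:
            total += 2 * (1 - price)   # settle each waking's ticket in full
    return total / TRIALS

# Per-waking settlement breaks even near price 2/3 (thirder odds)...
print(expected_profit(2 / 3, average_in_tails=False))  # ~0
# ...while averaging in the tails case breaks even near price 1/2 (halfer odds).
print(expected_profit(1 / 2, average_in_tails=True))   # ~0
```

Both schemes are internally consistent; they just disagree about how much each waking’s bet counts, which is exactly the ambiguity at issue.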