Q1: I give higher weight to many unstated options (cheating, pranks, mistakes, this is a dream, I misread the billionaire’s statement, etc.) than to the “only I got the card” outcome. But ignoring all that, if I could somehow believe with probability 1 that the setup is 100% reliable, it has to be a billion to one against it being heads.
Future answers will also ignore all the environmental and adversarial cases, and assume that my utility is linear with money, which is also very wrong. No way would I make any of these bets, but in a universe where such a thing were verifiably true, these are the correct answers.
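For concreteness, here’s the update I’m doing in my head. This is only a sketch of my reading of the setup (a fair coin, roughly a billion candidates, on heads the card goes to one person chosen at random, on tails everyone gets one); the exact numbers aren’t stated above and don’t matter much:

```python
# Rough Bayes update for Q1 (my assumed reading of the setup; details may differ):
# fair coin; on heads the card goes to 1 person chosen at random out of N = 1e9;
# on tails everyone gets a card. I receive a card.
N = 10**9

p_heads = 0.5
p_card_given_heads = 1 / N   # I happen to be the one winner of the random draw
p_card_given_tails = 1.0     # everyone gets a card

posterior_heads = (p_card_given_heads * p_heads) / (
    p_card_given_heads * p_heads + p_card_given_tails * (1 - p_heads)
)
odds_against_heads = (1 - posterior_heads) / posterior_heads
print(posterior_heads)      # ~1e-9
print(odds_against_heads)   # ~1e9, i.e. about a billion to one against heads
```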
Q2: I got mixed up at first about who’s offering and who’s accepting the bet. It’s a fine bet to take (a 1000:1 lay on a 1B:1 proposition). It’s a bad bet to make (only a 1000:1 payout on a 1B:1 proposition). I don’t think there’s anything besides linearity of utility and counterparty risk (will you actually get paid?) that would make the betting decision diverge from the probability estimate.
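To spell out the asymmetry, here’s the expected-value arithmetic I have in mind. The unit stakes are placeholders of my own, and this ignores the utility and counterparty caveats above:

```python
# EV of the Q2 bet both ways, with my assumed numbers
# (unit stakes are placeholders; only the sign of the EV matters).
p_heads = 1e-9  # posterior odds from Q1

# Taking the bet = laying 1000:1 against heads: win 1 unit if tails,
# pay out 1000 units if heads.
ev_lay = 1 * (1 - p_heads) - 1000 * p_heads
print(ev_lay)   # ~ +1.0 per unit staked: fine bet to take

# Making the bet = backing heads at 1000:1: stake 1 unit to win 1000 on heads.
ev_back = 1000 * p_heads - 1 * (1 - p_heads)
print(ev_back)  # ~ -1.0 per unit staked: bad bet to make
```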
Q3: Same answer.
Q4: Nick B is cheating—he’s changing the setup. If you call into question whether the setup is actually as stated, then of course all bets are off—it’s now a psychology exercise, figuring out who’s motivated to lie in what ways.
Q5: I dunno—I’ve seen so many of these that they all sound alike. It’s different from Sleeping Beauty because there’s no observational uncertainty—the setup and your observation are assumed to be unimpeachable (until Q4, but I kind of disregard that because the whole thing is pointless fantasy anyway).
Was there a point or paradox or alternate option you wanted to highlight with this?
The point was to check whether this is a fair restatement of the problem, by attempting to up the stakes a bit. For example, if you believe that, quite obviously, the odds against heads are a billion to one, then the thirder position in the original problem should be equally obvious, unless I have failed at my mission.
Ah. I don’t think it quite works for me—it’s very different from Sleeping Beauty, because without the memory erasure there’s actual information in receiving the postcard—you’ve eliminated all the universes where it was heads and you did NOT win the random draw. You can update on that, unlike SB, who cannot update on being awakened.
I agree that it’s different but would phrase my objection differently regarding whether SB can update—I think it’s ambiguous whether she can update.
In this problem it’s clearly “fair” to have a bet, because no one is having their memory wiped and everyone’s epistemic state matters, so you can set the odds at rational betting odds (which, assuming away complications, can be expected to favour betting long odds on tails, because in the universe where tails occurred, a lot more people would be in the epistemic state to make such bets).
In the Sleeping Beauty problem, there’s a genuine issue as to whether the epistemic state of extra wakings that get reset “matters” beyond how one single waking matters. If someone arranges a bet with every waking of Sleeping Beauty, and the winnings or losses at each waking accrue to Sleeping Beauty’s future self, she should clearly bet as if the probability were 1⁄3. But a halfer could object that arranging twice as many bets with Sleeping Beauty in the one case rather than the other is “unfair”, and that the thirder bet only pays off because there were higher stakes in the tails case. Alternatively, the bookie could pay off using the average of the two bets in the tails case, and the thirder could object that this is unfair because there were lower stakes per waking in that case. I don’t think either is objectively wrong—it’s genuinely ambiguous to me.
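To make the ambiguity concrete, here’s a toy sketch of the two settlement conventions. The stakes and payout rules are my own illustration, not part of the standard problem:

```python
# Toy model of the two payout conventions for betting with Sleeping Beauty.
import random

def mean_profit(price_of_tails, average_tails_bets, trials=200_000):
    """At every waking, Beauty buys a ticket paying 1 if the coin was tails,
    for `price_of_tails`. If `average_tails_bets`, the bookie settles the tails
    case by averaging the two wakings' bets instead of paying both."""
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        if heads:
            total += -price_of_tails            # one waking, ticket loses
        else:
            per_waking = 1 - price_of_tails     # each of the two tickets wins
            total += per_waking if average_tails_bets else 2 * per_waking
    return total / trials

# Both tails-case bets accrue: break-even near price 2/3, i.e. thirder odds.
print(mean_profit(2/3, average_tails_bets=False))  # ~0
# Averaged payout in the tails case: break-even near price 1/2, i.e. halfer odds.
print(mean_profit(1/2, average_tails_bets=True))   # ~0
```

Both conventions are internally consistent; they just disagree about what counts as “the same bet” across wakings, which is why I don’t think the betting argument settles it.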