You are betting a possible extra payout of $1,000 against a net loss of $999,000 that there are no Black Swans[1] at all in this situation.
Given that you already have 100 data points of evidence that taking Box A makes Box B empty (on top of the evidence that Omega is more intelligent than you), I’d say that’s a Bad Bet to make.
Given the amount of uncertainty in the world, choosing Box B instead of trying to “beat the system” seems like the rational step to me.
Edit: I’ve given the math in a comment below to show how to calculate when to make either decision.
[1] i.e., something you didn’t think of that makes Box B empty even after Omega’s gone away: an invisible portkey in Box B that is activated the moment you pick up Box A, a time machine that let Omega jump forward to see your decision before putting the money into the boxes, or a device using some hand-wavey quantum state that lets either Box A be taken or Box B’s contents exist, but not both…
So, working the math on that:
Let P(BS) = probability of a Black Swan being involved
This makes the average payout work out to:
1-Box = $1,000,000
2-Box = $1,001,000 × (1 − P(BS)) + $1,000 × P(BS)
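(For concreteness, here’s a minimal Python sketch of that expected-payout calculation, using the payouts from the setup above; the function names and the example P(BS) value are mine:)

```python
def one_box_payout() -> int:
    # One-boxing: Omega predicted it, so Box B holds the $1,000,000.
    return 1_000_000

def two_box_payout(p_bs: float) -> float:
    # Two-boxing: with probability (1 - p_bs) no Black Swan interferes
    # and you keep both boxes' $1,001,000; with probability p_bs a
    # Black Swan empties Box B, leaving only Box A's $1,000.
    return 1_001_000 * (1 - p_bs) + 1_000 * p_bs

print(one_box_payout())       # 1000000
print(two_box_payout(0.002))  # ~999000: already below one-boxing
```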
Now, it seems to me that the average 2-boxer is assuming that P(BS) = 0, which would make the 2-Box payout always equal $1,001,000, which would, of course, always beat the 1-Box solution.
And maybe in this toy problem they’re right to assume P(BS) = 0. But IRL that’s almost never the case; after all, 0 is not a probability, yes?
So assume that P(BS) is non-zero. At what point would it be worth it to choose the 1-Box solution, and at what point the 2-Box solution? Let’s run the math:
1,000,000 = 1,001,000(1 − x) + 1,000x = 1,001,000 − 1,001,000x + 1,000x = 1,001,000 − 1,000,000x
=> 1,000,000 − 1,001,000 = −1,000,000x
=> x = −1,000 / −1,000,000
=> x = 0.001
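(And a quick sanity check of that algebra in Python, solving for the break-even point directly; again just a sketch, with variable names of my own choosing:)

```python
one_box = 1_000_000        # guaranteed 1-box payout
two_box_clean = 1_001_000  # 2-box payout with no Black Swan
two_box_bs = 1_000         # 2-box payout if a Black Swan empties Box B

# Solving one_box = two_box_clean*(1 - x) + two_box_bs*x for x:
x = (two_box_clean - one_box) / (two_box_clean - two_box_bs)
print(x)  # 0.001
```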
So the estimated probability of a Black Swan being involved only has to be greater than 0.1% for the 1-Box solution to have the greater expected payout, and therefore the 1-Box option is the more rational::Bayesian choice.
OTOH, if you can guarantee that P(BS) is less than 0.1%, then the rational choice is to 2-Box.
Edit: Never mind, my comment resulted from a confusion.
http://wiki.lesswrong.com/wiki/Least_convenient_possible_world
I’m not sure what you are implying with this link—can you please expand? Are you saying that I’m choosing a least convenient possible world (and if so, how and what) or that 2-boxers are doing so?
Sorry, your comment was confusing and I didn’t properly concentrate on what you meant; giving the LCPW link was a mistake, as it doesn’t seem to apply.
No problem. I’ve expanded with the math explaining what I mean; hopefully that makes what I was aiming at less confusing.
You are finding technical flaws that are not essential to the intended sense of the thought experiment. Instead of making it uninteresting because of the potential flaws, make the thought experiment stronger by considering the case where these flaws are fixed.