I don’t think Omega being a perfect predictor is essential to the paradox. Assume you are playing this game with me, and my predictions are only 51% accurate. I fill an envelope according to the prescribed rule, read you, and then give you the envelope (box B). After you put it in your pocket, I put $1000 on the table. Do you suggest that not taking the $1000 will make you richer? If you think you should take the $1000 in this case, how accurate would I need to be before you would give it up? (Somewhere between 51% and 99.9%, I presume.) I do not see a good reason for such a cutoff.
I think the underlying rationale for one-boxing is to deny first-person decision-making in that particular situation, i.e. not conducting the causal analysis when facing the $1000 on the table. That is your strategy: commit to taking only one box, let Omega read you, and stick to that decision.
“After you put it in your pocket, I put $1000 on the table. Do you suggest that not taking the $1000 will make you richer?”
Unlike the Omega problem, this is far too underspecified to admit a sensible answer. It depends on the details of how you achieve your 51% success rate.
Do you always predict that people will take both boxes, and only 51% of them actually do? Then obviously I will have $1000 instead of $0, and taking the money always leaves me richer.
Or maybe you just get these feelings about people sometimes. In later, carefully controlled tests it turns out that you get this feeling for about 2% of people, the “easily read” ones; for them you’re right about 90% of the time in both directions, and your accuracy isn’t correlated with how certain they themselves are about taking the money. This is more definite than any real-world situation will ever be, but it illustrates the principle. In this scenario, if I’m in the 98%, then your “prediction” is uncorrelated with my eventual intent, and I will be $1000 richer if I take the money.
Otherwise, I’m in the 2%. If I choose to take the money, there’s a 90% chance that intention showed up in some outward sign before you gave me the envelope, and I get $1000; there’s a 10% chance that it didn’t, and I get $1001000, for an expected payout of $101000. Note that this is an expected payout because I don’t know what is in the envelope. If I choose not to take the money, the same calculation gives an expected payout of $900000.
Since I don’t know whether I’m easily read or not, I’m staking a 98% chance of a $1000 gain against a 2% chance of an expected $799000 loss. This is a bad idea, and on balance it loses me money.
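To make that arithmetic explicit, here is a minimal Python sketch of the model I’m assuming (2% of people easily read, 90% accuracy for that group; the variable names are just my shorthand):

```python
P_READ = 0.02      # fraction of people who are "easily read"
P_RIGHT = 0.90     # prediction accuracy, given that I'm easily read
P_WRONG = 0.10
ENVELOPE = 1_000_000
TABLE = 1_000

# Expected payouts conditional on being easily read
ev_take_read = P_RIGHT * TABLE + P_WRONG * (ENVELOPE + TABLE)   # $101,000
ev_leave_read = P_RIGHT * ENVELOPE + P_WRONG * 0                # $900,000

# If I'm not easily read, the envelope's contents are independent of my
# choice, so taking the money is worth exactly +$1000 to me.
gain_from_taking = (1 - P_READ) * TABLE + P_READ * (ev_take_read - ev_leave_read)

print(ev_take_read, ev_leave_read, gain_from_taking)
# 101000.0 900000.0 -15000.0
```

On these made-up numbers, taking the money costs me $15000 in expectation, which is all the calculation above is meant to show.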
Well, in my defense, you didn’t specify how Omega gets to be 99.9% accurate either. But that does not matter. Let me change the question to fit your framework.
I get this feeling for some “easily read” people. I am about 51% right in both directions for them, and it isn’t correlated with how certain they themselves are about taking the money. Now, suppose you are one of the “easily read” people and you know it. After putting the envelope in your pocket, would you still take the $1000 on the table? Would rejecting it make you richer?
No, I wouldn’t take the money on the table in this case.
I’m easily read, so I have already given off signs of what my decision will turn out to be. You’re not very good at picking them up, but good enough that if people in my position take the money, there’s a 49% chance that the envelope contains a million dollars; if they don’t, there’s a 51% chance that it does.
I’m not going to take $1000 if it is associated with a 2% reduction in the chance of having $1000000 in the envelope. On average, that would make me poorer. In the strict local causal sense I would be richer taking the money, but that reasoning is subject to Simpson’s paradox: the action Take appears better than Leave in both cases, Million and None in the envelope, yet it is worse when the cases are combined, because the weights are not independent of the action. Even a very weak correlation is enough, because the pay-offs are so disparate.
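Spelled out numerically (a rough sketch using the 49%/51% conditionals above; the names are just my shorthand):

```python
ENVELOPE = 1_000_000
TABLE = 1_000
P_MILLION_IF_TAKE = 0.49    # chance the envelope is full, given that I take the $1000
P_MILLION_IF_LEAVE = 0.51   # chance the envelope is full, given that I leave it

# Within each envelope state, Take beats Leave by exactly $1000:
#   envelope full:   $1,001,000 vs $1,000,000
#   envelope empty:  $1,000     vs $0
# But the two actions do not face the same mix of states:
ev_take = P_MILLION_IF_TAKE * ENVELOPE + TABLE    # $491,000
ev_leave = P_MILLION_IF_LEAVE * ENVELOPE          # $510,000

print(ev_take, ev_leave)
# 491000.0 510000.0
```

That is the Simpson’s-paradox structure I mean: Take dominates within each case, yet Leave comes out ahead once the cases are weighted by the choice-dependent probabilities.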
I guess that is our disagreement. I would say not taking the money requires some serious modification to causal analysis (e.g. retro-causality). You think there doesn’t need to be one; it is perfectly resolved by Simpson’s paradox.