There is a problem, but it is not quite an inconsistency. The problem is the assumption that Omega is a perfect predictor. That is, for the system of Omega + Agent + everything else, Omega can always find a fixed point: a prediction that remains correct about the state of the system some time after the prediction is made and the system has evolved in response to it. Even in a completely deterministic universe, this is asking too much. Some systems just don’t have fixed points.
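As a toy illustration of the no-fixed-point case (my own sketch, assuming the prediction can influence the agent at all, say through the box contents in a transparent-box variant):

```
# A hypothetical "contrarian" agent: it always does the opposite of whatever
# it is predicted to do, so no prediction Omega makes can be a fixed point.

def contrarian_agent(prediction):
    # Invert the prediction: if predicted to one-box, two-box, and vice versa.
    return "two-box" if prediction == "one-box" else "one-box"

for prediction in ("one-box", "two-box"):
    action = contrarian_agent(prediction)
    status = "fixed point" if action == prediction else "no fixed point"
    print(prediction, "->", action, "|", status)
# one-box -> two-box | no fixed point
# two-box -> one-box | no fixed point
```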
The problem becomes much more reasonable when you consider an almost perfect predictor. It ruins the purity of the original question, but the space of possibilities becomes much broader.
It becomes more like a bluffing game. Omega can observe tells with superhuman acuity; you don’t know what those tells are, and you certainly don’t know how to mask them. If your psychology were such that you’d take both boxes, you were already showing signs of it before you went into the room. Note that this is not supernatural, retro-causal, or riddled with anthropic paradoxes; it is something people already do. Omega is just very much better at it than other humans are.
In this viewpoint, the obvious solution is to be so good at hiding your tells that Omega thinks you’re a one-boxer while you are actually a two-boxer. But you don’t know how to do that, nobody else who tried has succeeded, and you would have to believe you had a 99.9% chance of success for the bluff to be worth more than transparently taking the one box.
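A quick check of that 99.9% figure (my own arithmetic, on the assumption that a transparent one-boxer gets the million with certainty and a detected bluffer gets only the $1000):

```
# Expected payoff of attempting the bluff, as a function of the probability p
# that it succeeds.  Assumed payoffs: transparent one-boxing pays $1,000,000;
# a successful bluff pays $1,001,000; a detected bluff pays $1,000.
ONE_BOX = 1_000_000
BLUFF_WIN = 1_001_000
BLUFF_LOSE = 1_000

def bluff_value(p):
    """Expected payoff of bluffing with success probability p."""
    return p * BLUFF_WIN + (1 - p) * BLUFF_LOSE

break_even = (ONE_BOX - BLUFF_LOSE) / (BLUFF_WIN - BLUFF_LOSE)
print(break_even)                                                  # 0.999
print(bluff_value(0.99) < ONE_BOX, bluff_value(0.9999) > ONE_BOX)  # True True
```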
Indeed starting with an imperfect predictor helps. Classic CDT implicitly assumes that you are not a typical subject, but one of those who can go toe-to-toe with Omega. In the limit of 100% accuracy the space of such subjects is empty, but CDT insists on acting as if you are one anyway.
I don’t think Omega being a perfect predictor is essential to the paradox. Assume you are playing this game with me, and say my predictions are only 51% correct. I fill an envelope according to the prescribed rule, read you, and then give you the envelope (box B). After you put it in your pocket, I put 1000 dollars on the table. Do you suggest that not taking the 1000 dollars will make you richer? If you think you should take the 1000 in this case, then how accurate would I need to be for you to give that up? (Somewhere between 51% and 99.9%, I presume.) I do not see a good reason for such a cutoff.
I think the underlying rationale for one-boxing is to deny first-person decision-making in that particular situation, e.g. not conducting the causal analysis when facing the 1000 dollars. That is your strategy: commit to taking one box only, let Omega read you, and stick to that decision.
“After you put it in your pocket, I put 1000 dollars on the table. Do you suggest that not taking the 1000 dollars will make you richer?”
Unlike the Omega problem, this is far too underspecified to admit a sensible answer. It depends on the details of how you get your 51% success rate.
Do you always predict that people will take two boxes, and only 51% of them actually do? Then obviously I will have $1000 instead of $0, and will always be richer for taking the money.
Or maybe you just get these feelings about people sometimes. In later, carefully controlled tests it turns out that you get this feeling for about 2% of people, the “easily read” ones; you’re right about 90% of the time in both directions for them, and the feeling isn’t correlated with how certain they themselves are about taking the money. This is more definite than any real-world situation will ever be, but it illustrates the principle. In this scenario, if I’m in the 98% then your “prediction” is uncorrelated with my eventual intent, and I will be $1000 richer if I take the money.
Otherwise, I’m in the 2%. If I choose to take the money, there’s a 90% chance that this showed up in some outward sign before you gave me the envelope, and I get $1000; there’s a 10% chance that it didn’t, and I get $1001000, for an expected payout of $101000. Note that these are expected payouts because I don’t know what is in the envelope. If I choose not to take the money, the same calculation gives an expected payout of $900000.
Since I don’t know whether I’m easily read or not, taking the money stakes a 98% chance of a $1000 gain against a 2% chance of a $799000 expected loss. This is a bad idea, and on balance it loses me money.
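Spelled out, with the numbers as given above (2% of people easily read, 90% accuracy on that group, the prediction uninformative for everyone else):

```
# Expected payouts in the hypothetical 2% / 90% scenario described above.
P_READ = 0.02       # chance that I am one of the "easily read" people
ACC = 0.90          # predictor accuracy on that group, in both directions
MILLION, TABLE = 1_000_000, 1_000

# Easily read: the envelope tracks my actual choice with 90% accuracy.
take_read = ACC * TABLE + (1 - ACC) * (MILLION + TABLE)    # 101,000
leave_read = ACC * MILLION + (1 - ACC) * 0                 # 900,000

# Not easily read: the envelope is uncorrelated with my choice,
# so taking the money is a pure +$1000 whatever the envelope holds.
expected_gain_from_taking = P_READ * (take_read - leave_read) + (1 - P_READ) * TABLE
print(take_read, leave_read, expected_gain_from_taking)    # 101000.0 900000.0 -15000.0
```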
Well, in my defense, you didn’t specify how Omega gets to be 99.9% accurate either. But that does not matter. Let me change the question to fit your framework.
I get this feeling for some “easily read” people. I am right about 51% of the time in both directions for them, and it isn’t correlated with how certain they themselves are about taking the money. Now suppose you are one of the “easily read” people and you know it. After putting the envelope in your pocket, would you still take the 1000 dollars on the table? Would rejecting it make you richer?
No, I wouldn’t take the money on the table in this case.
I’m easily read, so I have already given off signs of what my decision will turn out to be. You’re not very good at picking them up, but good enough that if people in my position take the money, there’s a 49% chance that the envelope contains a million dollars; if they don’t, there’s a 51% chance that it does.
I’m not going to take $1000 if it is associated with a 2% reduction in the chance that the envelope holds $1000000. On average, that would make me poorer. In the strict local causal sense I would be richer taking the money, but that reasoning is subject to Simpson’s paradox: the action Take appears better than Leave in both cases, Million and None in the envelope, yet is worse when the cases are combined, because the weights are not independent of the action. Even a very weak correlation is enough, because the pay-offs are so disparate.
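To make the Simpson’s-paradox structure concrete, here is the 51% scenario tabulated (my own arithmetic under the stated assumptions):

```
# The 51% "easily read" scenario: conditional on what is in the envelope,
# taking the $1000 is always better, yet the unconditional expectation
# favours leaving it, because the envelope contents are weakly correlated
# with the choice.
MILLION, TABLE = 1_000_000, 1_000

# Conditional on the envelope contents, Take beats Leave by $1000 either way:
print(MILLION + TABLE > MILLION, 0 + TABLE > 0)            # True True

# Unconditionally, the two actions put different weights on the two cases:
take = 0.49 * (MILLION + TABLE) + 0.51 * (0 + TABLE)       # 491,000
leave = 0.51 * MILLION + 0.49 * 0                          # 510,000
print(take, leave)                                         # 491000.0 510000.0
```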
I guess that is our disagreement. I would say that not taking the money requires some serious modification of causal analysis (e.g. something retro-causal). You think it doesn’t: the apparent conflict is fully accounted for by the Simpson’s-paradox structure.