Actually, it would be interesting to break down the list of reasons people might have for two-boxing, even though we've only polled for decisions, not for reasons. From https://en.wikipedia.org/wiki/Newcomb%27s_paradox, the outcomes are:
a: Omega predicts two-box, player two-boxes, payout $1,000
b: Omega predicts two-box, player one-boxes, payout $0
c: Omega predicts one-box, player two-boxes, payout $1,001,000
d: Omega predicts one-box, player one-boxes, payout $1,000,000
I claim that one-boxers do not believe b and c are possible, because Omega is either cheating or a perfect predictor (which amounts to the same thing), and reason that d > a. Further, I think that two-boxers believe all 4 are possible (b and c amounting to “tricking Omega”) and reason that c > d and a > b, so two-boxing dominates one-boxing.
Aside from “lizard man”, what are the other reasons that lead to two-boxing?
Note that Omega isn’t necessarily a perfect predictor. Most one-boxers would also one-box if Omega is a near-perfect predictor.
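To make the “near-perfect” point concrete, here’s a minimal expected-value sketch (my own illustration, not from any poll or from Joyce), using the payouts a–d above, with p standing for the probability that Omega’s prediction matches your actual choice:

```python
# Expected payout of each strategy against a predictor with accuracy p,
# i.e. the probability that Omega's prediction matches the actual choice.
# Dollar amounts correspond to outcomes a-d in the list above.

def expected_payout(strategy: str, p: float) -> float:
    if strategy == "one-box":
        # With probability p Omega predicted one-box (outcome d, $1,000,000);
        # otherwise it predicted two-box (outcome b, $0).
        return p * 1_000_000
    # With probability p Omega predicted two-box (outcome a, $1,000);
    # otherwise it predicted one-box (outcome c, $1,001,000).
    return p * 1_000 + (1 - p) * 1_001_000

for p in (1.0, 0.99, 0.9, 0.5005, 0.5):
    print(f"p={p}: one-box EV=${expected_payout('one-box', p):,.0f}, "
          f"two-box EV=${expected_payout('two-box', p):,.0f}")

# One-boxing has the higher expected value whenever
#   1,000,000 * p > 1,000 * p + 1,001,000 * (1 - p),
# i.e. whenever p > 1,001,000 / 2,000,000 = 0.5005.
```

On this reading Omega only needs to beat roughly 50.05% accuracy for one-boxing to win in expectation, which is why the perfect/near-perfect distinction doesn’t change most one-boxers’ answer.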
I think I could pass an intellectual Turing test (the main arguments in either direction aren’t very sophisticated), but maybe it’s easiest to just read, e.g., p. 151ff. of James Joyce’s The Foundations of Causal Decision Theory and note how Joyce understands the problem in pretty much the same way that a one-boxer would.
In particular, Joyce agrees that causal decision theorists would want to self-modify to become one-boxers. (I have heard many two-boxers admit to this.) This doesn’t make sense if they don’t believe in Omega’s prediction abilities.
I wish the polls that started this thread had included those options.
[pollid:1209]
I guess I should give my answer, which isn’t on any polls I’ve seen:
I do everything in my power to make Omega predict that I’ll one-box. In some formulations (mind-reading or the like), that means actually one-boxing. In others (a good history of prediction, but a fair chance that it’s a trick, since such prediction isn’t actually possible), I probably decide at the last minute to two-box. In still others (never specified at all in any version I’ve seen), if I think I can gain fame and fortune by being Omega’s first mis-prediction, I one-box and hope to get $0.
I’m a two-boxer. My rationale is:
As originally formulated by Nozick, Omega is not necessarily omniscient and does not necessarily have anything like divine foreknowledge. All that is said about this is that you have “enormous confidence” in Omega’s power to predict your choices, and that this being has “often correctly predicted your choices in the past (and has never, as far as you know, made an incorrect prediction about your choices)”, and that the being has “often correctly predicted the choices of other people, many of whom are similar to you”. So, all I really know about Omega is that it has a really good track record.
So, nothing in Nozick rules out the possibility of the outcome “b” or “c” listed above.
At the time that you make your choice, Omega has already irrevocably either put $1M in box 2 or put nothing in box 2.
If Omega has put $1M in box 2, your payoff will be $1M if you 1-box or $1.001M if you 2-box.
If Omega has put nothing in box 2, your payoff will be $0 if you 1-box or $1K if you 2-box.
So, whatever Omega has already done, you are better off 2-boxing. And, your choice now cannot change what Omega has already done.
So, you are better off 2-boxing.
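Spelling that dominance step out as a quick enumeration (again just a sketch of mine, using only the payoff table above): hold Omega’s already-fixed move constant and compare the two choices in each state.

```python
# Dominance check: for each already-fixed state of box 2,
# compare the payout of one-boxing vs. two-boxing.

BOX1 = 1_000  # box 1 always contains $1K

for box2 in (1_000_000, 0):  # Omega either filled box 2 or left it empty
    one_box = box2           # take only box 2
    two_box = box2 + BOX1    # take both boxes
    print(f"box 2 = ${box2:,}: one-box pays ${one_box:,}, "
          f"two-box pays ${two_box:,} (difference ${two_box - one_box:,})")

# In both states two-boxing pays exactly $1,000 more: that is the
# dominance argument, given that your choice can't change the state.
```

The one-boxer’s reply, of course, is that the two states aren’t equally likely conditional on your choice, which is exactly where this enumeration and the expected-value sketch above part ways.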
So, basically, I agree with your assessment that “two-boxers believe that all 4 are possible” (or at least I believe that all 4 are possible). Why do I believe that all 4 are possible? Because nothing in the problem statement says otherwise.
ETA:
Also, I agree with your assessment that “one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing)”. But, in thinking this way, one-boxers are reading something into the problem beyond what is actually stated or implied by Nozick.