Caspar Oesterheld and Johannes Treutlein, researchers at the Foundational Research Institute working on decision theory from a Less-Wrong-ish perspective, looked at all the polls and surveys they could find on people’s preferred decision in Newcomb’s problem.
They found, in line with conventional wisdom, that polls of professional philosophers, especially ones specializing in decision theory, tend to yield a substantial but not overwhelming majority in favour of two-boxing, and that polls of other populations mostly yield results closer to 50:50 but leaning towards one-boxing. … Well, except that it looks to me as if those other polls in fact favour one-boxing about as strongly as the philosophers favour two-boxing.
The surveys with the largest populations sampled give the nearest-to-50:50 results.
Two of the polls they found were annual LW surveys; those yielded a very large majority in favour of one-boxing. Some of the others did likewise; they look to me as if they sample quite LW-like populations, but I don’t have a strong opinion on whether it’s more likely that LW has influenced those populations’ ideas about Newcomb, or that LW-like people would tend to prefer one-boxing in any case.
My own response varies with how the problem is presented, as do the responses of most people I’ve discussed it with informally. What conclusions could anyone draw from a blend of such polls? The right answer is clearly “one-box unless you think you can fool Omega”, and most formulations of the question can be read as “do you think you can fool Omega?”.
Now that I think about it, I’ve only seen it discussed here in the context of acausal decision theory, showing that in the perfect-information case, one-boxing is simply correct. What do we learn from any polls that don’t specify the mechanism that closely?
What should I learn from polls showing that 40% of one demographic think they can fool Omega, 60% of another demographic think they can, and 4% in most polls vote for lizard man?
Yeah, I also think the “fooling Omega idea” is a common response. Note, however, that two-boxing is more common among academic decision theorists, all of whom understand that Newcomb’s problem is set up such that you can’t fool Omega. I also doubt that the fooling-Omega idea is the only (or even the main) cause of two-boxing among non-decision theorists.
Actually, it would be interesting to break down the list of reasons people might have for two-boxing, even if we haven’t polled for reasons, only decisions. From https://en.wikipedia.org/wiki/Newcomb%27s_paradox, the outcomes are:
a: Omega predicts two-box, player two-boxes, payout $1,000
b: Omega predicts two-box, player one-boxes, payout $0
c: Omega predicts one-box, player two-boxes, payout $1,001,000
d: Omega predicts one-box, player one-boxes, payout $1,000,000
I claim that one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing), and reason that d > a. And further I think that two-boxers believe that all 4 are possible (b and c being “tricking Omega”) and reason that c > d and a > b, so two-boxing dominates one-boxing.
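Here’s a minimal sketch of both lines of reasoning (payouts from the list above; the predictor-accuracy parameter p is my own illustrative addition, not something any of the polls specify):

```python
# Payouts from the outcome list above, keyed by (Omega's prediction, player's choice).
PAYOUT = {
    ("two-box", "two-box"): 1_000,      # a
    ("two-box", "one-box"): 0,          # b
    ("one-box", "two-box"): 1_001_000,  # c
    ("one-box", "one-box"): 1_000_000,  # d
}

def expected_payout(choice: str, p: float) -> float:
    """Expected payout if Omega predicts the player's actual choice with probability p.

    p is an illustrative accuracy parameter, not part of the problem as usually polled.
    """
    other = "one-box" if choice == "two-box" else "two-box"
    return p * PAYOUT[(choice, choice)] + (1 - p) * PAYOUT[(other, choice)]

# One-boxer-style reasoning: compare expected payouts at a given accuracy.
for p in (0.5, 0.9, 0.99, 1.0):
    print(f"p={p}: one-box {expected_payout('one-box', p):,.0f}  "
          f"two-box {expected_payout('two-box', p):,.0f}")

# Two-boxer-style reasoning: hold Omega's already-fixed prediction constant.
for prediction in ("one-box", "two-box"):
    gain = PAYOUT[(prediction, "two-box")] - PAYOUT[(prediction, "one-box")]
    print(f"Omega predicted {prediction}: two-boxing gains {gain:+,}")  # always +1,000
```

Holding the prediction fixed, two-boxing is always $1,000 better; weighting the outcomes by the accuracy p, one-boxing pulls ahead as soon as p rises even slightly above one half.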
Aside from “lizard man”, what are the other reasons that lead to two-boxing?
I claim that one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing)
Note that Omega isn’t necessarily a perfect predictor. Most one-boxers would also one-box if Omega is a near-perfect predictor.
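A quick back-of-the-envelope check of how much accuracy that actually requires (assuming Omega predicts your actual choice with probability p): one-boxing pays $1,000,000 × p in expectation, while two-boxing pays $1,000 × p + $1,001,000 × (1 − p) = $1,001,000 − $1,000,000 × p. One-boxing therefore comes out ahead whenever p > 0.5005, which is a far lower bar than “near-perfect”.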
Aside from “lizard man”, what are the other reasons that lead to two-boxing?
I think I could pass an intellectual Turing test (the main arguments in either direction aren’t very sophisticated), but maybe it’s easiest to just read, e.g., p. 151ff. of James Joyce’s The Foundations of Causal Decision Theory and note how Joyce understands the problem in pretty much the same way that a one-boxer would.
In particular, Joyce agrees that causal decision theorists would want to self-modify to become one-boxers. (I have heard many two-boxers admit to this.) This doesn’t make sense if they don’t believe in Omega’s prediction abilities.
I guess I should give my answer, which isn’t on any polls I’ve seen:
I do everything in my power to make Omega predict that I’ll one-box. In some formulations (mind-reading or the like), that means actually one-boxing. In others (a good history of prediction, but a fair chance that it’s a trick, since such prediction isn’t actually possible), I probably decide at the last minute to two-box. In still others (where the mechanism is never specified at all, as far as I’ve seen), if I think I can gain fame and fortune by being Omega’s first mis-prediction, I one-box and hope to get $0.
I’m a two-boxer. My rationale is:
As originally formulated by Nozick, Omega is not necessarily omniscient and does not necessarily have anything like divine foreknowledge. All that is said about this is that you have “enormous confidence” in Omega’s power to predict your choices, that this being has “often correctly predicted your choices in the past (and has never, as far as you know, made an incorrect prediction about your choices)”, and that the being has “often correctly predicted the choices of other people, many of whom are similar to you”. So, all I really know about Omega is that it has a really good track record.
So, nothing in Nozick rules out the possibility of the outcome “b” or “c” listed above.
At the time that you make your choice, Omega has already irrevocably either put $1M in box 2 or put nothing in box 2.
If Omega has put $1M in box 2, your payoff will be $1M if you 1-box or $1.001M if you 2-box.
If Omega has put nothing in box 2, your payoff will be $0 if you 1-box or $1K if you 2-box.
So, whatever Omega has already done, you are better off 2-boxing. And, your choice now cannot change what Omega has already done.
So, you are better off 2-boxing.
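A minimal sketch of that dominance argument (just the two cases, with the payouts given above):

```python
# Whatever Omega has already put in box 2, taking both boxes is worth exactly $1K more.
BOX_1 = 1_000  # box 1 always contains $1K

for box_2 in (1_000_000, 0):  # Omega has already put either $1M or nothing in box 2
    one_box = box_2           # take only box 2
    two_box = box_2 + BOX_1   # take both boxes
    print(f"box 2 holds ${box_2:,}: 1-boxing pays ${one_box:,}, 2-boxing pays ${two_box:,}")
```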
So, basically, I agree with your assessment that “two-boxers believe that all 4 are possible” (or at least I believe that all 4 are possible). Why do I believe that all 4 are possible? Because nothing in the problem statement says otherwise.
ETA:
Also, I agree with your assessment that “one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing)”. But, in thinking this way, one-boxers are reading something into the problem beyond what is actually stated or implied by Nozick.