I claim that one-boxers do not believe b and c are possible, because they think Omega is either cheating or a perfect predictor (which amounts to the same thing).
Note that Omega isn’t necessarily a perfect predictor. Most one-boxers would also one-box if Omega is a near-perfect predictor.
Aside from “lizard man”, what other reasons lead to two-boxing?
I think I could pass an intellectual Turing test (the main arguments in either direction aren’t very sophisticated), but maybe it’s easiest to just read, e.g., p. 151ff. of James Joyce’s The Foundations of Causal Decision Theory and note how Joyce understands the problem in pretty much the same way that a one-boxer would.
In particular, Joyce agrees that causal decision theorists would want to self-modify to become one-boxers. (I have heard many two-boxers admit to this.) This wouldn’t make sense if they didn’t believe in Omega’s prediction abilities.
I wish the polls that started this thread had included those options.
[pollid:1209]