Succinctly, if someone runs into an Omega who says, “I will give you $1,000,000 if you are someone who would have two-boxed in Newcomb’s problem. If you would have one-boxed, I will kill your family,” then the two-boxers have much better outcomes than the one-boxers. You may object that this seems silly and artificial. I think it is no more so than the original problem.
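To make the comparison concrete, here is a minimal sketch (mine, not from the thread) of the payoff table this anti-Newcomb scenario implies, alongside the standard Newcomb payoffs; the -10,000,000 stand-in for “Omega kills your family” is an arbitrary assumed figure.

```python
# Payoffs for a fixed disposition facing either the standard Newcomb
# predictor or the hypothetical "anti-Newcomb" predictor described above.
# The -10_000_000 value is an assumed stand-in for a catastrophic outcome.

PAYOFFS = {
    # predictor -> {disposition: payoff}
    "standard_newcomb": {"one-boxer": 1_000_000, "two-boxer": 1_000},
    "anti_newcomb":     {"one-boxer": -10_000_000, "two-boxer": 1_000_000},
}

for predictor, by_disposition in PAYOFFS.items():
    for disposition, payoff in by_disposition.items():
        print(f"{predictor:17s} {disposition:10s} {payoff:>12,}")
```

The point the sketch illustrates: whichever disposition is “correct” depends entirely on which predictor you happen to meet, so either thought experiment alone tells you little.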
And yes—I think EY is very wrong in the post you link to, and this is a response to the consensus LW view that one-boxing is correct.
The post doesn’t seem to allow this possibility; it seems to say that the opaque box is empty. Relevant quote:
The intention was to portray the transparent box as having lots of money—call it $1,000,000.
Well, certainly in the setup you describe there is no reason to one-box. But that is not Newcomb’s setup, is it? So you are solving a different problem, assuming it even needs solving.
Well, if you were confronted with Newcomb’s problem, would you one-box or two-box? How fully do you endorse your answer as being “correct,” or maximally rational, or anything along those lines?
I’m not trying to argue against anyone who says they aren’t sure but thinks they would one-box or two-box in some hypothetical, or against anyone who has thought carefully about the possible existence of unknown unknowns and come down on the “I have no idea what’s optimal, but I’ve predetermined to do X for the sake of predictability” side, for either X.
I am arguing against people who think that Newcomb’s problem shows causal decision theory is wrong and that they have a better alternative. I think Newcomb’s problem provides no (interesting, nontrivial) evidence against CDT.