Well, if you were confronted with Newcomb’s problem, would you one-box or two-box? How fully do you endorse your answer as being “correct” or maximally rational, or anything along those lines?
I’m not trying to argue against anyone who says they aren’t sure but thinks they would one-box or two-box in some hypothetical, or anyone who has thought carefully about the possible existence of unknown unknowns and come down on the side of “I have no idea what’s optimal, but I’ve predetermined to do X for the sake of predictability,” for either X.
I am arguing against people who think that Newcomb’s problem means causal decision theory is wrong, and that they have a better alternative. I think Newcomb’s problem provides no (interesting, nontrivial) evidence against CDT.