Also, I don’t think Eliezer keeps harping on Newcomb’s problem because he anticipates experiencing precisely that scenario. I see several important points that I don’t think have been clearly made (not that I’m the one to do so):
We can choose whether and when to implement particular decision algorithms, including classical causal decision theory (CCDT). That choice may turn out to be trivial, or it may be subtle, but either way it is a worthy question for a rationalist.
Although implementing CCDT maximizes your return for any fixed set of options, there are cases where the options available to you depend on the output of someone else's model of your decision algorithm. I'm not talking about Omega; I'm talking about ordinary human social life. We base a large portion of our interactions with others on our anticipations of how they might respond. (This isn't often done rationally by anyone's standards, but it can be.)
It gets confusing (Hofstadterian, in particular) here, but a plausibly better outcome might be reached in the Prisoner's Dilemma by selfish non-strangers each modeling the other's likely decision process and recognizing that, under mutual modeling, only C-C and D-D are stable outcomes.
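As a toy sketch of what I mean (my own illustration; the specific "cooperate iff I predict you cooperate" rule and the payoff numbers are just assumptions for the example, not part of the argument above):

```python
# Each player follows the rule "cooperate iff I predict my counterpart cooperates,"
# and mutual modeling is idealized as each player's prediction being correct.
# An outcome is "stable" if neither player's rule tells them to deviate from it.

PAYOFFS = {  # (my move, their move) -> my payoff, standard PD ordering T > R > P > S
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def policy(predicted_other_move: str) -> str:
    """Cooperate exactly when the (accurate) model says the other will cooperate."""
    return "C" if predicted_other_move == "C" else "D"

def stable(a: str, b: str) -> bool:
    """(a, b) is a fixed point if each move matches the policy's response to the other."""
    return a == policy(b) and b == policy(a)

for a in "CD":
    for b in "CD":
        print(a, b,
              "stable" if stable(a, b) else "unstable",
              "payoffs:", PAYOFFS[(a, b)], PAYOFFS[(b, a)])
```

Running this marks only C-C and D-D as fixed points (C-D and D-C are unstable), and C-C is the one both players prefer, which is why mutual modeling plausibly gets selfish non-strangers to cooperation.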
Of course, I still feel a bit uncomfortable with this line of reasoning.
Apply the Least Convenient Possible World principle.