There’s a difference between Newcomb, where you make your decision after Omega has made its prediction, and “meta-Newcomb”, where you’re allowed to precommit before Omega makes its prediction, for example by choosing your own programming.
I agree that meta-Newcomb is not the same problem, and that in meta-Newcomb CDT would precommit to one-boxing.
However, even in normal Newcomb, it’s possible to have agents that behave as if they had precommitted once they realize that precommitting would have been better for them. More specifically, in pseudocode:
function take_decision(information_about_world, actions):
    for each action in actions:
        calculate the utility that an agent that always returns that action would have got
    return the action that got the highest utility
There are some subtleties, notably about how to take the information about the world into account, but an agent built on this model should one-box on problems like Newcomb’s, while two-boxing in cases where Omega decides by flipping a coin.
(Such an agent, however, doesn’t cooperate with itself in the Prisoner’s Dilemma; you need a better agent for that.)
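As a concrete illustration, here is a minimal Python sketch of that kind of agent. It is not anyone’s canonical implementation: the names payoff, take_decision, accurate_omega and coin_flip_omega are invented for this example, and it makes the simplifying assumption that Omega can be summarised as a probability of predicting one-boxing, given the constant policy being evaluated.

def payoff(action, prediction):
    # Standard Newcomb payoffs: the opaque box contains $1,000,000 only if
    # Omega predicted one-boxing; the transparent box always contains $1,000.
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque if action == "one-box" else opaque + 1_000

def take_decision(prob_omega_predicts_one_boxing, actions=("one-box", "two-box")):
    # Score each action by the expected utility of the constant policy
    # "always take this action", given how Omega reacts to that policy.
    def policy_utility(action):
        p = prob_omega_predicts_one_boxing(action)
        return p * payoff(action, "one-box") + (1 - p) * payoff(action, "two-box")
    return max(actions, key=policy_utility)

# An accurate Omega predicts exactly what the constant policy does;
# a coin-flipping Omega ignores the agent entirely.
accurate_omega = lambda policy_action: 1.0 if policy_action == "one-box" else 0.0
coin_flip_omega = lambda policy_action: 0.5

print(take_decision(accurate_omega))   # one-box
print(take_decision(coin_flip_omega))  # two-box

With the accurate predictor, the one-boxing policy scores $1,000,000 against $1,000, so the agent one-boxes; against the coin flip, both policies face the same distribution of predictions, so the extra $1,000 makes two-boxing the better policy.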
You are 100% correct. However, if you say “it’s possible to have agents that behave as if they had precommitted”, then you are no longer talking about the best decision to make in this situation, but about the best decision theory to have in this situation, and that is, again, meta-Newcomb, because deciding which decision theory you’re going to follow is a decision you have to make before Omega makes its prediction. Switching to this decision theory after Omega has made its prediction obviously doesn’t work, so this is not a solution to Newcomb.