I’ve since decided that one-boxing in Transparent Newcomb is the correct decision, because being the sort of agent that one-boxes means being the sort of agent that more frequently finds the first box filled (I think I only fully realized this after reading Eliezer’s paper on TDT, which I hadn’t read at the time of this thread).
So the individual “losing” decision is actually part of a decision theory that wins *overall*, and is therefore the correct decision, no matter how counterintuitive.
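A minimal sketch of the payoff logic, assuming a predictor of adjustable accuracy and the standard illustrative amounts ($1,000,000 in the first box, $1,000 in the second); these numbers and the accuracy figure are just assumptions for illustration, not from the original thread:

```python
import random

def average_payoff(policy: str, accuracy: float = 0.99, trials: int = 100_000) -> float:
    """Average winnings for an agent with a fixed policy, facing a predictor
    that guesses that policy correctly with the given accuracy."""
    total = 0
    for _ in range(trials):
        # The predictor fills the first box iff it predicts one-boxing.
        if policy == "one-box":
            predicts_one_box = random.random() < accuracy
        else:
            predicts_one_box = random.random() >= accuracy
        first_box = 1_000_000 if predicts_one_box else 0
        total += first_box if policy == "one-box" else first_box + 1_000
    return total / trials

for policy in ("one-box", "two-box"):
    print(f"{policy}: ${average_payoff(policy):,.0f}")
# Roughly: one-box: $990,000    two-box: $11,000
```

In any single filled-box situation, taking both boxes strictly dominates; but the *policy* of one-boxing is what gets the box filled in the first place, so the one-boxing agent comes out far ahead on average.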
Mind you, as a practical matter, I think it’s significantly harder for a human to choose to one-box in the case of Transparent Newcomb. I don’t know if I could manage it if I were actually presented with the situation, though I don’t think I’d have a problem with classical Newcomb.