Right, the XDT ANP (anti-Newcomb problem). Because this is in fact a decision-controlled problem, but only from the perspective of an XDT agent.
It is decision-determined from the perspective of any agent. The payoff depends only on the agent's decision: namely, it's $1000 for two-boxing and $0 for one-boxing.
And so an XDT agent can simply choose to receive the $1M on this problem if it knows that's what it's facing. $1M being bigger than $1000, I think it should do so.
Look at the problem from the perspective of the precursor. The precursor knows that XDT two-boxes on the problem. There is no way to change this fact. So one box is going to be empty. Therefore, building an XDT agent in this situation is no worse than building any other agent.
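A minimal sketch of that precursor-side calculation, in Python. The dollar amounts and the rule "the box is filled iff XDT is predicted to one-box on this very problem" are assumptions reconstructed from this exchange, not a canonical statement of the problem.

```python
SMALL = 1_000
BIG = 1_000_000

def box_contents(xdt_decision: str) -> int:
    # Assumed rule: the box is filled iff XDT is predicted to one-box
    # on this problem (reconstructed from the discussion, not a spec).
    return BIG if xdt_decision == "one-box" else 0

def payoff(agent_decision: str, xdt_decision: str) -> int:
    # One-boxing takes only the (possibly empty) box; two-boxing adds the
    # guaranteed small amount on top of whatever the box contains.
    return box_contents(xdt_decision) + (SMALL if agent_decision == "two-box" else 0)

# The premise above: XDT two-boxes on this problem, so the box is empty
# no matter which agent the precursor builds.
xdt_decision = "two-box"
for build in ("XDT", "CDT"):
    # The XDT agent enacts xdt_decision; any other agent simply two-boxes,
    # since the box contents don't depend on *its* decision.
    decision = xdt_decision if build == "XDT" else "two-box"
    print(f"precursor builds {build}: ${payoff(decision, xdt_decision):,}")
# Both lines print $1,000: under this premise, building an XDT agent is
# no worse than building anything else.
```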
Yeah, sorry, I misspoke. The contents of the boxes are controlled by the agent's decision only for an XDT agent.
I am using XDT here in the sense of “the correct decision algorithm (whatever it is).” An XDT agent, if faced with the XDT anti-Newcomb problem, can, based on its decision, either get $1M or $1k. If it takes the $1M, it loses in the sense that it does worse on this problem than a CDT agent (who faces the same full box, takes both, and walks away with $1,001,000). If it takes the $1k, it loses in the sense that it just took $1k over $1M :P
And because XDT’s decision controls the contents of the box, when you say “the payoff is $1000 for two-boxing and $0 for one-boxing,” you’re begging the question about what you think the correct decision algorithm should do.
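For concreteness, here is the $1M-versus-$1k choice from a couple of messages up, worked through in the same assumed payoff model as the earlier sketch (the “filled iff XDT one-boxes” rule is still a reconstruction from this exchange):

```python
SMALL = 1_000
BIG = 1_000_000

def box_contents(xdt_decision: str) -> int:
    # Assumed rule: box holds $1M iff the predictor expects XDT to one-box
    # on this very problem.
    return BIG if xdt_decision == "one-box" else 0

def payoff(agent_decision: str, xdt_decision: str) -> int:
    return box_contents(xdt_decision) + (SMALL if agent_decision == "two-box" else 0)

# For the XDT agent itself, the prediction tracks its own decision,
# so the box contents move with whatever it chooses:
print(f"XDT one-boxes:        ${payoff('one-box', 'one-box'):,}")   # $1,000,000
print(f"XDT two-boxes:        ${payoff('two-box', 'two-box'):,}")   # $1,000

# A CDT agent dropped into the world where XDT one-boxes sees a full box
# and takes both:
print(f"CDT (XDT one-boxes):  ${payoff('two-box', 'one-box'):,}")   # $1,001,000
```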
The problem is in the definition of “correct”. From my point of view, the “correct” decision algorithm means the algorithm that a rational precursor should build. That is, it is the algorithm whose instantiation by the precursor yields at least as much payoff as instantiating any other algorithm would.
Well, I agree with you there :P But I think you’re cashing this out as the fixed point of a process, rather than as the maximization I am cashing it out as.
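One way to make that contrast concrete, purely as a sketch: under the same assumed payoffs as above, a “maximization” reading picks the decision whose induced payoff is largest, while a “fixed point” reading looks for a decision that is a best response to the box contents it itself induces. Both formalizations are illustrative guesses rather than anything either side stated here.

```python
SMALL, BIG = 1_000, 1_000_000
DECISIONS = ("one-box", "two-box")

def box_contents(xdt_decision: str) -> int:
    # Same assumed rule as in the earlier sketches.
    return BIG if xdt_decision == "one-box" else 0

def payoff(agent_decision: str, xdt_decision: str) -> int:
    return box_contents(xdt_decision) + (SMALL if agent_decision == "two-box" else 0)

# Maximization reading: choose the decision whose *induced* payoff is largest.
maximizing = max(DECISIONS, key=lambda d: payoff(d, d))

# Fixed-point reading: a decision is self-consistent if it is a best response
# to the box contents that it itself induces (contents held fixed).
fixed_points = [
    d for d in DECISIONS
    if payoff(d, d) == max(payoff(d2, d) for d2 in DECISIONS)
]

print("maximization picks:", maximizing)                 # one-box  -> $1M
print("self-consistent fixed point(s):", fixed_points)   # ['two-box'] -> $1k
```

On this toy reading, the maximization picks one-boxing (the $1M) while the only self-consistent fixed point is two-boxing (the $1k), which is roughly where the two positions above part ways.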