No, I mean I think CDT can one-box within the regular Newcomb’s problem situation, if its reasoning capabilities are sufficiently strong. In detail: here and in the thread here.
No, if you have an agent that is one-boxing, either it is not a CDT agent or the game it is playing is not Newcomb’s problem. More specifically, in your first link you describe a game that is not Newcomb’s problem and in the second link you describe an agent that does not implement CDT.
More specifically, in your first link you describe a game that is not Newcomb’s problem and in the second link you describe an agent that does not implement CDT.
It would be a little more helpful, although probably not quite as cool-sounding, if you explained in what way the game is not Newcomb’s in the first link, and the agent not a CDT in the second. AFAIK, the two links describe exactly the same problem and exactly the same agent, and I wrote both comments.
It would be a little more helpful, although probably not quite as cool-sounding,
That doesn’t seem to make helping you appealing.
if you explained in what way the game is not Newcomb’s in the first link,
The agent believes that it has a 50% chance of being in an actual Newcomb’s problem and a 50% chance of being in a simulation which will be used to present another agent with a Newcomb’s problem some time in the future.
and the agent not a CDT in the second.
Orthonormal already explained this in the context.
Yes, I have this problem; I’m working on it. I’m sorry, and thanks for your patience!
The agent believes that it has a 50% chance of being in an actual Newcomb’s problem and a 50% chance of being in a simulation which will be used to present another agent with a Newcomb’s problem some time in the future.
Yes, except for s/another agent/itself/. In what way is this not a correct description of a pure Newcomb’s problem from the agent’s point of view? That is my original, still-unanswered question.
Note: in the usual formulations of Newcomb’s problem for UDT, the agent knows exactly that—it is called twice, and when it is running it does not know which of the two calls is being evaluated.
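(A minimal sketch of that two-call structure, with illustrative names and the standard $1,000,000/$1,000 payoffs, neither of which comes from this thread: the same decision function is invoked once as the predictor’s simulation and once in the real game, and nothing distinguishes the two calls from the inside.)

```python
# Sketch of the "called twice" formulation of Newcomb's problem. The
# $1,000,000 / $1,000 payoffs are the standard ones; function names are
# illustrative, not from the thread.

def agent():
    """The agent's decision procedure. It gets no information about whether
    this particular call is the predictor's simulation or the real game."""
    return "one-box"  # or "two-box"; the point is that the call site is opaque

def run_newcomb():
    # First call: the predictor simulates the agent to decide the box contents.
    predicted = agent()
    opaque_box = 1_000_000 if predicted == "one-box" else 0

    # Second call: the real game, same function, same (lack of) information.
    choice = agent()
    return opaque_box if choice == "one-box" else opaque_box + 1_000

print(run_newcomb())  # 1,000,000 if agent() one-boxes, 1,000 if it two-boxes
```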
Orthonormal already explained this in the context.
I answered his explanation in the context, and he appeared to agree. His other objection seems to be based on a mistaken understanding.
This is worth writing up as its own post: a CDT agent with a non-self-centered utility function (like a paperclip maximizer) and a certain model of anthropics (in which, if it knows it’s being simulated, it views itself as possibly within the simulation), when faced with a Predictor that predicts by simulating (which is not always the case), one-boxes on Newcomb’s Problem.
Relative to the academic literature on CDT, this is a novel and surprising result, not the prediction those authors would have expected. But it seems to me that if you violate any of the conditions above, one-boxing collapses back into two-boxing; and furthermore, such an agent still won’t cooperate in the Prisoner’s Dilemma against a CDT agent with an orthogonal utility function. That, at least, follows inescapably from the independence assumption.
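As a rough illustration of that expected-value argument (a sketch under the assumptions stated above: the agent gives 50% credence to being the predictor’s simulation, its utility is non-indexical, e.g. paperclips or total real-world money, and the payoffs are the standard $1,000,000/$1,000; all names here are illustrative):

```python
# Causal expected utility of one-boxing vs. two-boxing for a CDT agent that
# thinks it may be the predictor's simulation. Assumes the standard payoffs
# and a non-self-centered utility (it values the real-world payout whichever
# copy it turns out to be).

BIG, SMALL = 1_000_000, 1_000

def causal_eu(action, p_sim, real_agent_action="one-box"):
    # Branch 1: I am the simulation. My action causally fixes the opaque box's
    # contents for the real run; the real agent's own action is treated as a
    # fixed background fact, not something I cause.
    contents = BIG if action == "one-box" else 0
    sim_branch = contents + (SMALL if real_agent_action == "two-box" else 0)

    # Branch 2: I am the real agent. The contents are already fixed (an unknown
    # constant that is independent of my action, so it drops out of the
    # comparison); two-boxing simply adds the small box.
    fixed_contents = 0
    real_branch = fixed_contents + (SMALL if action == "two-box" else 0)

    return p_sim * sim_branch + (1 - p_sim) * real_branch

for p in (0.5, 0.0):
    diff = causal_eu("one-box", p) - causal_eu("two-box", p)
    print(f"p(simulation)={p}: EU(one-box) - EU(two-box) = {diff:+,.0f}")
# p=0.5 gives +499,500 (one-box wins); p=0.0 gives -1,000 (back to two-boxing).
```

Setting p(simulation) to 0 recovers the usual two-boxing recommendation, which is one way of seeing that the “Predictor predicts by simulating” condition is doing real work; the non-indexical utility matters because a purely selfish simulated copy would not count the real copy’s winnings as its own.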