I think you’re misunderstanding something, but I can’t quite pin down what it is. For clarity, here is my analysis of the events in the thought experiment in chronological order (a rough code sketch of these steps follows the list):
1. Omega decides to host a Newcomb’s problem, and chooses an agent (Agent A) to participate in it.
2. Omega scans Agent A and simulates their consciousness (call the simulation Agent B), placing it in a “fake” Newcomb’s problem situation (e.g. Omega has made no prediction about Agent B, but says that it has in the simulation in order to get a result).
3. Agent B makes its decision, and Omega makes its prediction based on that.
4. Omega shows itself to Agent A and initiates Newcomb’s problem in the real world, having committed to its prediction in step 3.
5. Agent A makes its decision and Newcomb’s problem is done.
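As a concreteness check, here is a minimal sketch of those five steps in Python, under assumptions that go beyond the post itself: standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 if Omega predicts one-boxing) and a shared decision procedure that both copies run, with no signal telling either copy which one it is. The names here are hypothetical and purely illustrative.

```python
# Hypothetical sketch of the five steps above.  Assumes standard Newcomb
# payoffs and that Agent A and Agent B run the same decision procedure,
# which receives no information about whether it is the real agent or the copy.

def run_newcomb(agent_policy):
    # Steps 2-3: Omega runs the scanned copy (Agent B) in a fake setup and
    # bases its prediction on that copy's choice.
    simulated_choice = agent_policy()
    predicted_one_box = (simulated_choice == "one-box")

    # Step 4: the opaque box is filled according to the committed prediction,
    # before Agent A chooses.
    opaque_box = 1_000_000 if predicted_one_box else 0

    # Step 5: Agent A chooses in the real world; the contents are already
    # fixed, and A's choice has no causal path back to them.
    real_choice = agent_policy()
    return opaque_box + (1_000 if real_choice == "two-box" else 0)

print(run_newcomb(lambda: "one-box"))  # 1000000
print(run_newcomb(lambda: "two-box"))  # 1000
```

Note that the only causal path from any decision to the opaque box’s contents runs through the simulated call, which is the point of the chronological ordering above.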
From a third-party perspective, there is no backward causality. The decision of Agent B influences Omega’s prediction, but the decision of Agent A does not. Likewise, the decision of Agent B does not influence the decision of Agent A, as it is hidden by Omega (this is why the simulation part of the EV calculations assumes a uniform prior over the decision of Agent A). There is no communication or causal influence between the simulation and reality besides the simulation influencing Omega’s prediction.

The sole factor that makes it appear as though there is some kind of backward causality is that, subjectively, neither agent knows whether they are Agent A or Agent B, and so each acts as though there is a 50% chance of having forward causal influence over Omega’s prediction: not the prediction that Omega purports to already have made, since there is no way to influence that, but the prediction that Omega will make in the real world based on Agent B’s decision. That is, the only sense in which the agent in my post has causal influence over Omega’s prediction is this: if they are Agent B, they will make their choice, discover that the whole thing was fake and the boxes are full of static or something, and cease to exist as the simulation is terminated; their decision will then influence the prediction Omega claims (to Agent A) to have already made.
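To make that concrete, here is a minimal worked version of the expected-value reasoning, under assumptions the comment does not pin down: standard Newcomb payoffs ($1,000 transparent, $1,000,000 for a predicted one-box), a 50/50 credence over being Agent A or Agent B, a uniform prior over the other copy’s choice, and the convention that Agent B values whatever money the real Agent A ends up with. The exact figures in the original post’s EV calculations may differ.

```python
# Hypothetical worked example; the numbers and names are illustrative
# assumptions, not taken from the original post's calculation.

SMALL, BIG = 1_000, 1_000_000

def payoff(real_choice, predicted_one_box):
    """Money the real Agent A ends up with, given A's choice and Omega's prediction."""
    opaque = BIG if predicted_one_box else 0
    return opaque + (SMALL if real_choice == "two-box" else 0)

def causal_ev(my_choice):
    # Case 1 (prob 0.5): I am Agent A.  The prediction is already fixed by
    # Agent B's earlier choice, so I hold a uniform prior over it; my own
    # choice only adds or forgoes the transparent $1,000.
    ev_if_real = 0.5 * payoff(my_choice, True) + 0.5 * payoff(my_choice, False)

    # Case 2 (prob 0.5): I am Agent B.  My choice causally fixes the
    # prediction; Agent A's later, hidden choice gets a uniform prior.
    prediction = (my_choice == "one-box")
    ev_if_sim = 0.5 * payoff("one-box", prediction) + 0.5 * payoff("two-box", prediction)

    return 0.5 * ev_if_real + 0.5 * ev_if_sim

for choice in ("one-box", "two-box"):
    print(choice, causal_ev(choice))  # one-box 750250.0, two-box 250750.0
```

Under these assumptions, one-boxing comes out roughly $500,000 ahead, even though every step of the calculation is purely forward-causal.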
I suspect the misunderstanding here is that I was too vague with the wording of the claim that “in the case that the agent is a simulation, its choice actually does have a causal influence on the ‘real’ prediction”. I hope that the distinction between Agents A and B clears up what I’m saying.
“I think you’re misunderstanding something, but I can’t quite pin down what it is.”
This is quite likely. I suspect that my understanding of CDT, from a technical perspective, remains incorrect, even after a fair bit of reading and discussion. From what I understand, CDT does not include the possibility that it can be simulated well enough that Omega’s prediction is binding. That entanglement (the REAL, CURRENT decision being tied to a PREVIOUS observation) is the backward causality that breaks it.
The point of the view expressed in this post is that you DON’T have to see the decisions of the real and simulated people as being “entangled”. If you just treat them as two different people, making two decisions (which, if Omega is good at simulation, are likely to be the same), then Causal Decision Theory works just fine, recommending taking only one box.
The somewhat strange aspect of the problem is that when making a decision in the Newcomb scenario, you don’t know whether you are the real or the simulated person. But less drastic ignorance of your place in the world is a normal occurrence. For instance, you might know (from family lore) that you are descended from some famous person, but be uncertain whether you are the famous person’s grandchild or great-grandchild. Such uncertainty about “who you are” doesn’t undermine Causal Decision Theory.