I think you’re misunderstanding something, but I can’t quite pin down what it is.
This is quite likely. I suspect that my understanding of CDT, from a technical perspective, remains incorrect, even after a fair bit of reading and discussion. From what I understand, CDT does not include the possibility that the agent can be simulated well enough that Omega's prediction is binding. It is the backward causality (the REAL, CURRENT decision being entangled with a PREVIOUS observation) that breaks it.
The point of the view expressed in this post is that you DON’T have to see the decisions of the real and simulated people as being “entangled”. If you just treat them as two different people, making two decisions (which if Omega is good at simulation are likely to be the same), then Causal Decision Theory works just fine, recommending taking only one box.
The somewhat strange aspect of the problem is that when making a decision in the Newcomb scenario, you don't know whether you are the real or the simulated person. But less drastic ignorance of your place in the world is a normal occurrence. For instance, you might know (from family lore) that you are descended from some famous person, but be uncertain whether you are the famous person's grandchild or great-grandchild. Such uncertainty about "who you are" doesn't undermine Causal Decision Theory.
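The payoff logic of this "two decision-makers" view can be made concrete with a small sketch. The assumptions here are mine, not spelled out above: Omega's simulation is accurate enough that the simulated and real instances make the same choice, the simulation's choice causally fixes the contents of box B before the real person decides, and what you ultimately care about is the money the real person receives.

```python
# Illustrative payoff calculation for Newcomb's problem, treating the
# simulated and real deciders as two people running the same algorithm.
# Assumption (hypothetical, for this sketch): both instances make the
# same choice, and the simulation's choice fixes the big box's contents.

BIG_BOX = 1_000_000   # placed in box B only if the simulation one-boxes
SMALL_BOX = 1_000     # always in box A

def real_persons_payoff(choice: str) -> int:
    """Money the real person receives, given that the simulated and
    real instances both make `choice` ('one-box' or 'two-box')."""
    # Causal step: the simulation's choice determines box B's contents.
    big_box_filled = (choice == "one-box")
    if choice == "one-box":
        return BIG_BOX if big_box_filled else 0
    else:
        return SMALL_BOX + (BIG_BOX if big_box_filled else 0)

print(real_persons_payoff("one-box"))   # 1000000
print(real_persons_payoff("two-box"))   # 1000
```

Under these assumptions the causal consequence of deliberating your way to "one-box" is a million dollars for the real person, versus a thousand for "two-box", which is why CDT, applied to the two-people picture, recommends one-boxing.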