The point of the view expressed in this post is that you DON’T have to see the decisions of the real and simulated people as being “entangled”. If you just treat them as two different people, making two decisions (which if Omega is good at simulation are likely to be the same), then Causal Decision Theory works just fine, recommending taking only one box.
The somewhat strange aspect of the problem is that when making a decision in the Newcomb scenario, you don’t know whether you are the real or the simulated person. But less drastic ignorance of your place in the world is a normal occurrence. For instance, you might know (from family lore) that you are descended from some famous person, but be uncertain whether you are the famous person’s grandchild or great-grandchild. Such uncertainty about “who you are” doesn’t undermine Causal Decision Theory.
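The expected-value arithmetic behind this view can be sketched numerically. The following is a minimal illustration, not anything from the post itself: it assumes the standard payoffs (a transparent box A with $1,000 and an opaque box B with $1,000,000 if Omega predicted one-boxing), assumes you assign probability 0.5 to being the simulation, and assumes the real and simulated people make the same choice with some probability `accuracy` reflecting Omega's skill. Under those assumptions, purely causal reasoning favors one-boxing, because in the branch where you are the simulation, your choice causally determines what the real person's box contains.

```python
# Illustrative payoff sketch (all numbers are assumptions, not from the post):
# CDT with uncertainty about whether you are the real chooser or Omega's
# simulation. Box A always holds $1,000; box B holds $1,000,000 iff Omega
# predicted one-boxing. The two copies make the same choice with
# probability `accuracy`.

A, B = 1_000, 1_000_000

def expected_value(choice, accuracy=0.99, p_sim=0.5):
    """Causal expected value of `choice` ('one' or 'two'), averaging over
    whether you are the simulation (your choice fixes Omega's prediction)
    or the real person (the prediction was already fixed by the sim)."""
    if choice == "one":
        # If simulated: prediction = one-box, so B gets filled; the real
        # person matches you with prob `accuracy` (takes only B) or
        # deviates and two-boxes (takes A + B).
        ev_sim = accuracy * B + (1 - accuracy) * (A + B)
        # If real: B was filled only if the simulation also one-boxed.
        ev_real = accuracy * B
    else:
        # If simulated: prediction = two-box, so B stays empty; the real
        # person most likely two-boxes and gets only A.
        ev_sim = accuracy * A + (1 - accuracy) * 0
        # If real: you take both boxes; B is filled only in the unlikely
        # case that the simulation one-boxed.
        ev_real = A + (1 - accuracy) * B

    return p_sim * ev_sim + (1 - p_sim) * ev_real

print(expected_value("one"))  # 995005.0
print(expected_value("two"))  # 5995.0
```

The comparison is robust to the exact values: as long as `accuracy` is reasonably high and `p_sim` is not negligible, one-boxing dominates, which is the sense in which plain Causal Decision Theory, applied by an agent uncertain of "who it is", already recommends taking one box.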