Had to search to find the rest of the problem (such as what happens if Death predicted you'd be in Aleppo and you're actually there: you die). This was helpful, and I came across a 2008 paper which argues that CDT works here.
I'm still not sure how this differs from Newcomb's problem: if Death predicts you perfectly, your best plan is to accept it and leave your heirs the maximum amount (one-box). And CDT works just fine if you phrase the question as "what is the probability that Death/Omega has correctly predicted your action?" (though that does somewhat bend the "causal" part; I'd prefer the C to stand for Classical anyway).
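To make the "probability of a correct prediction" framing concrete, here's a minimal sketch of the expected-value arithmetic. It assumes the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one), which aren't stated above:

```python
# Expected payoffs in Newcomb's problem when the question is phrased as
# "with what probability p has Death/Omega correctly predicted your action?"
# Assumed standard payoffs: the opaque box holds $1,000,000 iff one-boxing
# was predicted; the transparent box always holds $1,000.

def ev_one_box(p):
    # Predictor is right with probability p, so the opaque box is full.
    return p * 1_000_000

def ev_two_box(p):
    # You always take the $1,000; the opaque box is full only if the
    # predictor was wrong (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.5005, 0.9, 1.0):
    print(f"p={p}: one-box = {ev_one_box(p):>12,.0f}, two-box = {ev_two_box(p):>12,.0f}")
```

With a perfect predictor (p = 1) one-boxing wins by $999,000, and the crossover is already at p > 0.5005; nothing in the arithmetic appeals to causation, which is the sense in which this phrasing bends the "causal" part.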
I think they use Death in Damascus rather than Newcomb because decision theorists agree more about what the correct behaviour is in the former problem.
My original post here is in error; see http://lesswrong.com/r/discussion/lw/orn/making_equilibrium_cdt_into_fdt_in_one_easy_step/ for a more correct version.