The whole point of Newcomb’s problem is that CDT two-boxes and the prediction “isn’t really you”, so that we have a conflict between the intuition to one-box and CDT’s recommendation, and need to resolve it somehow, thus gaining new understanding. What is your thought experiment for?
Problems where CDT loses can (probably mechanically) be transformed into “strategy-equivalent” problems where CDT wins. That’s at least interesting.
It even suggests a decision theory. Just transform the problem and use the strategy that CDT recommends for this new problem.
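To illustrate the transform-then-apply-CDT recipe, here is a minimal sketch. The payoff numbers, the 99% predictor accuracy, and the particular “transformed” variant (moving the agent’s choice to before the prediction is made, so the dependency becomes causal) are my own assumptions, not anything specified above:

    # A minimal sketch, assuming a 99%-accurate predictor and the standard
    # Newcomb payoffs: $1,000,000 in the opaque box, $1,000 in the
    # transparent one. "transformed" stands for the hypothetical variant in
    # which the agent's choice causally precedes, and so determines, the
    # prediction.

    ACCURACY = 0.99
    BIG, SMALL = 1_000_000, 1_000

    def cdt_expected_utility(action, problem):
        if problem == "original":
            # CDT holds the already-made prediction causally fixed; under any
            # fixed probability p that the big box was filled, two-boxing
            # adds SMALL on top, so it dominates.
            p = 0.5  # the ranking of actions does not depend on this value
            return p * BIG + (SMALL if action == "two-box" else 0)
        else:  # "transformed"
            # The action now causally determines the prediction (up to the
            # predictor's accuracy), so CDT sees the dependency directly.
            if action == "one-box":
                return ACCURACY * BIG
            return (1 - ACCURACY) * BIG + SMALL

    for problem in ("original", "transformed"):
        best = max(("one-box", "two-box"),
                   key=lambda a: cdt_expected_utility(a, problem))
        print(problem, "->", best)  # original -> two-box, transformed -> one-box

The point of the sketch is only that CDT’s ranking flips once the dependency is made causal; it is not a claim about what the mechanical transformation would look like in general.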
This is unsurprising: CDT relies on explicit dependencies given by causal definitions, while what you want is to look for logical (ambient) dependencies, for which the particular way the problem was specified (e.g. its physical content defined by causality) is irrelevant. Once you have found the dependencies through such an analysis, all that’s left is applying expected utility, at which point any CDT-specificity is gone (see Controlling Constant Programs).
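To make that last step concrete: once the dependency between decision and prediction is granted, the remaining calculation is just expected utility, with nothing CDT-specific in it. Writing p for the predictor’s accuracy and using the standard Newcomb payoffs (my choice of numbers, not from the comment):

    E[U | one-box]  = p × $1,000,000
    E[U | two-box]  = (1 − p) × $1,000,000 + $1,000

so one-boxing comes out ahead whenever p > 0.5005; the causal details of how the prediction is produced never enter this step.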