The case of CDT vs Newcomb-like problems seems to me to have a lot in common with the different approaches to probability theory. In CDT you consider only one type of information, namely causal dependency, to construct decision trees. This is akin to defining probability as the frequency of some process, so that probabilistic relations become causal ones.
Other approaches like TDT construct decisions using both causal and logical dependency, much as the inductive-logic approach to probability does. Newcomb's problem is not designed to be “unfair” to CDT; it is designed to show the limits of the causal approach, exactly as inferring a past sampling distribution from later draws is a problem solvable only within the second approach (see the third chapter of Jaynes’ book).
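To make the parallel with Jaynes concrete, here is a minimal sketch (my own toy numbers, nothing from the book): drawing without replacement from an urn with 3 red and 7 white balls, learning that the second draw is red changes the probability that the first draw was red, even though the second draw cannot causally influence the first.

```python
from itertools import permutations
from fractions import Fraction

# Urn with 3 red and 7 white balls, two draws without replacement.
balls = ["R"] * 3 + ["W"] * 7

# Every ordered pair of distinct positions is equally likely.
outcomes = list(permutations(range(len(balls)), 2))

def prob(event):
    """Probability of `event` under the uniform distribution over orderings."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_r1 = prob(lambda o: balls[o[0]] == "R")                                # P(1st red)           = 3/10
p_r2 = prob(lambda o: balls[o[1]] == "R")                                # P(2nd red)           = 3/10
p_r1_and_r2 = prob(lambda o: balls[o[0]] == "R" and balls[o[1]] == "R")  # P(both red)          = 1/15
p_r1_given_r2 = p_r1_and_r2 / p_r2                                       # P(1st red | 2nd red) = 2/9

print(p_r1, p_r1_given_r2)  # 3/10 vs 2/9: the later draw is evidence about the earlier one,
                            # even though it cannot causally influence it.
```

The already-made prediction in Newcomb plays the same role as the second draw: it is fixed and causally untouchable by your choice, yet logically correlated with it.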
That said, we can argue about what rationality should really be: just the correct execution of whatever type of agency a system has? Or the general principle that an agent should be able to reason about whatever situation is at hand and deal with it correctly, using all the available information? My sympathy goes to the second view, not just because it is more intuitively appealing, but also because it will be a fundamental necessity for a future AI.