You could look at it another way. If a CDT agent knows it will face unspecified Newcomblike problems in the future, it will want to make the most general precommitment now. Of course you can’t come up with a fully general precommitment that solves all decision problems, because there could be a universe that arbitrarily punishes you for having a specific decision algorithm in your head, and rewards some other silly decision algorithm for being different. But if the universe rewards or punishes you only based on the return value of your algorithm and not its internals, then we can hope to figure out mathematically how the most general precommitment (UDT) should choose its return value in every situation. We already know enough to suspect that it will probably talk about logical implication instead of physical causality, even in a world that runs on physical causality.
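To make the "return value, not internals" point concrete, here is a minimal sketch (not from the original discussion) of a toy Newcomb-like payoff in Python. The predictor just calls the same decision function the agent will call, so the payoff depends only on what that function returns; the names `newcomb_payoff`, `cdt_agent`, and `udt_agent` are hypothetical.

```python
from typing import Callable

def newcomb_payoff(decide: Callable[[], str]) -> int:
    """Payoff for a toy Newcomb problem with a perfect predictor.

    The predictor fills the opaque box with $1,000,000 iff simulating
    decide() returns "one-box". The transparent box always holds $1,000.
    """
    prediction = decide()          # the predictor simulates the agent
    opaque = 1_000_000 if prediction == "one-box" else 0
    choice = decide()              # the agent's actual choice
    if choice == "one-box":
        return opaque
    return opaque + 1_000          # two-boxing also takes the $1,000

# A CDT-style agent reasons that its choice can't causally affect the
# already-filled box, so it two-boxes; a UDT-style agent asks what return
# value it is best for this function to have, given that the predictor
# evaluates the very same function.
cdt_agent = lambda: "two-box"
udt_agent = lambda: "one-box"

print(newcomb_payoff(cdt_agent))   # 1000
print(newcomb_payoff(udt_agent))   # 1000000
```

Nothing in `newcomb_payoff` ever inspects the body of `decide`, only its output, which is the class of situations the argument above is restricted to.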