Here's an attempt to explain it using only causal subjunctives:
Say you build an agent, and you know in advance that it will face a certain decision problem. What choice should you have it make, to achieve as much utility as possible? Take Newcomb's problem. Your choice to make the agent one-box will cause the agent to one-box, getting however much utility is in the box. It will also cause the predictor to predict that the agent will one-box, meaning that the box will be filled. Thus CDT recommends building a one-boxing agent.
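The builder's-eye calculation can be sketched in a few lines. This is a hypothetical illustration, not anything from the original argument: it assumes the standard Newcomb payoffs ($1,000,000 in the opaque box iff one-boxing is predicted, $1,000 always in the transparent box) and a perfectly accurate predictor, so the prediction simply tracks which kind of agent you chose to build.

```python
# Builder's-eye view of Newcomb's problem (assumed standard payoffs).
# With a perfect predictor, your design choice causes both the agent's
# act and the prediction, so CDT evaluates the design by its full payoff.

def payoff(agent_one_boxes: bool) -> int:
    predicted_one_box = agent_one_boxes  # prediction caused by the design choice
    opaque = 1_000_000 if predicted_one_box else 0
    transparent = 1_000
    return opaque if agent_one_boxes else opaque + transparent

# Building a one-boxer yields $1,000,000; building a two-boxer yields
# only $1,000, so CDT applied to the *building* decision favors one-boxing.
print(payoff(True), payoff(False))
```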
In the Sandwich problem, no one is trying to predict the agent. Therefore your choice of how to build it will cause only its actions and their consequences, and so you want the agent to switch to hummus after learning that it is better.
FDT generalizes this approach into a criterion of rightness. It says that the right action in a given decision problem is the one that would be taken by the agent that CDT recommends you build.
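That criterion can be written down almost directly. In this hypothetical sketch, `designs` maps each possible agent design to the act it would take in the problem, and `builder_payoff` is the utility you get from having built that design; both names and the payoff numbers are illustrative assumptions, not part of the original text.

```python
# Sketch of FDT's criterion of rightness: the right act is whatever
# the CDT-recommended design (the one maximizing the builder's payoff)
# would do in the given decision problem.

def fdt_action(designs, builder_payoff):
    best_design = max(designs, key=builder_payoff)
    return designs[best_design]

# Newcomb's problem with a perfect predictor and assumed payoffs:
designs = {"one-boxer": "one-box", "two-boxer": "two-box"}
payoffs = {"one-boxer": 1_000_000, "two-boxer": 1_000}
print(fdt_action(designs, payoffs.get))  # the one-boxer's act
```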
Now, the point where logical uncertainty does come in is the idea of "what you knew at the time". While that doesn't avoid the issue, it puts it into a form that's more acceptable to academic philosophy. Clearly some version of "what you knew at the time" is needed to do decision theory at all, because we want to say that if you get unlucky in a gamble with positive expected value, you still acted rationally.