Jobst Heitzig asked me whether infra-Bayesianism has something to say about the absent-minded driver (AMD) problem. Good question! Here is what I wrote in response:
Philosophically, I believe that it is only meaningful to talk about a decision problem when there is also some mechanism for learning the rules of the decision problem. In ordinary Newcombian problems, you can achieve this by e.g. making the problem iterated. In AMD, iteration doesn't really help, because the driver doesn't remember anything that happened before. We can instead consider a version of iterated AMD where the driver has a probability 0<ϵ≪1 of remembering each intersection, but always remembers whether they arrived at the right destination. This version is equivalent to the following Newcombian problem:
With probability 1−2ϵ, counterfactual A happens, in which Omega decides about both intersections via simulating the driver in counterfactuals B and C.
With probability ϵ, counterfactual B happens, in which the driver decides about the first intersection, and Omega decides about the second intersection via simulating the driver in counterfactual C.
With probability ϵ, counterfactual C happens, in which the driver decides about the second intersection, and Omega decides about the first intersection via simulating the driver in counterfactual B.
For this problem, an IB agent indeed learns the updateless optimal policy (although the learning rate carries an ϵ⁻¹ penalty).
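For reference, the updateless optimal policy in question is the planning-stage optimum of the classic AMD. A minimal sketch, assuming the standard Piccione–Rubinstein payoffs (which the post does not specify): exiting at the first intersection yields 0, exiting at the second yields 4, and continuing past both yields 1. Since the driver cannot distinguish the intersections, the policy is a single probability p of continuing:

```python
# Planning-stage expected utility for the classic absent-minded driver.
# Payoffs (assumed, standard Piccione-Rubinstein version): exit at the
# first intersection -> 0, exit at the second -> 4, continue past both -> 1.
# The driver continues with the same probability p at every intersection.

def expected_utility(p: float) -> float:
    # (1-p)*0: exit at first; p*(1-p)*4: continue, then exit at second;
    # p*p*1: continue past both.
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Grid search for the updateless (planning-optimal) continuation probability.
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(best_p, expected_utility(best_p))  # argmax near p = 2/3, utility 4/3
```

Maximizing 4p − 3p² analytically gives p = 2/3 with expected utility 4/3; this is the policy the IB agent converges to in the iterated version above.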