In recursive decision theory, I think only C,C and D,D are possible. If A and B are perfectly rational, know each other’s preferences, and know the above two facts, then each can model the other by modelling how it would itself behave in the other’s shoes.
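For concreteness, here is a minimal Python sketch of that symmetry argument, using illustrative Prisoner’s Dilemma payoff numbers that are not taken from the problem statement: if both agents reason identically, only the diagonal outcomes are reachable, so the choice reduces to comparing C,C against D,D.

```python
# Illustrative PD payoffs (row player's score); assumed, not from the thread.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def symmetric_choice():
    # If both agents reason identically, off-diagonal outcomes are
    # unreachable, so compare only (C,C) and (D,D).
    return max(["C", "D"], key=lambda move: PAYOFF[(move, move)])

print(symmetric_choice())  # -> C, since PAYOFF[C,C] > PAYOFF[D,D]
```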
This is a simple game, and you state the preferences in the problem statement, so perfect rationality is easy to grant for this constrained case. But those facts are NOT sufficient to perfectly predict the outcome, nor to know that your opponent can perfectly predict it.
Such prediction is actually impossible. It’s not a matter of being “sufficiently intelligent”: a perfect simulation is recursive, in that it includes simulating your opponent’s simulation of your simulation of your opponent’s simulation of you, and so on forever. Actual mirroring or cross-causality (where your decision CANNOT diverge from your opponent’s) requires full state duplication and the prevention of identity divergence. That is not a hurdle that can be overcome in general.
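A toy illustration of that regress, assuming nothing beyond two agents that decide by simulating each other (the function names are hypothetical):

```python
def agent_a():
    # A predicts B's move before choosing its own...
    return "C" if agent_b() == "C" else "D"

def agent_b():
    # ...but B's prediction of A re-invokes A's prediction of B.
    return "C" if agent_a() == "C" else "D"

try:
    agent_a()
except RecursionError:
    # The mutual simulation never bottoms out; it just exhausts the stack.
    print("simulation of a simulation of a simulation ... never terminates")
```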
This is similar to (maybe identical to) Hofstadter’s https://en.wikipedia.org/wiki/Superrationality, and it is very distinct from mere perfect rationality. It’s a cute theory, but it fails in every imaginable situation where agential identity is anything like our current experience.
The agents are provided with both facts: their perfect rationality and their mutual knowledge of each other’s preferences. And I showed a resolution to the recursive-simulation problem: the agents can avoid the regress by predisposing themselves, as sketched below.
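One minimal sketch of what “predisposing” could look like, assuming a toy setup in which each agent publishes its policy as an inspectable commitment (this representation is an illustration, not the construction from the original post): the decision depends on reading the opponent’s commitment, not on simulating the opponent’s reasoning, so no regress arises.

```python
# Each agent commits to a policy before play; here a plain string stands
# in for the commitment. This is an assumed, illustrative representation.
POLICY = "cooperate iff the opponent is committed to this exact policy"

def predisposed_move(own_policy, opponent_policy):
    # No simulation of the opponent's deliberation, hence no recursion:
    # the move is a function of the opponent's published commitment.
    return "C" if opponent_policy == own_policy else "D"

print(predisposed_move(POLICY, POLICY))           # C,C between two such agents
print(predisposed_move(POLICY, "always defect"))  # D against anyone else
```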