Note that when you can have a well-specified Bayesian belief over your partner, these problems don’t arise. However, not both agents can be in this situation: agent A would then have a belief over B, which in turn has a belief over A; if all of these are well-specified Bayesian beliefs, then A has a Bayesian belief over itself, which is impossible.
There are ways to get around this. The most common way in the literature (in fact the only way I have seen) gives every agent a belief over a set of common worlds, where each world contains both the state of the external world and the memory states of all of the agents. The state of the world is then a sufficient statistic for everything that can happen, and beliefs about other players’ beliefs can be derived from each player’s beliefs over the underlying worlds. This does mean you have to agree on the set of “possible memory states” ahead of time, or at least both have beliefs defined over sets that can be consistently combined into a single “set of all possible worlds”.
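Here is a minimal sketch of that construction in Python, assuming a finite set of worlds and a common prior (both assumptions, along with every name below, are mine for illustration): each world is a tuple of (external state, A’s memory, B’s memory), each agent’s belief is the prior conditioned on its own memory, and A’s belief about B’s belief is derived by conditioning on B’s memory in each world A considers possible.

```python
# A minimal sketch of the "common worlds" construction described above.
# All names and states here are hypothetical, chosen for illustration.
from itertools import product

# Each world bundles the external state with both agents' memory states:
# (external_state, memory_A, memory_B).
worlds = list(product(["rain", "sun"], ["mA0", "mA1"], ["mB0", "mB1"]))

# A common prior over worlds (uniform here, for simplicity).
prior = {w: 1.0 / len(worlds) for w in worlds}

def belief_given_memory(prior, agent_index, memory):
    """An agent's belief: the prior conditioned on its own memory state."""
    consistent = {w: p for w, p in prior.items() if w[agent_index] == memory}
    total = sum(consistent.values())
    return {w: p / total for w, p in consistent.items()}

# Agent A (index 1 in the world tuple) with memory "mA0" gets a belief
# over worlds by conditioning the prior on that memory...
belief_A = belief_given_memory(prior, 1, "mA0")

# ...and because each world records B's memory (index 2), A's belief
# about B's belief is derived rather than assumed: in each world A
# considers possible, B's belief is the prior conditioned on B's memory.
a_belief_about_b = {w: belief_given_memory(prior, 2, w[2]) for w in belief_A}
```

Note how the regress never gets started here: every level of higher-order belief is obtained by conditioning the same prior, rather than being stored as a primitive object, so no agent has to carry a full Bayesian model of itself.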
Thanks, removed that section.