Yes, I can certainly argue that. In a sense, the point is even deeper: we have some intuitive heuristics for what it means for players to have “similar algorithms”, but what we ultimately care about is how likely it is that if I cooperate, you cooperate; and when I don’t cooperate, there’s no ground truth about this conditional. It is perfectly possible for one or both of the players (due to their past logical experiences) to believe they are not correlated, AND (this is the important point) if they thus don’t cooperate, this belief will never be falsified. This “falling into a bad equilibrium through your own fault” is exactly the fundamental problem with FixDT (and more generally, with fixed points and action-relevant beliefs).
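Here's a minimal toy sketch of that self-confirming dynamic (everything here is hypothetical illustration, not any actual FixDT implementation): an agent plays against an exact copy of itself, defects whenever it believes the copy is uncorrelated with it, and only gets evidence about the correlation when it cooperates. So a pessimistic prior freezes itself in place.

```python
# A hypothetical toy model: all names, thresholds, and update rules here
# are made up for illustration.

def play_round(belief_correlated: float, threshold: float = 0.5) -> tuple[str, float]:
    """One PD-like round against an exact copy of ourselves.

    Returns the action taken and the (possibly unchanged) belief that
    the opponent's action is correlated with ours."""
    my_action = "C" if belief_correlated >= threshold else "D"
    # Against an exact copy, the opponent's action always mirrors ours.
    opp_action = my_action
    if my_action == "C":
        # Only by cooperating do we observe "I cooperated and so did the
        # copy", which is evidence for correlation: nudge the belief up.
        belief_correlated = min(1.0, belief_correlated + 0.1)
    # If we defected, the round is D/D and tells us nothing about
    # "had I cooperated, would you have?" -- the belief stays frozen.
    return my_action, belief_correlated

belief = 0.2  # pessimistic prior: "we are probably not correlated"
for _ in range(20):
    action, belief = play_round(belief)
print(action, belief)  # still "D", still 0.2: the belief was never falsified
```

The defecting agent's belief is wrong (its copy would in fact have mirrored its cooperation), but nothing in its experience stream ever contradicts it.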
More realistically, both players will continue getting observations about ground-truth math and playing games with other players, and so the question becomes whether what they learn will be enough to kick them out of any dumb equilibria.
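Continuing the toy sketch above (again, purely hypothetical numbers): if the agent occasionally receives exogenous evidence bearing on the correlation, say from watching other pairs of near-copies play, enough of it can push the belief over the cooperation threshold, after which the agent's own play starts confirming the correlation too.

```python
import random

# Hypothetical continuation of the sketch above: exogenous observations
# (rate and update size made up) can break the self-confirming D/D loop.
random.seed(0)
belief, threshold = 0.2, 0.5
action = "D"
for _ in range(200):
    if random.random() < 0.1:             # occasional outside observation
        belief = min(1.0, belief + 0.05)  # evidence that near-copies correlate
    action = "C" if belief >= threshold else "D"
    if action == "C":
        belief = min(1.0, belief + 0.1)   # own play now also confirms it
print(action, belief)  # with enough outside evidence, ends up cooperating
```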
Thanks for the tip :)