A relevant result is Aumann’s agreement theorem, and its offshoots in which two Bayesians repeating their probability judgements back and forth converge on a common belief. Note, though, that this common belief isn’t always the one they would hold if they both knew all of each other’s observations: if we each privately flip a coin and state our probabilities that we got the same result, we’ll spend all day saying 50% without ever learning the answer. Nevertheless, you shouldn’t expect probabilities to badly asymptote in expectation.
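(A quick sketch of my own, just to make the coin example concrete: enumerating the four equally likely worlds shows that each agent’s posterior that the flips match, given only their own flip, is 0.5 in every world. Since every agent announces 0.5 no matter what they flipped, the announcement carries no information, so the agreed-on belief stays at 0.5 even though pooling the two flips would settle the question immediately.)

```python
from itertools import product

# Enumerate the four equally likely worlds (Alice's flip, Bob's flip).
worlds = list(product("HT", repeat=2))

def posterior_match(own_index, own_flip):
    # P(flips match | my own flip) under the uniform prior.
    consistent = [w for w in worlds if w[own_index] == own_flip]
    return sum(w[0] == w[1] for w in consistent) / len(consistent)

for alice, bob in worlds:
    p_alice = posterior_match(0, alice)   # Alice's announced probability
    p_bob = posterior_match(1, bob)       # Bob's announced probability
    pooled = float(alice == bob)          # what they'd know if they shared flips
    print(f"world {alice}{bob}: Alice {p_alice}, Bob {p_bob}, pooled {pooled}")

# Every world prints 0.5 for both agents: they already agree, but not on
# the answer they'd reach by pooling their observations.
```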
This makes me think you’ll want to consider boundedly rational models where people can only recurse 3 times, or something. [ETA: or models where some participants in the discourse are adversarial, as in this paper].