Why can’t we reach agreement by updating on the beliefs of others, as per Aumann’s agreement theorem? Why do we need to describe how we arrived at our beliefs, instead of simply stating them? In fact, in ordinary conversation no one even tries to use Aumann-style negotiation, and from my perspective this seems completely rational. Even when talking to people whose rationality I trust, assuming common knowledge of rationality or honesty is never close to accurate; if we abandon skepticism we can expect to be consistently wrong (even when we aren’t deliberately manipulated).
This is an interesting problem. I agree that Aumann-style updating in practice will systematically lead to less correct beliefs, but you don’t seem to analyse why that is. My hunch is that the reason is our inability to judge other people’s rationality well, but I’d like to know what you think about it. The explanation
we aren’t sufficiently perfect rationalists to use Aumann agreement
doesn’t tell us much.
Aumann agreement requires knowing one’s priors and posteriors. Actually knowing them, i.e. being able to state the actual numbers. But no one can do this.
No one can do this exactly, but why isn’t some approximation effective? To update in a Bayesian way we need to know our priors too, and not being able to state the numbers precisely isn’t seen as a reason to avoid Bayesian updating in a wide class of practical situations.
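To make that concrete, here is a minimal sketch in Python (the numbers are hypothetical, not from anything above) of what approximate Bayesian updating looks like when the prior is only known roughly: update over a range of plausible priors and check whether the conclusion survives.

```python
# Minimal sketch, hypothetical numbers: Bayesian updating with an imprecise prior.
# We update a binary hypothesis H on evidence E for several plausible priors
# and see how much the conclusion depends on the exact value.

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior_h
    return numerator / (numerator + p_e_given_not_h * (1.0 - prior_h))

# Suppose all we can honestly say is that the prior is "somewhere around 0.2 to 0.4".
for prior in (0.2, 0.3, 0.4):
    print(f"prior={prior:.1f}  posterior={posterior(prior, 0.9, 0.1):.3f}")
# Prints 0.692, 0.794, 0.857: the posteriors differ, but every plausible prior
# moves the same way, which is the sense in which imprecise priors still
# support Bayesian updating in practice.
```

The sketch only shows that a posterior computed from a vague prior is still informative; it says nothing about the common-knowledge requirements discussed next.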
I would guess the problem isn’t approximation, it’s the common knowledge no one has. You can’t approximate that, and if anyone suspects it may not be completely true (which they should, since it isn’t), the result falls apart entirely.
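For concreteness, here is the standard definition of common knowledge (the notation is mine, not from the comment above). Write $E(A)$ for “everyone knows $A$”. Then $A$ is common knowledge when the entire infinite hierarchy holds:

$$C(A) = E(A) \cap E(E(A)) \cap E(E(E(A))) \cap \cdots = \bigcap_{n \ge 1} E^n(A).$$

Truncating this at any finite depth, such as the two or three levels mentioned later in the thread, is in general a strictly weaker condition, which is one way to see why “approximately common knowledge” is not a small perturbation of the real thing.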
I forgot the other requirement, and the more onerous one, for Aumann agreement: the two people’s priors must already agree. This is absolutely unrealistic.
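For reference, here is the standard statement of the result being discussed (Aumann, 1976), which makes both requirements explicit: if two agents share a common prior $p$ on a state space $\Omega$ and have information partitions $\mathcal{P}_1$ and $\mathcal{P}_2$, and if at the true state $\omega$ their posteriors for an event $A$,

$$q_i = p\big(A \mid \mathcal{P}_i(\omega)\big), \qquad i = 1, 2,$$

are common knowledge, then $q_1 = q_2$. The common prior and the common knowledge of the posteriors are exactly the two hypotheses this thread doubts are ever satisfied.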
Strong Bayesians may say that there is a unique universal prior that every perfect Bayesian reasoner must have, but until that prior can be exhibited, Aumann agreement must remain a mirage. No one has exhibited that prior.
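The candidate usually meant by “unique universal prior” (this is my gloss; the comment above does not name it) is Solomonoff’s prior, which weights a string $x$ by the programs that produce it on a fixed universal machine $U$:

$$M(x) = \sum_{p \,:\, U(p)\ \text{begins with}\ x} 2^{-|p|}.$$

It is uncomputable (only approximable from below), which gives one concrete reading of “cannot be exhibited”.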
Perfectly updating on the basis of others’ opinions is rare; I have never met anyone who could credibly claim to do it correctly (I can’t update correctly on the basis of almost anything). Common knowledge of this ability seems impossible to come by. Common knowledge that neither of you is trying to manipulate the other (or speaking for signaling reasons) is also completely non-existent: I have never been in an environment where more than two or three levels of non-manipulation were known.
Aumann agreement requires common knowledge of priors. Since we don’t even know our own priors, Aumann agreement is not possible.