I don’t think Aumann’s agreement theorem is about getting “the best conclusion from the available data”. It is about agreement. The idea is not that an exchange produces the most accurate outcome from all the evidence held by both parties, but rather that the agents’ disagreement does not persist for very long.
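For reference, here is the theorem’s standard statement (Aumann 1976); note that it asserts equality of posteriors, not their accuracy:

$$
\text{If agents } A \text{ and } B \text{ have a common prior } P \text{, and their posteriors } q_A = P(E \mid \mathcal{I}_A) \text{ and } q_B = P(E \mid \mathcal{I}_B) \text{ for an event } E \text{ are common knowledge, then } q_A = q_B.
$$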
This post questions the costs of reaching such an agreement. The conventional answer, from the paper cited below, is as follows:
But two key questions went unaddressed: first, can the agents reach agreement after a conversation of reasonable length? Second, can the computations needed for that conversation be performed efficiently? This paper answers both questions in the affirmative, thereby strengthening Aumann’s original conclusion.
http://portal.acm.org/citation.cfm?id=1060686&preflayout=flat
Huh. In that case, I guess I’m wondering why we care.
That is, if we’re just talking about a mechanism whereby two agents can reach agreement efficiently, and we’re OK with them agreeing on conclusions the evidence doesn’t actually support, isn’t it more efficient to, say, flip a coin and adopt A’s estimate on heads and B’s estimate on tails?
I can’t speak for everyone interested, but I think one common theme is this: we see a great deal of persistent disagreement in the world even when agents share their estimates. Aumann’s result says such disagreement is unlikely to be both epistemically rational and honest (although it often purports to be), so what is going on?
Your proposed coin flip is certainly faster than Aumann agreement, but it does not produce results of nearly the same quality. In an Aumann agreement, each agent takes account of the other’s confidence level.
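To make the contrast concrete, here is a minimal simulation sketch (all parameter values are made up for illustration). Two agents with a common Beta(1, 1) prior privately observe tosses of the same coin; we compare adopting one agent’s estimate by coin flip against pooling the evidence, which is what taking account of each other’s confidence amounts to in this conjugate toy model. It is not the message-passing protocol of Aumann’s theorem itself.

```python
import random

def simulate(trials=10_000, n_a=20, n_b=80, true_p=0.7, seed=0):
    """Toy comparison of two ways to resolve a disagreement about a coin's bias.

    Hypothetical setup: agents A and B share a uniform Beta(1, 1) prior and
    privately observe n_a and n_b tosses of the same coin.  The pooling step
    below is a conjugate-prior shortcut, not Aumann's actual protocol; it is
    only meant to show why weighting by confidence beats a coin flip.
    """
    rng = random.Random(seed)
    err_flip = err_pool = 0.0
    for _ in range(trials):
        heads_a = sum(rng.random() < true_p for _ in range(n_a))
        heads_b = sum(rng.random() < true_p for _ in range(n_b))
        # Each agent's posterior mean under Beta(1, 1): (heads + 1) / (n + 2).
        est_a = (heads_a + 1) / (n_a + 2)
        est_b = (heads_b + 1) / (n_b + 2)
        # Coin-flip protocol: adopt one agent's estimate at random.
        flip_est = est_a if rng.random() < 0.5 else est_b
        # Confidence-weighted pooling: sharing sufficient statistics gives the
        # pooled posterior Beta(1 + heads_a + heads_b, 1 + tails_a + tails_b),
        # which automatically weights the better-informed agent more heavily.
        pool_est = (heads_a + heads_b + 1) / (n_a + n_b + 2)
        err_flip += (flip_est - true_p) ** 2
        err_pool += (pool_est - true_p) ** 2
    print(f"mean squared error, coin flip: {err_flip / trials:.5f}")
    print(f"mean squared error, pooled:    {err_pool / trials:.5f}")

if __name__ == "__main__":
    simulate()
```

With these assumed numbers the pooled estimate has a much lower mean squared error, since the coin flip discards the better-informed agent’s extra evidence half the time.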