Examples often help. I don’t know if you have a misconception—but a common misconception is that A and B need to share their evidence pertaining to the temperature rise before they can reach agreement.
What Aumann says—counterintuitively—is that no, they just need to share their estimates of the temperature rise with each other repeatedly—so each can update on the other’s updates—and that is all. As the Cowen quote from earlier says:
“In essence his opinion serves as a ‘sufficient statistic’ for all of his evidence”.
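For concreteness, here is a minimal sketch of that back-and-forth in Python. Everything specific in it (the nine-state space, the two partitions, the event being estimated) is an invented toy, not anything from Aumann's paper; the point is only that the agents trade bare estimates, rule out the states inconsistent with what they hear, and converge:

```python
from fractions import Fraction

# Toy model: a uniform common prior over nine states, and an event whose
# probability each agent estimates. All specifics here are illustrative.
STATES = set(range(9))
EVENT = {0, 3, 4, 8}

# Each agent's private evidence is a partition of the state space:
# the agent learns only which cell the true state lies in.
PART_A = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
PART_B = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8}]

def cell(partition, state):
    # the partition cell containing `state` (the agent's private information)
    return next(c for c in partition if state in c)

def announce(partition, public, state):
    # the posterior probability of EVENT this agent would announce, given
    # the publicly known set of still-possible states
    possible = cell(partition, state) & public
    return Fraction(len(possible & EVENT), len(possible))

def dialogue(true_state, max_rounds=10):
    public = set(STATES)  # states nobody has ruled out yet
    for n in range(max_rounds):
        a = announce(PART_A, public, true_state)
        b = announce(PART_B, public, true_state)
        print(f"round {n}: A says {a}, B says {b}")
        if a == b:
            return
        # Each announcement lets everyone rule out the states in which
        # that announcement would have come out differently.
        public = {s for s in public
                  if announce(PART_A, public, s) == a
                  and announce(PART_B, public, s) == b}

dialogue(true_state=4)
# round 0: A says 2/3, B says 1/4
# round 1: A says 1/2, B says 1/2
```

Notice that no raw observations change hands: each agent updates purely on the other's announced estimate, and the disagreement is gone after one exchange.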
(nods) Examples do indeed help.
Suppose agent A has access to observations X1..X10, on the basis of which A concludes a 1-degree temperature rise.
Suppose agent B has access to observations X1..X9, on the basis of which B concludes a 2-degree temperature rise. Suppose further that A and B are both perfect rationalists, that their relevant priors are otherwise completely shared, and that their posterior probabilities are perfectly calibrated to the evidence each has access to.
It follows that if B had access to X10, B would update and conclude a 1-degree rise, just as A does. But neither A nor B knows that.
In this example, A and B aren’t justified in having the conversation you describe, because A’s estimate already takes into account all of B’s evidence, so any updating A does based on the fact of B’s estimate in fact double-counts all of that evidence.
But until A can identify what it is that B knows and doesn’t know, A has no way of confirming that. If they just share their estimates, they haven’t done a fraction of the work necessary to get the best conclusion from the available data… in fact, if that’s all they’re going to do, A was better off not talking to B at all.
Of course, one might say “But we’re supposed to assume common priors, so we can’t say that A has high confidence in X10 while B doesn’t.” But in that case, I’m not sure what caused A and B to arrive at different estimates in the first place.
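To make the example concrete, here is a hedged numerical version of it. The likelihoods and the particular observation string below are invented purely for illustration; the point is only that each agent's announced estimate is a deterministic function of the evidence conditioned on, so B, if handed X10, would reproduce A's number exactly:

```python
from math import prod

PRIOR = {1: 0.5, 2: 0.5}   # common prior over a 1- vs 2-degree rise
P_ONE = {1: 0.2, 2: 0.7}   # P(X_i = 1 | rise); observations i.i.d. given the rise

def posterior(observations):
    # Bayes' rule over the two hypotheses, given a tuple of 0/1 observations
    like = {t: prod(P_ONE[t] if x else 1 - P_ONE[t] for x in observations)
            for t in PRIOR}
    z = sum(PRIOR[t] * like[t] for t in PRIOR)
    return {t: PRIOR[t] * like[t] / z for t in PRIOR}

X = (1, 0, 1, 0, 0, 1, 0, 1, 0, 0)   # X1..X10; X10 is the observation B lacks

post_A = posterior(X)       # A conditions on X1..X10
post_B = posterior(X[:9])   # B conditions on X1..X9 only

for name, post in (("A (X1..X10)", post_A), ("B (X1..X9) ", post_B)):
    guess = max(post, key=post.get)
    print(f"{name}: P(2-degree rise) = {post[2]:.3f} -> concludes {guess} degree(s)")
# A (X1..X10): P(2-degree rise) = 0.294 -> concludes 1 degree(s)
# B (X1..X9) : P(2-degree rise) = 0.527 -> concludes 2 degree(s)

# If B also saw X10, B would condition on exactly A's evidence and therefore
# report exactly A's number: posterior(X[:9] + (0,)) == posterior(X).
```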
I don’t think Aumann’s agreement theorem is about getting “the best conclusion from the available data”. It is about agreement. The idea is not that the exchange produces the most accurate conclusion from all the evidence held by both parties, but rather that their disagreements do not persist for very long.
This paper questions the costs of reaching such an agreement. From the abstract:
“But two key questions went unaddressed: first, can the agents reach agreement after a conversation of reasonable length? Second, can the computations needed for that conversation be performed efficiently? This paper answers both questions in the affirmative, thereby strengthening Aumann’s original conclusion.”
http://portal.acm.org/citation.cfm?id=1060686&preflayout=flat
Huh. In that case, I guess I’m wondering why we care.
That is, if we’re just talking about a mechanism whereby two agents can reach agreement efficiently, and we’re OK with them agreeing on conclusions the evidence doesn’t actually support, isn’t it more efficient to, say, flip a coin and adopt A’s estimate on heads and B’s estimate on tails?
I can’t speak for all those interested, but I think one common theme is that we see much persistent disagreement in the world even when agents share their estimates. Aumann says such disagreement is unlikely to be epistemically rational and honest (although it often purports to be), so what is going on?
Your proposed coin flip is certainly faster than Aumann agreement, but it does not produce results of the same quality. In an Aumann agreement, agents take account of each other’s confidence levels.
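One crude way to see the quality gap: the sketch below is not the Aumann protocol itself. It assumes a Gaussian toy model and uses a precision-weighted pooling rule as a stand-in for a confidence-aware agreed estimate, then compares its mean squared error against the coin flip:

```python
import random

random.seed(0)
N_TRIALS = 100_000
N_A, N_B = 10, 2   # A sees far more evidence than B, so A is more confident

sq_coin = sq_pooled = 0.0
for _ in range(N_TRIALS):
    theta = random.gauss(0.0, 1.0)                     # common prior N(0, 1)
    obs_a = [random.gauss(theta, 1.0) for _ in range(N_A)]
    obs_b = [random.gauss(theta, 1.0) for _ in range(N_B)]

    # posterior mean of a N(0, 1) prior after n unit-noise observations is
    # (sum of observations) / (n + 1); the posterior precision is n + 1
    est_a, prec_a = sum(obs_a) / (N_A + 1), N_A + 1
    est_b, prec_b = sum(obs_b) / (N_B + 1), N_B + 1

    coin = est_a if random.random() < 0.5 else est_b   # the coin-flip rule
    pooled = (prec_a * est_a + prec_b * est_b) / (prec_a + prec_b)

    sq_coin += (coin - theta) ** 2
    sq_pooled += (pooled - theta) ** 2

print(f"mean squared error, coin flip:           {sq_coin / N_TRIALS:.4f}")
print(f"mean squared error, confidence-weighted: {sq_pooled / N_TRIALS:.4f}")
```

The coin flip averages the two agents' error rates, so it inherits half of B's poorly-informed error; any rule that weights by confidence does substantially better, which is the intuition behind the quality claim above.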