I was not using the term “common knowledge” the same way Aumann’s paper does. I was basing my use of the term on what I found in the LessWrong wiki. I treated common knowledge and common priors as essentially the same object in my post. Having the same priors seems to require that “all pertinent knowledge” be known by both parties or known in common (this is how I used the term in my post), or a large coincidence where two pertinent, at least partially non-overlapping bodies of evidence lead to the same priors.
Imagine I think there are 200 balls in the urn, but Robin Hanson thinks there are 300 balls in the urn. Once Robin tells me his estimate, and I tell him mine, we should converge upon a common opinion. In essence his opinion serves as a “sufficient statistic” for all of his evidence.
Maybe I do not understand priors correctly. In the provided example it seems like Robin Hanson and the author have different priors. These two cases seem to parallel what are considered two separate priors in the LessWrong wiki:
Suppose you had a barrel containing some number of red and white balls. If you start with the belief that each ball was independently assigned red color (vs. white color) at some fixed probability between 0 and 1, and you start out ignorant of this fixed probability (the parameter could be anywhere between 0 and 1), then each red ball you see makes it more likely that the next ball will be red. (By Laplace’s Rule of Succession.)
On the other hand, if you start out with the prior belief that the barrel contains exactly 10 red balls and 10 white balls, then each red ball you see makes it less likely that the next ball will be red (because there are fewer red balls remaining).
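To make the contrast between those two priors concrete, here is a minimal sketch in Python, using the 10-red/10-white numbers from the quote (the function names are mine). Under Laplace’s rule each red ball seen raises the probability that the next ball is red; under the fixed-composition prior it lowers it:

```python
from fractions import Fraction

def laplace_next_red(reds_seen, draws):
    # Laplace's Rule of Succession: uniform prior over the unknown
    # per-ball red probability, balls colored independently.
    return Fraction(reds_seen + 1, draws + 2)

def fixed_urn_next_red(reds_seen, draws, reds=10, whites=10):
    # Fixed-composition prior: exactly `reds` red and `whites` white
    # balls in the barrel, drawn without replacement.
    return Fraction(reds - reds_seen, reds + whites - draws)

for k in range(4):  # draw k red balls in a row
    print(k, laplace_next_red(k, k), fixed_urn_next_red(k, k))
# Laplace:   1/2, 2/3, 3/4, 4/5   (each red makes the next red MORE likely)
# Fixed urn: 1/2, 9/19, 4/9, 7/17 (each red makes the next red LESS likely)
```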
“Common knowledge” is a highly misleading piece of technical terminology—in the context of Aumann’s paper.
A two-person Aumann agreement exchange example (of degrees C warming next century) looks like:
A: I think it’s 1.0 degrees...
B: Well, I think it’s 2.0 degrees...
A: Well, in that case, I think it’s 1.2 degrees...
B: Well, in that case, I think it’s 1.99 degrees...
A: Well...
The information exchanged is not remotely like all the pertinent knowledge—and so making the exchange is often relatively quick and easy.
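For the curious, here is a minimal sketch of that kind of exchange, simulated as the Geanakoplos–Polemarchakis announcement process. The nine-state world, the event, and the two partitions below are invented for illustration; nothing here comes from the thread itself. Each agent repeatedly announces their posterior for an event, and each announcement lets the other agent rule out the states in which it would not have been made:

```python
from fractions import Fraction

STATES = range(1, 10)                 # nine equally likely world-states
PRIOR = {s: Fraction(1, 9) for s in STATES}
EVENT = {3, 4}                        # the event whose probability is announced

# Invented private-information partitions for the two agents:
PARTITION_A = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
PARTITION_B = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def cell(partition, state):
    return next(c for c in partition if state in c)

def posterior(possible):
    return sum(PRIOR[s] for s in possible & EVENT) / sum(PRIOR[s] for s in possible)

def exchange(true_state, max_rounds=20):
    # poss_x[s] = the states agent X would consider possible if s were true;
    # tracking this for every s lets each agent decode the other's announcements.
    poss_a = {s: set(cell(PARTITION_A, s)) for s in STATES}
    poss_b = {s: set(cell(PARTITION_B, s)) for s in STATES}
    for rnd in range(1, max_rounds + 1):
        est_a, est_b = posterior(poss_a[true_state]), posterior(poss_b[true_state])
        print(f"round {rnd}: A says {est_a}, B says {est_b}")
        if est_a == est_b:
            return est_a
        # An announcement reveals the set of states in which the announcer
        # would have said exactly that; each agent keeps only matching states.
        said_a = {s: posterior(poss_a[s]) for s in STATES}
        said_b = {s: posterior(poss_b[s]) for s in STATES}
        poss_a = {s: {t for t in poss_a[s] if said_b[t] == said_b[s]} for s in STATES}
        poss_b = {s: {t for t in poss_b[s] if said_a[t] == said_a[s]} for s in STATES}

exchange(true_state=1)
```

With these numbers, A and B open with 1/3 vs. 1/2, repeat themselves once (even a repeated announcement carries information here), and agree on 1/3 in the third round, without ever exchanging their underlying observations.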
That is not the definition at the top of the paper you just linked for me: http://www.ma.huji.ac.il/~raumann/pdf/Agreeing%20to%20Disagree.pdf
“Two people, 1 and 2, are said to have common knowledge of an event E if both know it, 1 knows that 2 knows it, 2 knows that 1 knows it, 1 knows that 2 knows that 1 knows it, and so on.”
Or I may have missed your point entirely. You have introduced the concept of “Aumann agreement exchange” but not what misconception you are trying to clear up with it.
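As an aside on that quoted definition: Aumann’s paper formalizes it with information partitions. An event is common knowledge at a state exactly when the cell of the meet of the two partitions (their finest common coarsening) containing that state lies inside the event. A minimal sketch, with an invented five-state example:

```python
# Sketch of Aumann's partition formalization of common knowledge.
# The partitions and events below are made up for illustration.

def meet(partition_a, partition_b, states):
    # Flood fill: states are linked if they share a cell in either
    # partition; the connected components are the meet's cells.
    remaining, cells = set(states), []
    while remaining:
        component = {remaining.pop()}
        changed = True
        while changed:
            changed = False
            for part in (partition_a, partition_b):
                for c in part:
                    if c & component and not c <= component:
                        component |= c
                        changed = True
        remaining -= component
        cells.append(component)
    return cells

def common_knowledge(event, w, partition_a, partition_b, states):
    component = next(c for c in meet(partition_a, partition_b, states) if w in c)
    return component <= event

STATES = {1, 2, 3, 4, 5}
A = [{1, 2}, {3}, {4, 5}]
B = [{1}, {2, 3}, {4, 5}]
print(common_knowledge({4, 5}, 4, A, B, STATES))  # True: meet cell {4,5}
# Both agents know {1,2} at state 1, but A can't rule out state 2,
# where B would consider state 3 possible, so the "knows that knows"
# chain fails and {1,2} is not common knowledge (meet cell is {1,2,3}):
print(common_knowledge({1, 2}, 1, A, B, STATES))  # False
```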
Examples often help. I don’t know if you have a misconception—but a common misconception is that A and B need to share their evidence pertaining to the temperature rise before they can reach agreement.
What Aumann says—counterintuitively—is that no, they just need to share their estimates of the temperature rise with each other repeatedly—so each can update on the other’s updates—and that is all. As the Cowen quote from earlier says:
“In essence his opinion serves as a ‘sufficient statistic’ for all of his evidence”.
(nods) Examples do indeed help.
Suppose agent A has access to observations X1..X10, on the basis of which A concludes a 1-degree temperature rise.
Suppose agent B has access to observations X1..X9, on the basis of which B concludes a 2-degree temperature rise, and A and B are both perfect rationalists whose relevant priors are otherwise completely shared and whose posterior probabilities are perfectly calibrated to the evidence they have access to.
It follows that if B had access to X10, B would update and conclude a 1-degree rise. But neither A nor B knows that.
In this example, A and B aren’t justified in having the conversation you describe, because A’s estimate already takes into account all of B’s evidence, so any updating A does based on the fact of B’s estimate in fact double-counts all of that evidence.
But until A can identify what it is that B knows and doesn’t know, A has no way of confirming that. If they just share their estimates, they haven’t done a fraction of the work necessary to get the best conclusion from the available data… in fact, if that’s all they’re going to do, A was better off not talking to B at all.
Of course, one might say “But we’re supposed to assume common priors, so we can’t say that A has high confidence in X10 while B doesn’t.” But in that case, I’m not sure what caused A and B to arrive at different estimates in the first place.
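A toy version of this example, for concreteness. This is a minimal sketch under an assumed conjugate normal model; none of the numbers are from the discussion. When B’s evidence is a subset of A’s, the correct joint conclusion is just A’s estimate, and treating B’s announcement as fresh evidence (for example, by averaging) double-counts X1..X9:

```python
import random

random.seed(0)
theta = random.gauss(0, 1)                            # true temperature rise
xs = [theta + random.gauss(0, 1) for _ in range(10)]  # observations X1..X10

def posterior_mean(obs):
    # Conjugate update for theta ~ N(0,1), each obs ~ N(theta, 1):
    # posterior mean = sum(obs) / (n + 1).
    return sum(obs) / (len(obs) + 1)

est_a = posterior_mean(xs)       # A conditions on X1..X10
est_b = posterior_mean(xs[:9])   # B conditions on X1..X9 only

# The correct combination of *all* the evidence is just est_a, since
# B's evidence is a subset of A's. Averaging counts X1..X9 twice.
naive = (est_a + est_b) / 2
print(f"A: {est_a:.3f}   B: {est_b:.3f}   naive average: {naive:.3f}")
```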
I don’t think Aumann’s agreement theorem is about getting “the best conclusion from the available data”. It is about agreement. The idea is not that an exchange produces the most accurate outcome from all the evidence held by both parties—but rather that their disagreements do not persist for very long.
This paper examines the costs of reaching such an agreement. From its abstract:
http://portal.acm.org/citation.cfm?id=1060686&preflayout=flat
“But two key questions went unaddressed: first, can the agents reach agreement after a conversation of reasonable length? Second, can the computations needed for that conversation be performed efficiently? This paper answers both questions in the affirmative, thereby strengthening Aumann’s original conclusion.”
Huh. In that case, I guess I’m wondering why we care.
That is, if we’re just talking about a mechanism whereby two agents can reach agreement efficiently and we’re OK with them agreeing on conclusions the evidence doesn’t actually support, isn’t it more efficient to, for example, agree to flip a coin and agree on A’s estimate if heads, and B’s estimate if tails?
I can’t speak for all those interested, but I think one common theme is this: we see much persistent disagreement in the world even when agents share their estimates. Aumann says such disagreement is unlikely to be epistemically rational and honest (although it often purports to be), so what is going on?
Your proposed coin-flip is certainly faster than an Aumann agreement exchange, but it does not offer results of the same quality. In an Aumann agreement, agents take account of each other’s confidence levels.
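To put a number on that, here is a minimal sketch. The assumed setup is the same toy normal model as the earlier sketch, with a well-informed A (ten observations) and a poorly informed B (two); the combination rule is an ordinary precision-weighted average, not anything from Aumann’s paper itself. The coin-flip rule ignores confidence; weighting by amount of evidence does measurably better:

```python
import random

random.seed(1)

def posterior_mean(obs):
    # Same conjugate normal model as before: theta ~ N(0,1), obs ~ N(theta,1).
    return sum(obs) / (len(obs) + 1)

def trial():
    theta = random.gauss(0, 1)
    est_a = posterior_mean([theta + random.gauss(0, 1) for _ in range(10)])  # confident A
    est_b = posterior_mean([theta + random.gauss(0, 1) for _ in range(2)])   # unconfident B
    coin = random.choice([est_a, est_b])      # the proposed coin-flip rule
    pooled = (11 * est_a + 3 * est_b) / 14    # weight by posterior precision, n + 1
    return (coin - theta) ** 2, (pooled - theta) ** 2

results = [trial() for _ in range(10_000)]
coin_mse = sum(c for c, _ in results) / len(results)
pooled_mse = sum(p for _, p in results) / len(results)
print(f"coin-flip MSE: {coin_mse:.3f}   precision-weighted MSE: {pooled_mse:.3f}")
```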