Aumann’s theorem itself doesn’t say anything about “with sufficient communication”; that’s just one possible way for them to make the relevant stuff “common knowledge”. (Also, remember that the thing about Aumann’s theorem is that the two parties are supposed not to have to share their actual evidence with one another—only their posterior probabilities. And, indeed, only their posterior probabilities for the single event whose probability they are to end up agreeing about.)
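For reference, here is a rough formal statement of what the theorem actually says (my paraphrase of Aumann 1976; $\mathcal{P}_i(\omega)$ denotes agent $i$'s information cell at the true state $\omega$, and both agents share a common prior $P$):

```latex
\text{If it is common knowledge at } \omega \text{ that }
P(E \mid \mathcal{P}_1(\omega)) = q_1
\text{ and } P(E \mid \mathcal{P}_2(\omega)) = q_2,
\text{ then } q_1 = q_2.
```

Note that the hypothesis is common knowledge of the two posterior *numbers* for the single event $E$, not of the underlying evidence; and the theorem is silent about how that common knowledge is supposed to come about.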
The scenario described in the original post here doesn’t say anything about there being “sufficient communication” either.
It seems to me that Aumann’s theorem is one of those theorems (Goedel’s incompleteness theorem is notoriously one, to a much greater extent) where “everyone knows” a simple one-sentence version of it, which sounds exciting and dramatic and fraught with conclusions directly relevant to daily life, and which also happens to be quite different from the actual theorem.
But maybe some of those generalizations of Aumann’s theorem really do amount to saying that rational people can’t agree to disagree. If someone reading this is familiar with a presentation of some such generalization that actually provides details and a proof, I’d be very interested to know.
(For instance, is Hanson’s paper on “savvy Bayesian wannabes” an example? Brief skimming suggests that it still involves technical assumptions that might amount to drastic unrealism about how much the two parties know about one another’s cognitive faculties, and that its conclusion isn’t all that strong in any case—it basically seems to say that if A and B agree to disagree in the sense Hanson defines then they are also agreeing to disagree about how well they think, which doesn’t seem very startling to me even if it turns out to be true without heavy technical conditions.)
Thank you for the clarification: Aumann’s theorem does not assume that the two people have the same information, only that they know each other’s posteriors. After reading the original paper, my understanding is that the consensus comes about iteratively in the following way: they know each other’s conclusions (posteriors). If their conclusions differ, each must infer that the other has different information, and each adjusts their posterior to some extent to account for that different, unknown information. They then compare posteriors again. If these still differ, each concludes that the other’s evidence must have been stronger than estimated, and recalculates. So without ever sharing the information itself, they deduce its net effect by repeatedly comparing posteriors.
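The iterative story above can be simulated directly. This is a sketch of the Geanakoplos–Polemarchakis “we can’t disagree forever” dynamic, which formalizes the informal exchange; the state space, partitions, and event below are my own illustrative choices, not taken from Aumann’s paper:

```python
# Toy simulation: two agents with private partitions of a finite state
# space repeatedly announce their posteriors for one event; each
# announcement lets the other agent refine their information.
from fractions import Fraction

OMEGA = {1, 2, 3, 4}          # finite state space, uniform prior
EVENT = {1, 4}                # the event whose probability they discuss

# Each agent's private information is a partition of OMEGA.
part_a = [{1, 2}, {3, 4}]
part_b = [{1, 2, 3}, {4}]
true_state = 1

def cell(partition, state):
    """The agent's information set: the partition cell containing `state`."""
    return next(c for c in partition if state in c)

def posterior(info_set, event):
    """P(event | info_set) under the uniform prior."""
    return Fraction(len(info_set & event), len(info_set))

def announcement_partition(partition, event):
    """Group states by the posterior the agent would announce there."""
    groups = {}
    for w in OMEGA:
        groups.setdefault(posterior(cell(partition, w), event), set()).add(w)
    return list(groups.values())

def join(p1, p2):
    """Common refinement of two partitions: after hearing an announcement,
    your new cell is your old cell intersected with the announcement set."""
    return [a & b for a in p1 for b in p2 if a & b]

# Exchange posteriors until they coincide. Neither agent ever sees the
# other's evidence, only the announced number.
while True:
    qa = posterior(cell(part_a, true_state), EVENT)
    qb = posterior(cell(part_b, true_state), EVENT)
    print(f"A announces {qa}, B announces {qb}")
    if qa == qb:
        break
    ann_a = announcement_partition(part_a, EVENT)
    ann_b = announcement_partition(part_b, EVENT)
    part_a = join(part_a, ann_b)   # A refines on B's announcement
    part_b = join(part_b, ann_a)   # B refines on A's announcement
```

With these particular partitions the agents open at 1/2 versus 1/3 and converge to a shared posterior of 1/2 after two rounds of refinement, which illustrates the point: agreement is reached by deducing the net effect of the other’s evidence from the announced numbers alone.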
In Aumann’s original paper, the statement of the theorem doesn’t involve any assumption that the two parties have performed any sort of iterative procedure. In informal explanations of why the result makes sense, such iterative procedures are usually described. I think this illustrates the point that the innocuous-sounding description of what the two parties are supposed to know (“their posteriors are common knowledge”) conceals more than meets the eye: to get a situation where anything like it is true, you need to assume that they’ve been through some higher-quality information exchange procedure.
The proof looks at a sequence of posteriors p1, p2, etc., resulting from successive levels of knowledge about knowledge of the other’s posteriors. However, these are shown to be equal, so in a sense all of the iterations happen simultaneously. -- Actually, I looked at it again and I’m not so sure this is true; it’s just how I understand it.