Aumann’s agreement theorem—I don’t claim to know all its generalizations—assumes that the two parties have the same priors, and that each knows the other’s “information partition” (i.e., what states of the world the other can distinguish). It also assumes that their knowledge of one another’s posteriors is “common knowledge” in a technical sense. It also assumes that both parties are perfect Bayesians and that this too is “common knowledge”. I see no reason to assume that any of these is true, given MBlume’s description of the situation. (In particular, the assumption regarding what each knows about the other seems to me, from Aumann’s description, to imply more detailed knowledge of one another’s cognitive faculties than any human has of any other’s.)
Clearly someone is failing (very broadly understood) at something, since at least one of the two doctors assigns 99% probability to something untrue. But, e.g., the following is perfectly consistent with the scenario as described (although unlikely):
Both doctors are superlatively intelligent, skillful and well informed (about medicine). Both have done the same, entirely sensible, tests; one has been the victim of extreme bad luck and got evidence at the 99.5% level for a wrong diagnosis. Both have also been victims of further mischance, and each has (unknown to the other) got evidence at the 99.5% level that the other is horrendously incompetent even though that is not actually true. (We can agree, I hope, that all this is possible, albeit very unlikely?)
Now each considers the evidence. A, before learning B’s verdict:

P(malaria & B incompetent) = 0.995 × 0.995 ≈ 0.99
P(malaria & B competent) = 0.995 × 0.005 ≈ 0.005
P(bird flu & B incompetent) = 0.005 × 0.995 ≈ 0.005
P(bird flu & B competent) = 0.005 × 0.005 = 0.000025

Now for a Bayesian update based on B’s opinion. Some kinda-plausible figures:

P(B gets given result | malaria & B incompetent) = 0.1
P(B gets given result | malaria & B competent) = 0.001
P(B gets given result | bird flu & B incompetent) = 0.1
P(B gets given result | bird flu & B competent) = 0.99
So the new odds are roughly 0.099 : 0.000005 : 0.0005 : 0.000025, giving a probability of about 99.5% for the “malaria & B incompetent” option.
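For concreteness, here is the same arithmetic as a small script (a sketch of my own; the figures are just the illustrative ones above, and the hypothesis labels are only for readability):

```python
# A's update on hearing B's verdict, using the kinda-plausible figures above.
priors = {
    "malaria & B incompetent":  0.995 * 0.995,   # ~0.99
    "malaria & B competent":    0.995 * 0.005,   # ~0.005
    "bird flu & B incompetent": 0.005 * 0.995,   # ~0.005
    "bird flu & B competent":   0.005 * 0.005,   # 0.000025
}
likelihoods = {  # P(B reports bird flu | hypothesis)
    "malaria & B incompetent":  0.1,
    "malaria & B competent":    0.001,
    "bird flu & B incompetent": 0.1,
    "bird flu & B competent":   0.99,
}
unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalised.values())
for h in priors:
    print(f"{h}: {unnormalised[h] / total:.4f}")
# "malaria & B incompetent" comes out at about 0.995, as claimed.
```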
B goes through an exactly parallel calculation, favouring “bird flu & A incompetent”.
Both doctors have been unlucky, but neither has been irrational. Those who think Aumann’s theorem requires one of them to have been irrational given the data: please explain what’s impossible in the above scenario.
If each doctor has gotten evidence at the 99.5% level that the other is horrendously incompetent, then each should have no problem using that evidence to convince the other of their incompetence. (Unless one of them has additional evidence in defence of their own competence, in which case they will agree on how that additional evidence should change the assessment.) The idea being that with sufficient communication they end up with exactly the same information and thus must reach the same conclusions.
On the other hand, the requirement of the same priors is interesting. Mightn’t this be how they could rationally come to different conclusions?
Aumann’s theorem itself doesn’t say anything about “with sufficient communication”; that’s just one possible way for them to make the relevant stuff “common knowledge”. (Also, remember that the thing about Aumann’s theorem is that the two parties are supposed not to have to share their actual evidence with one another—only their posterior probabilities. And, indeed, only their posterior probabilities for the single event whose probability they are to end up agreeing about.)
The scenario described in the original post here doesn’t say anything about there being “sufficient communication” either.
It seems to me that Aumann’s theorem is one of those (Goedel’s incompleteness theorem is notoriously one, to a much greater extent) where “everyone knows” a simple one-sentence version of it, which sounds exciting and dramatic and fraught with conclusions directly relevant to daily life, and which also happens to be quite different from the actual theorem.
But maybe some of those generalizations of Aumann’s theorem really do amount to saying that rational people can’t agree to disagree. If someone reading this is familiar with a presentation of some such generalization that actually provides details and a proof, I’d be very interested to know.
(For instance, is Hanson’s paper on “savvy Bayesian wannabes” an example? Brief skimming suggests that it still involves technical assumptions that might amount to drastic unrealism about how much the two parties know about one another’s cognitive faculties, and that its conclusion isn’t all that strong in any case—it basically seems to say that if A and B agree to disagree in the sense Hanson defines then they are also agreeing to disagree about how well they think, which doesn’t seem very startling to me even if it turns out to be true without heavy technical conditions.)
Thank you for the clarification: Aumann’s theorem does not assume that the two parties have the same information; they just know each other’s posteriors. After reading the original paper, I understand that the consensus comes about iteratively in the following way: they know each other’s conclusions (posteriors). If those conclusions differ, each must infer that the other has different information, and each adjusts their posterior to some extent to account for that unseen information. They then compare posteriors again. If they still differ, each concludes that the other’s evidence must have been stronger than estimated, and recalculates. So without ever sharing the information itself, they deduce its net effect by mutually comparing posteriors.
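To make that concrete, here is a toy version of that back-and-forth (my own sketch, in the spirit of the Geanakoplos–Polemarchakis “we can’t disagree forever” dialogue; the state space, partitions and event are invented purely for illustration):

```python
from fractions import Fraction

def cell(partition, state):
    """The block of `partition` containing `state`."""
    return next(block for block in partition if state in block)

def posterior(event, info):
    """P(event | info), assuming a uniform common prior over the states."""
    return Fraction(len(event & info), len(info))

def dialogue(event, partition_a, partition_b, true_state, max_rounds=20):
    public = set().union(*partition_a)   # nothing is ruled out publicly yet
    for round_no in range(1, max_rounds + 1):
        announced = []
        for mine in (partition_a, partition_b):
            my_info = cell(mine, true_state) & public
            q = posterior(event, my_info)
            announced.append(q)
            # Hearing q, the other party keeps only the states that would
            # have led this party to announce q.
            public = {s for s in public
                      if posterior(event, cell(mine, s) & public) == q}
        print(f"round {round_no}: posteriors {announced[0]} vs {announced[1]}")
        if announced[0] == announced[1]:
            return announced[0]

# Invented example: nine equally likely states, event E = {0, 4, 8}, true state 4.
A = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
B = [{0, 1, 2, 3}, {4, 5, 6, 7}, {8}]
dialogue({0, 4, 8}, A, B, true_state=4)   # posteriors converge to 1/2
```

The posteriors start out different (1/3 vs 1/4 here) and are driven together purely by each side seeing the other’s announced number, never the underlying evidence.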
In Aumann’s original paper, the statement of the theorem doesn’t involve any assumption that the two parties have performed any sort of iterative procedure. In informal explanations of why the result makes sense, such iterative procedures are usually described. I think this illustrates the point that the innocuous-sounding description of what the two parties are supposed to know (“their posteriors are common knowledge”) conceals more than meets the eye: to get a situation where anything like it is true, you need to assume that they’ve been through some higher-quality information exchange procedure.
The proof looks at a sequence of posteriors p1, p2, etc., resulting from successive levels of knowledge about knowledge of the other’s posteriors. However, these are shown to be equal, so in a sense all of the iteration happens simultaneously. -- Actually, I looked at it again and I’m not so sure this is true; it’s how I understand it.
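For what it’s worth, here is the core of the argument as I remember it (a compressed sketch in my own notation, not a quotation from the paper). Write $P$ for the common prior, $E$ for the event, and let $M$ be the member of the meet (the finest common coarsening) of the two partitions that contains the true state. Saying the posteriors $q_1$ and $q_2$ are common knowledge amounts to saying that the event “1’s posterior is $q_1$ and 2’s posterior is $q_2$” contains all of $M$. Since $M$ is a union of cells $C$ of 1’s partition, each with $P(E \mid C) = q_1$,

$$P(E \mid M) \;=\; \sum_{C \subseteq M} \frac{P(C)}{P(M)}\, P(E \mid C) \;=\; q_1,$$

and symmetrically $P(E \mid M) = q_2$; hence $q_1 = q_2$. No explicit iteration appears anywhere; it is all packed into the common-knowledge assumption.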