“Well, it need hardly be said that someone here is failing at rationality.”
No. The given data does not require that either of the two individuals has failed to be rational.
Voted down for bald assertion with no argument.
MBlume is relying on Aumann’s (dis)agreement theorem, which is generally assumed knowledge around these parts. If you don’t think it (or any of its generalizations) applies here, please say why.
Aumann’s agreement theorem—I don’t claim to know all its generalizations—assumes that the two parties have the same priors, and that each knows the other’s “information partition” (i.e., what states of the world the other can distinguish). It also assumes that their knowledge of one another’s posteriors is “common knowledge” in a technical sense. It also assumes that both parties are perfect Bayesians and that this too is “common knowledge”. I see no reason to assume that any of these is true, given MBlume’s description of the situation. (In particular, the assumption regarding what each knows about the other seems to me, from Aumann’s description, to imply more detailed knowledge of one another’s cognitive faculties than any human has of any other’s.)
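For reference, here is the bare statement being appealed to, in my own notation rather than Aumann’s (p is the common prior, P_1(ω) and P_2(ω) are the cells of the two information partitions containing the true state ω, and E is the event in question); this is a paraphrase of Aumann (1976), not a quotation:

    \[
      \text{If } p(E \mid P_1(\omega)) = q_1 \text{ and } p(E \mid P_2(\omega)) = q_2
      \text{ are common knowledge at } \omega, \text{ then } q_1 = q_2 .
    \]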
Clearly someone is failing (very broadly understood) at something, since at least one of the two doctors assigns 99% probability to something untrue. But, e.g., the following is perfectly consistent with the scenario as described (although unlikely):
Both doctors are superlatively intelligent, skillful and well informed (about medicine). Both have done the same, entirely sensible, tests; one has been the victim of extreme bad luck and got evidence at the 99.5% level for a wrong diagnosis. Both have also been victims of further mischance, and each has (unknown to the other) got evidence at the 99.5% level that the other is horrendously incompetent even though that is not actually true. (We can agree, I hope, that all this is possible, albeit very unlikely?)
Now each considers the evidence. A, before learning B’s verdict:
P(malaria & B incompetent) = 0.995 × 0.995 ≈ 0.99
P(malaria & B competent) = 0.995 × 0.005 ≈ 0.005
P(bird flu & B incompetent) = 0.005 × 0.995 ≈ 0.005
P(bird flu & B competent) = 0.005 × 0.005 = 0.000025
Now for a Bayesian update based on B’s opinion. Some kinda-plausible figures:
P(B gets given result | malaria & B incompetent) = 0.1
P(B gets given result | malaria & B competent) = 0.001
P(B gets given result | bird flu & B incompetent) = 0.1
P(B gets given result | bird flu & B competent) = 0.99
So the new odds are roughly 0.099 : 0.000005 : 0.0005 : 0.000025, giving a probability of about 99.5% for the “malaria & B incompetent” option.
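For anyone who wants to check the arithmetic, here is a small Python snippet that reproduces the update above; the four hypotheses and every number in it are exactly the figures just given, nothing is added.

    # A's update: prior over the four joint hypotheses, times the likelihood of
    # B's announced verdict under each hypothesis, then normalised.
    priors = {
        ("malaria", "B incompetent"): 0.995 * 0.995,   # ~0.99
        ("malaria", "B competent"):   0.995 * 0.005,   # ~0.005
        ("bird flu", "B incompetent"): 0.005 * 0.995,  # ~0.005
        ("bird flu", "B competent"):   0.005 * 0.005,  # 0.000025
    }
    likelihoods = {   # P(B gets given result | hypothesis)
        ("malaria", "B incompetent"): 0.1,
        ("malaria", "B competent"):   0.001,
        ("bird flu", "B incompetent"): 0.1,
        ("bird flu", "B competent"):   0.99,
    }
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    for h, w in unnormalised.items():
        print(h, round(w / total, 4))
    # ("malaria", "B incompetent") comes out at roughly 0.995, as stated above.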
B goes through an exactly parallel calculation, favouring “bird flu & A incompetent”.
Both doctors have been unlucky, but neither has been irrational. Those who think Aumann’s theorem requires one of them to have been irrational given the data: please explain what’s impossible in the above scenario.
If each doctor has gotten evidence at the 99.5% level that the other is horrendously incompetent, then each should have no problem using that evidence to convince the other of that incompetence. (Unless one of them has additional evidence to defend their competency, in which case they will agree on how the additional evidence should change the assessment.) The idea being that with sufficient communication they have exactly the same information and thus must reach the same conclusions.
On the other hand, the requirement of the same priors is interesting. Mightn’t this be how they could rationally come to different conclusions?
Aumann’s theorem itself doesn’t say anything about “with sufficient communication”; that’s just one possible way for them to make the relevant stuff “common knowledge”. (Also, remember that the thing about Aumann’s theorem is that the two parties are supposed not to have to share their actual evidence with one another—only their posterior probabilities. And, indeed, only their posterior probabilities for the single event whose probability they are to end up agreeing about.)
The scenario described in the original post here doesn’t say anything about there being “sufficient communication” either.
It seems to me that Aumann’s theorem is one of those results (Gödel’s incompleteness theorem is notoriously another, to a much greater extent) where “everyone knows” a simple one-sentence version of it, which sounds exciting and dramatic and fraught with conclusions directly relevant to daily life, and which also happens to be quite different from the actual theorem.
But maybe some of those generalizations of Aumann’s theorem really do amount to saying that rational people can’t agree to disagree. If someone reading this is familiar with a presentation of some such generalization that actually provides details and a proof, I’d be very interested to know.
(For instance, is Hanson’s paper on “savvy Bayesian wannabes” an example? Brief skimming suggests that it still involves technical assumptions that might amount to drastic unrealism about how much the two parties know about one another’s cognitive faculties, and that its conclusion isn’t all that strong in any case—it basically seems to say that if A and B agree to disagree in the sense Hanson defines then they are also agreeing to disagree about how well they think, which doesn’t seem very startling to me even if it turns out to be true without heavy technical conditions.)
Thank you for the clarification: Aumann’s theorem does not assume that the people have the same information; they just know each other’s posteriors. After reading the original paper, I understand that the consensus comes about iteratively in the following way: they know each other’s conclusions (posteriors). If they have different conclusions, then each must infer that the other has different information, and they modify their posteriors to some extent based on this different, unknown information. They then re-compare their posteriors. If the posteriors are still different, they conclude that the other’s evidence must have been stronger than they estimated, and they recalculate. So without actually sharing the information, they deduce the net effect of the information by mutually comparing posteriors.
In Aumann’s original paper, the statement of the theorem doesn’t involve any assumption that the two parties have performed any sort of iterative procedure. In informal explanations of why the result makes sense, such iterative procedures are usually described. I think this illustrates the point that the innocuous-sounding description of what the two parties are supposed to know (“their posteriors are common knowledge”) conceals more than meets the eye: to get a situation where anything like it is true, you need to assume that they’ve been through some higher-quality information exchange procedure.
The proof looks at an ordering of posteriors p1, p2, etc., that result from successive levels of knowledge of knowledge of the other’s posteriors. However, these are shown to be equal, so in a sense all of the iterations happen simultaneously. -- Actually, I looked at it again and I’m not so sure this is true; it’s just how I understand it.
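To make the iterative picture described above concrete, here is a small Python toy of that back-and-forth. It follows the Geanakoplos–Polemarchakis “we can’t disagree forever” dialogue, which is one standard way of cashing out how the posteriors become common knowledge, not Aumann’s own proof; the state space, event, partitions and true state below are invented purely for illustration.

    from fractions import Fraction

    # Two agents share a uniform common prior over nine states and discuss the
    # probability of the event E. Each starts with only her own partition cell;
    # they alternately announce their posteriors, and each announcement lets
    # both agents rule out states, until the announcements stop changing.
    states = range(1, 10)
    E = {3, 4}
    partition_1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
    partition_2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]
    true_state = 1

    def cell_of(partition, s):
        return next(c for c in partition if s in c)

    def posterior(possible):
        """P(E | possible) under the uniform common prior."""
        return Fraction(len(E & possible), len(possible))

    # poss[i][s] = the states agent i would consider possible if s were the true state.
    poss = {1: {s: frozenset(cell_of(partition_1, s)) for s in states},
            2: {s: frozenset(cell_of(partition_2, s)) for s in states}}

    def announce(i):
        """Agent i publicly states P(E | her information). Everyone can work out
        what she would have said in every state, so both agents intersect their
        possibility sets with the level set of the announcement."""
        a = {s: posterior(poss[i][s]) for s in states}
        for j in (1, 2):
            poss[j] = {s: poss[j][s] & frozenset(t for t in states if a[t] == a[s])
                       for s in states}
        return a[true_state]

    round_no = 0
    while True:
        round_no += 1
        before = (dict(poss[1]), dict(poss[2]))
        q1, q2 = announce(1), announce(2)
        print(f"round {round_no}: agent 1 says {q1}, agent 2 says {q2}")
        if (dict(poss[1]), dict(poss[2])) == before:   # a further round adds no information
            break

    assert q1 == q2   # once the exchange adds nothing new, the announced posteriors agree

With these particular numbers the announcements go 1/3 versus 1/2 in the first round and agree at 1/3 from the second round on.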
To be fair, I did neglect to state specifically that we have common knowledge of our probability estimates.
“Well, it need hardly be said that someone here is failing at rationality.”

Right: by the agreement theorem, someone is failing at rationality. Either it is him or it is me. I must conclude that it is him. (If you would like an argument for this, I can provide it, but I will skip it for now, as I suspect it is uncontroversial.)
Given that I have already concluded that my colleague is irrational, I cannot trust him to make a rational decision regarding the choice of drugs. Thus I just need to make the choice that will save at least 5,000 lives. (Note: I cannot predict which decision he will make, since irrational reasoning can lead to either choice. But if he chooses the malaria drugs, so much the better.)
“After we both choose the 5000 of each drug, I don’t think you can claim to have acted rationally.”
What is the “you” here? If the “you” is plural and refers to both me and my colleague, it is expected that we did not act rationally since we already knew we weren’t both rational.
However, I acted rationally, given the information that my colleague would not.
By the way, what is the interpretation “around these parts” of Aumann’s disagreement theorem, taken together with the fact that apparently rational people have different solutions to these kinds of dilemmas? Is the idea that eventually we’ll reach a consensus?
“Given that I have already concluded that my colleague is irrational, I cannot trust him to make a rational decision regarding the choice of drugs.”

Is it a common belief that someone who has acted irrationally with regard to X is unable to act rationally with regard to Y? I am not challenging, just pinging for more information, because this came as a surprise.
I generally assume that the people who read my comments are capable of detecting obvious errors.
Look at the requirements for those theorems to apply, and then look at the conditions MBlume set out.
Whether or not your assumption is true, your comment added no information. If people are capable of detecting obvious errors, then they would already have done so; if not, then you haven’t helped.
Not only does this style of comment prevent others from learning from you, it also prevents others from actually engaging with your point, so that you might learn from them. Assuming that you have nothing to learn from others is, in general, a poor strategy.
(All of that assumes that you’re not just bluffing and trying to hide the fact that you have no idea what you’re talking about.)
“your comment added no information.”
An obvious, trivial falsehood.
“If people are capable of detecting obvious errors, then they would already have done so”
And there’s another. Possessing the capacity for something isn’t the same as achieving it. People might detect the error if they happen to notice it. People don’t necessarily notice things; in fact, they often don’t notice the obvious at all.
“Assuming that you have nothing to learn from others is, in general, a poor strategy.”
Concluding so accurately, however, is an excellent one.