Here is a concrete example. Two referees independently read the same paper, giving it a mark in the range [0,1], where 0 is a definite reject and 1 is a definite accept. They then meet to decide on a joint verdict.
Alice rates the paper at 0.9. Bob rates the paper at 0.1. Assuming their perfect rationality:
How should they proceed to reach an Aumann-style agreement?
How accurate is the resulting common estimate likely to be?
Assume that they both have the same prior over the merits of the papers they receive for review: their true worths are uniformly distributed over [0,1]. They have read the same paper and have honestly attempted to judge it according to the same criteria. They may have other, differing information available to them, but the Aumann agreement process does not involve sharing such information.
I’ve been trying to analyse this in terms of Aumann’s original paper and Scott Aaronson’s more detailed treatment but I am not getting very far. In Aaronson’s framework, if we require a 90% chance of Alice and Bob agreeing to within 0.2, then this can be achieved (Theorem 5) with at most 1/(0.1*0.2^2) = 250 messages in which one referee tells the other their current estimate of the paper. It is as yet unclear to me what calculations Alice and Bob must perform to update their estimates, or what values they might converge on.
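The message count cited above is just arithmetic on the stated bound, which can be sketched as follows (the function name and the ceiling are mine; the 1/(δε²) formula is the Theorem 5 bound as quoted above):

```python
import math

def message_bound(delta, eps):
    """Upper bound on the number of expectation-exchange messages needed
    for the two estimates to agree to within eps, except with probability
    delta -- the 1/(delta * eps^2) bound cited above. Helper name is mine."""
    return math.ceil(1 / (delta * eps ** 2))

print(message_bound(0.1, 0.2))  # the 250-message figure from the text
```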
In practice, disagreements like this are resolved by sharing not posteriors, but evidence. In this example, Bob might know something that Alice does not, viz. that the authors already published almost the same work in another venue a year ago and that the present paper contains almost nothing new. Or, on the other hand, Bob might simply have missed the point of the paper due to lacking some background knowledge that Alice has.
They may have other, differing information available to them, but the Aumann agreement process does not involve sharing such information.
What they do indirectly share is something like their confidence levels—and how much their confidence is shaken by the confidence of their partner in a different result.
Yes, Aumann agreement is not very realistic—but the point is that the partners can be expected to relatively quickly reach agreement, without very much effort—if they are honest truth-seekers with some energy for educating others—and know the other is the same way.
So, the prevalence of persistent disagreements suggests that the world is not filled with honest truth-seekers. Not very surprising, perhaps.
Yes, Aumann agreement is not very realistic—but the point is that the partners can be expected to relatively quickly reach agreement, without very much effort—if they are honest truth-seekers with some energy for educating others—and know the other is the same way.
250 rounds of “I update my estimate to...” strikes me as rather a lot of effort, but that’s not the important point here. My question is, assume that Alice and Bob are indeed honest truth-seekers with some energy for educating others, and have common knowledge that this is so. What then will the Aumann agreement process actually look like for this example, where the only thing that is directly communicated is each party’s latest expectation of the value of the paper? Will it converge to the true value of the paper in both of the following scenarios:
The true value is 0.1 because of the prior publications to which the present paper adds little new, publications which Bob knew about but Alice didn’t.
The true value is 0.8 because it’s excellent work which the authors need to improve their exposition of to make it accessible to non-experts like Bob.
No, I don’t think it is true that both parties necessarily wind up with more accurate estimates after updating and agreeing, or even an estimate closer to what they would have obtained by sharing all their data.
That greatly diminishes the value of the theorem, and implies that it fails to justify blaming dishonesty and irrationality for the prevalence of persistent disagreements.
I’m not sure. Aumann’s paper seems to only bill itself as leading to agreement—with relatively little discussion of the properties of what is eventually agreed upon. Anyway, I think you may be expecting too much from it; and I don’t think it fails in the way that you say.
Why should ideal Bayesian rationalists alter their estimates to something that is not more likely to be true according to the available evidence? The theorem states that they reach agreement because it is the most likely way to be correct.
The parties do update according to their available evidence. However, neither has access to all the evidence. Also, evidence can be misleading—and subsets of the evidence are more likely to mislead.
Parties can become less accurate after updating, I think. For one example, consider Alice in the refereeing scenario above.
For another example, say A privately sees 5 heads, and A’s identical twin B privately sees 7 tails—and then they Aumann agree on the issue of whether the coin is fair. A will come out with more confidence that the coin is biased. If the coin is actually fair, A will have become more wrong.
If A and B had shared all their evidence—instead of going through an Aumann agreement exchange—A would have realised that the coin was probably fair—thereby becoming less wrong.
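The twins’ numbers can be checked under a minimal model (my assumptions, not stated in the comment: a 50/50 prior on “fair” versus “biased with heads-probability uniform on [0,1]”, A’s five flips all heads, B’s seven all tails):

```python
from math import factorial

def posterior_fair(heads, flips, prior_fair=0.5):
    """P(coin is fair | heads out of flips), where the biased alternative's
    heads-probability is uniform on [0, 1] -- a modelling assumption."""
    tails = flips - heads
    # Likelihood under a fair coin: each sequence has probability 0.5^flips.
    like_fair = 0.5 ** flips
    # Likelihood under the uniform-bias alternative: the Beta integral
    # of p^h (1-p)^t over [0,1], which equals h! t! / (flips+1)!.
    like_biased = factorial(heads) * factorial(tails) / factorial(flips + 1)
    num = prior_fair * like_fair
    return num / (num + (1 - prior_fair) * like_biased)

print(posterior_fair(5, 5))   # ≈ 0.16: A's evidence alone favours "biased"
print(posterior_fair(5, 12))  # ≈ 0.72: the pooled evidence favours "fair"
```

Under these assumptions A alone puts only about 16% on fairness, while the pooled evidence puts about 72% on it, which is the gap the example turns on.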
Sometimes following the best available answer will lead you to an answer that is incorrect, but from your own perspective it is always the way to maximize your chance of being right.
To recap, what I originally said here was:
I don’t think it is true that both parties necessarily wind up with more accurate estimates after updating and agreeing, or even an estimate closer to what they would have obtained by sharing all their data.
The scenario in the grandparent provides an example of an individual’s estimate becoming worse after Aumann agreeing—and also an example of their estimate getting further away from what they would have believed if both parties had shared all their evidence.
I am unable to see where we have any disagreement. If you think we disagree, perhaps this will help you to pinpoint where.
Perhaps I was reading an implication into your comment that you didn’t intend, but I took it that you were saying that Aumann’s Agreement Theorem leads to agreement between the parties, but not necessarily as a result of each party attempting to revise their estimates to what is most likely given the data they have.
That wasn’t intended. Earlier, I cited this:
http://www.marginalrevolution.com/marginalrevolution/2005/10/robert_aumann_n.html
Imagine I think there are 200 balls in the urn, but Robin Hanson thinks there are 300 balls in the urn. Once Robin tells me his estimate, and I tell him mine, we should converge upon a common opinion. In essence his opinion serves as a “sufficient statistic” for all of his evidence.
My comments were intended to suggest that the results of going through an Aumann agreement exchange could be quite different from what you would get if the parties shared all their relevant evidence.
The main similarity is that the parties end up agreeing with each other in both cases.