Unless you think I’m so irredeemably irrational that my opinions anticorrelate with truth, then the very fact that I believe something is Bayesian evidence that that something is true
This sentence is problematic. Beliefs are probabilistic, and the import of some rationalist’s estimate varies according to one’s own knowledge. If I am fairly certain that a rationalist has been getting flawed evidence (evidence selected to support a proposition) while thinking the evidence is probably fine, then that rationalist’s weak belief in the proposition is, for me, evidence against it.
Consider: if I’m an honest seeker of truth, and you’re an honest seeker of truth, and we believe each other to be honest, then we can update on each other’s opinions and quickly reach agreement.
Iterative updating is a method rationalists can use when they can’t share information (as humans often can’t), but it is a process whose result is agreement, not Aumann agreement.
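To illustrate what that back-and-forth looks like, here is a minimal sketch in the style of Geanakoplos and Polemarchakis’s “We Can’t Disagree Forever” process; the states, partitions, and numbers are toy choices of mine, not anything from the thread. Two agents with a common prior take turns announcing their current posterior for an event, and each announcement publicly rules out the states in which the announcer would have said something else:

```python
from fractions import Fraction

# Toy model (my own numbers): four equally likely states.
states = {1, 2, 3, 4}
prior = {w: Fraction(1, 4) for w in states}
A = {1, 4}                 # the event being estimated
P1 = [{1, 2}, {3, 4}]      # what agent 1 can distinguish
P2 = [{1, 2, 3}, {4}]      # what agent 2 can distinguish
true_w = 1                 # the actual state

def cell(partition, w):
    """The block of the partition containing state w."""
    return next(b for b in partition if w in b)

def posterior(event, info):
    """P(event | info) under the common prior."""
    return sum(prior[w] for w in event & info) / sum(prior[w] for w in info)

# S = the states still publicly consistent with every announcement so far.
S = set(states)
while True:
    # Agent 1 announces a posterior; listeners discard every state in
    # which agent 1 would have announced a different number.
    q1 = posterior(A, cell(P1, true_w) & S)
    S = {w for w in S if posterior(A, cell(P1, w) & S) == q1}
    # Agent 2 does the same.
    q2 = posterior(A, cell(P2, true_w) & S)
    S = {w for w in S if posterior(A, cell(P2, w) & S) == q2}
    print(q1, q2)          # prints 1/2 1/3, then 1/2 1/2
    if q1 == q2:           # announced posteriors now coincide
        break
```

Here agent 2 moves from 1/3 to 1/2 without ever seeing agent 1’s underlying evidence, and the exchange halts once the announced posteriors coincide: agreement as the end state of a process, which is the distinction being drawn above.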
Aumann agreement is a result of two rationalists sharing all information and ideally updating. It’s a thing to know so that one can assess a situation after two reasoners have reached their conclusions based on identical information: if those conclusions are not identical, then one or both are not perfect rationalists. But one doesn’t get much benefit from knowing the theorem, and wouldn’t even if people actually could share all their information. If one updates properly on evidence, one doesn’t need to know about Aumann agreement to reach proper conclusions, because the theorem has nothing to do with the normal process of reasoning about most things; likewise, knowing the theorem without knowing how to update would be of little help.
As Vladimir_Nesov said:
The crucial point is that it’s not a procedure, it’s a property, an indicator and not a method.
It’s especially unhelpful for humans as we can’t share all our information.
As Wei_Dai said:
Having explained all of that, it seems to me that this theorem is less relevant to a practical rationalist than I thought before I really understood it. After looking at the math, it’s apparent that “common knowledge” is a much stricter requirement than it sounds. The most obvious way to achieve it is for the two agents to simply tell each other I(w) and J(w), after which they share a new, common information partition. But in that case, agreement itself is obvious and there is no need to learn or understand Aumann’s theorem.
So Wei_Dai’s use is fine, as in his post he describes its limited usefulness.
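To make the I(w)/J(w) point concrete, here is a minimal sketch (toy numbers of mine) of the “most obvious way” Wei Dai describes: each agent tells the other its partition cell, both then condition on the intersection, and agreement is immediate, no theorem required:

```python
from fractions import Fraction

# Toy model (my own numbers): six equally likely states.
states = {1, 2, 3, 4, 5, 6}
prior = {w: Fraction(1, 6) for w in states}
A = {1, 4, 5}                 # the event being estimated
I = [{1, 2, 3}, {4, 5, 6}]    # first agent's information partition
J = [{1, 2, 4, 5}, {3, 6}]    # second agent's information partition
w = 1                         # the actual state

def cell(partition, state):
    """The block of the partition containing the given state."""
    return next(b for b in partition if state in b)

def posterior(event, info):
    """P(event | info) under the common prior."""
    return sum(prior[x] for x in event & info) / sum(prior[x] for x in info)

# Privately, the agents condition on different cells and disagree.
print(posterior(A, cell(I, w)))     # P(A | {1,2,3})   = 1/3
print(posterior(A, cell(J, w)))     # P(A | {1,2,4,5}) = 3/4

# After telling each other I(w) and J(w), both condition on the same
# intersection -- a cell of the shared, refined partition -- so their
# posteriors are trivially equal.
shared = cell(I, w) & cell(J, w)    # {1, 2}
print(posterior(A, shared))         # P(A | {1,2}) = 1/2
```

This is the sense in which, as the quoted passage says, agreement becomes obvious once the information is actually shared; the interesting content of the theorem is that a far weaker condition also suffices.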
at no point in a conversation can Bayesians have common knowledge that they will disagree.
As I don’t understand this at all, perhaps this sentence is fine and I badly misunderstand the concepts here.
Do you also object to the use of the term “Aumann agreement” by Wei Dai and on the LW wiki?
Wei Dai discusses the actual theorem, and in the last section expresses a sentiment similar to mine. I disapprove of the first paragraph of the “Aumann agreement” wiki page (but see also the separate “Aumann’s agreement theorem” wiki page).
FWIW, I wrote up a brief explanation and proof of Aumann’s agreement theorem.
The wiki entry does not look good to me.
Aumann agreement is a result of two rationalists sharing all information and ideally updating.

No, this is not the case. All they need is a common prior and common knowledge of their probabilities. The whole reason Aumann agreement is clever is because you’re not sharing the evidence that convinced you.
See, for example, the original paper.
Updated. (My brain, that is; I didn’t edit the comment.)
“Common knowledge” is a far stronger condition than it sounds.
So “at no point in a conversation can Bayesians have common knowledge that they will disagree,” means “‘Common knowledge’ is a far stronger condition than it sounds,” and nothing more and nothing less?
See, “knowledge” is of something that is true, or at least of actually interpreted input. So if someone can’t have knowledge of it, that implies it’s true and one merely can’t know it. If there can’t be common knowledge, that implies that at least one party can’t know the true thing. But the thing in question, “that they will disagree”, is false, right?
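For what it’s worth, the intuition that knowledge must be of truths is the truth (factivity) axiom of epistemic logic, and common knowledge inherits it; in the usual notation:

```latex
% Truth (factivity) axiom T of epistemic logic, and its
% common-knowledge analogue: what is known (or commonly known)
% must be true.
\[
K_i \varphi \rightarrow \varphi
\qquad\text{and}\qquad
C \varphi \rightarrow \varphi .
\]
```

On one reading, then, saying there can’t be common knowledge of disagreement denies that the epistemic state can obtain at all, rather than asserting a true-but-unknowable disagreement.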
I do not understand what the words in the sentence mean. It seems to read:
“At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and that each knows that they will disagree on posteriors, etc.”
But the theorem is that they will not disagree on posteriors...
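For reference, the result under discussion, from Aumann’s 1976 paper “Agreeing to Disagree”; the notation here is mine but standard:

```latex
% Setting: a common prior P on a state space \Omega, information
% partitions \mathcal{P}_1 and \mathcal{P}_2, and an event A. At the
% true state \omega, agent i's posterior is
%   q_i = P(A \mid \mathcal{P}_i(\omega)).
% "Common knowledge at \omega" means: constant on the member of the
% meet \mathcal{P}_1 \wedge \mathcal{P}_2 that contains \omega.
\[
q_1 \text{ and } q_2 \text{ common knowledge at } \omega
\;\Longrightarrow\;
q_1 = q_2 .
\]
```

So the theorem does not say ideal reasoners never disagree; it says their posteriors cannot differ while those posteriors are common knowledge.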
So “at no point in a conversation can Bayesians have common knowledge that they will disagree,” means “‘Common knowledge’ is a far stronger condition than it sounds,” and nothing more and nothing less?

No, for a couple of reasons.
First, I misunderstood the context of that quote. I thought that it was from Wei Dai’s post (because he was the last-named source that you’d quoted). Under this misapprehension, I took him to be pointing out that common knowledge of anything is a fantastically strong condition, and so, in particular, common knowledge of disagreement is practically impossible. It’s theoretically possible for two Bayesians to have common knowledge of disagreement (though, by the theorem, they must have had different priors). But it can’t happen in the real world, such as in Luke’s conversations with Anna.
But I now see that this whole line of thought was based on a silly misunderstanding on my part.
In the context of the LW wiki entry, I think that the quote is just supposed to be a restatement of Aumann’s result. In that context, Bayesian reasoners are assumed to have the same prior (though this could be made clearer). Then I unpack the quote just as you do:

“At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and that each knows that they will disagree on posteriors, etc.”
As you point out, by Aumann’s theorem, they won’t disagree on posteriors, so they will never have common knowledge of disagreement, just as the quote says. Conversely, if they have common knowledge of posteriors, but, per the quote, they can’t have common knowledge of disagreement, then those posteriors must agree, which is Aumann’s theorem. In this sense, the quote is equivalent to Aumann’s result.
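A formal rendering of that unpacking, writing CK_ω(E) for “E is common knowledge at ω”; the two readings are equivalent because common knowledge is factive and monotone:

```latex
% For all candidate posterior values a and b:
\[
\underbrace{\bigl[\,\mathrm{CK}_\omega(q_1 = a \,\wedge\, q_2 = b)
  \Rightarrow a = b\,\bigr]}_{\text{Aumann's theorem}}
\;\Longleftrightarrow\;
\underbrace{\neg\,\mathrm{CK}_\omega\bigl(q_1 = a \,\wedge\, q_2 = b
  \,\wedge\, a \neq b\bigr)}_{\text{no common knowledge of disagreement}}
\]
```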
Apparently the author doesn’t use the word “knowledge” in such a way that to say “A can’t have knowledge of X” is to imply that X is true. (Nor do I, FWIW.)