The second pleasure, related to the first, is the extremely common result of reaching Aumann agreement after initially disagreeing.
It’s never “Aumann agreement”; it’s just agreement, or more specifically agreement on actual belief (rather than on ostensible position) reached by forming a common understanding.

Do you also object to the use of the term “Aumann agreement” by Wei Dai and on the LW wiki?

Wei Dai discusses the actual theorem, and in the last section expresses a sentiment similar to mine. I disapprove of the first paragraph of the “Aumann agreement” wiki page (but see also the separate “Aumann’s agreement theorem” wiki page).

FWIW, I wrote up a brief explanation and proof of Aumann’s agreement theorem.

The wiki entry does not look good to me.
Unless you think I’m so irredeemably irrational that my opinions anticorrelate with truth, then the very fact that I believe something is Bayesian evidence that that something is true.
This sentence is problematic. Beliefs are probabilistic, and how much another rationalist’s estimate tells you depends on your own knowledge. If I am fairly certain that a rationalist has been getting flawed evidence (evidence selected to support a proposition) but thinks the evidence is probably fine, then that rationalist’s weak belief that the proposition is true is, for me, evidence against the proposition.
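To make this concrete, here is a toy calculation in odds form; the numbers are invented purely for illustration. Suppose rationalist R weighed the filtered evidence D as a 16:1 likelihood ratio in favor of proposition X, and nonetheless reports only weak belief, posterior odds of 1.5:1.

```latex
% Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
\frac{P(X \mid D)}{P(\lnot X \mid D)}
  = \frac{P(X)}{P(\lnot X)} \cdot \frac{P(D \mid X)}{P(D \mid \lnot X)}
\qquad\Longrightarrow\qquad
1.5 = \text{prior odds} \times 16
\;\Longrightarrow\;
\text{prior odds} \approx 0.09 .
```

Knowing the evidence was filtered, I assign D a likelihood ratio near 1, so what R’s weak belief actually transmits to me is R’s prior odds of roughly 0.09, i.e. information against X.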
Consider: if I’m an honest seeker of truth, and you’re an honest seeker of truth, and we believe each other to be honest, then we can update on each other’s opinions and quickly reach agreement.
Iterative updating is a method rationalists can use when they can’t share information (as humans often can’t do well), but the result of that process is agreement, not Aumann agreement.

Aumann agreement is a result of two rationalists sharing all information and ideally updating. It’s worth knowing so that one can assess a situation after two reasoners have reached their conclusions from identical information: if those conclusions are not identical, then one or both are not perfect rationalists. But one doesn’t get much benefit from knowing the theorem, and wouldn’t even if people actually could share all their information. If one updates properly on evidence, one doesn’t need to know about Aumann agreement to reach proper conclusions, because it has nothing to do with the normal process of reasoning about most things; and likewise, if one knew the theorem but not how to update, it would be of little help.
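As an illustration of how iterated announcements can produce agreement without sharing the underlying evidence, here is a minimal sketch in the style of Geanakoplos and Polemarchakis’s “We Can’t Disagree Forever” dialogue, assuming a uniform common prior; the state space, partitions, and event below are invented for the example, and this is not anyone’s procedure from this thread.

```python
from fractions import Fraction

def posterior(event, cell):
    """P(event | cell) under a uniform common prior, in exact arithmetic."""
    return Fraction(len(event & cell), len(cell))

def refine(listener, speaker, event):
    """Hearing the speaker's posterior, the listener keeps, in each of her
    cells, only the states at which the speaker would have announced that
    same number."""
    return {w: frozenset(v for v in listener[w]
                         if posterior(event, speaker[v]) == posterior(event, speaker[w]))
            for w in listener}

def as_map(blocks):
    """Store a partition (list of blocks) as a state -> cell lookup."""
    return {w: frozenset(b) for b in blocks for w in b}

# Nine equally likely states; the event of interest is {3, 4}.
event = frozenset({3, 4})
parts = {"A": as_map([{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]),
         "B": as_map([{1, 2, 3, 4}, {5, 6, 7, 8}, {9}])}
true_state, turn = 2, "A"

# Alternate announcements until the posteriors coincide (the theorem
# guarantees this happens after finitely many rounds).
while posterior(event, parts["A"][true_state]) != posterior(event, parts["B"][true_state]):
    print(turn, "announces", posterior(event, parts[turn][true_state]))
    other = "B" if turn == "A" else "A"
    parts[other] = refine(parts[other], parts[turn], event)
    turn = other

print("agreed posterior:", posterior(event, parts["A"][true_state]))
# Output: A announces 1/3, B announces 1/2, A announces 1/3,
# then both hold 1/3 -- agreement without ever exchanging raw evidence.
```

Each announcement lets the listener discard the states at which the speaker would have said something else, so the posteriors are driven together in finitely many rounds.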
As Vladimir_Nesov said:
The crucial point is that it’s not a procedure, it’s a property, an indicator and not a method.
It’s especially unhelpful for humans as we can’t share all our information.
As Wei_Dai said:
Having explained all of that, it seems to me that this theorem is less relevant to a practical rationalist than I thought before I really understood it. After looking at the math, it’s apparent that “common knowledge” is a much stricter requirement than it sounds. The most obvious way to achieve it is for the two agents to simply tell each other I(w) and J(w), after which they share a new, common information partition. But in that case, agreement itself is obvious and there is no need to learn or understand Aumann’s theorem.
So Wei_Dai’s use is fine, as in his post he describes its limited usefulness.
at no point in a conversation can Bayesians have common knowledge that they will disagree.
Since I don’t understand this at all, perhaps the sentence is fine and I badly misunderstand the concepts here.
Aumann agreement is a result of two rationalists sharing all information and ideally updating.
No, this is not the case. All they need is a common prior and common knowledge of their posterior probabilities. The whole reason Aumann agreement is clever is that you’re not sharing the evidence that convinced you.

See, for example, the original paper.
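For reference, here is a paraphrase of Aumann’s 1976 result in the partition notation of Wei Dai’s quote above; this is the standard statement, not a new claim.

```latex
% Setup: finite state space \Omega with common prior P; information
% partitions \Pi_1, \Pi_2; event E; true state \omega; posteriors
% q_i = P(E \mid \Pi_i(\omega)).
\mathrm{CK}_\omega\bigl(q_1 = p_1 \,\wedge\, q_2 = p_2\bigr)
\;\Longrightarrow\; p_1 = p_2 .
```

Nothing in the hypothesis requires exchanging the evidence behind the posteriors; only the posterior values themselves must be common knowledge.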
Updated. (My brain, I didn’t edit the comment.)

“Common knowledge” is a far stronger condition than it sounds.

So “at no point in a conversation can Bayesians have common knowledge that they will disagree,” means “‘Common knowledge’ is a far stronger condition than it sounds,” and nothing more and nothing less?
See, “knowledge” is of something that is true, or at least actually interpreted input. So if someone can’t have knowledge of it, that implies it’s true and one merely can’t know it. If there can’t be common knowledge, that implies that at least one party can’t know the true thing. But the thing in question, “that they will disagree”, is false, right?
I do not understand what the words in the sentence mean. It seems to read:
“At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and that each knows that they will disagree on posteriors, etc.”
But the theorem is that they will not disagree on posteriors...
So “at no point in a conversation can Bayesians have common knowledge that they will disagree,” means “‘Common knowledge’ is a far stronger condition than it sounds,” and nothing more and nothing less?
No, for a couple of reasons.
First, I misunderstood the context of that quote. I thought that it was from Wei Dai’s post (because he was the last-named source that you’d quoted). Under this misapprehension, I took him to be pointing out that common knowledge of anything is a fantastically strong condition, and so, in particular, common knowledge of disagreement is practically impossible. It’s theoretically possible for two Bayesians to have common knowledge of disagreement (though, by the theorem, they must have had different priors). But it can’t happen in the real world, such as in Luke’s conversations with Anna.
But I now see that this whole line of thought was based on a silly misunderstanding on my part.
In the context of the LW wiki entry, I think that the quote is just supposed to be a restatement of Aumann’s result. In that context, Bayesian reasoners are assumed to have the same prior (though this could be made clearer). Then I unpack the quote just as you do:
“At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and that each knows that they will disagree on posteriors, etc.”
As you point out, by Aumann’s theorem, they won’t disagree on posteriors, so they will never have common knowledge of disagreement, just as the quote says. Conversely, if they have common knowledge of posteriors, but, per the quote, they can’t have common knowledge of disagreement, then those posteriors must agree, which is Aumann’s theorem. In this sense, the quote is equivalent to Aumann’s result.
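Schematically, and at the informal level of this comment, the two phrasings are contrapositives of each other (writing CK(·) for “is common knowledge,” as above):

```latex
\underbrace{\mathrm{CK}(q_1 = p_1 \wedge q_2 = p_2) \Rightarrow p_1 = p_2}_{\text{Aumann's statement}}
\quad\Longleftrightarrow\quad
\underbrace{\text{at no state is } \mathrm{CK}(q_1 \neq q_2)}_{\text{the wiki's phrasing}}
```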
Apparently the author doesn’t use the word “knowledge” in such a way that to say “A can’t have knowledge of X” is to imply that X is true. (Nor do I, FWIW.)
Yeah; “Aumann agreement” is (to my knowledge) my own invented term, by which I mean “Agreement reached by, among other things, taking into account as Bayesian evidence the other’s testimony.”

Wei_Dai used the term back in 2009.
taking into account as Bayesian evidence the other’s testimony
This usually seems like an unimportant component (because it is unreliable and difficult to use); most of the work is done by convincing argument, which helps with inferential difficulties rather than with lack of information.

Agreed.
Then it seems like your definition is meaningless. Does your invented term mean something like “sharing information and collaboratively trying to reach the best answer?”
As above, I use “Aumann agreement” to mean “Agreement reached by, among other things, taking into account as Bayesian evidence the other’s testimony.” Vladimir is right that most of the work is done by convincing argument in most cases. However, there are many cases (e.g., “which sentence sounds better in this paragraph?”) where taking into account the evidence of the other’s opinion actually does change which alternative one picks. Also, Anna and I (for example) have quite a lot of respect for each other’s opinion on many subjects, and so we update more heavily on each other’s testimony than most people would.
I don’t think Aumann agreement is a good term for this; there’s a huge difference between that mathematically precise procedure and the fuzzy process you’re describing.

The crucial point is that it’s not a procedure, it’s a property, an indicator and not a method.

I’m sorry, I don’t see what you’re getting at, I’m afraid!

Aumann agreement is already there; it’s a fact of a certain situation, not a procedure for getting to an agreement, unlike the practice of forming a common understanding that Luke talked about. My comment was basically a pun on your use of the word “procedure”.
Agreed. This decision-making method is so common we normally don’t name it. E.g. “I was going to dye my hair, but my friend told me about the terrible experience she had, and now I think I’ll go to a salon instead of trying it at home.” I don’t see a need to make up jargon for “considering the advice of trusted people.”
It seems like the purpose of this post was mostly to share your enjoyment of how wise your coworkers are and how well you cooperate with each other. Which is fine, but let’s not technify it unnecessarily.