Aumann Agreement != Free Agreement
Oftentimes, I hear people talk about Aumann’s Agreement Theorem as if it means that two rational, honest agents cannot knowingly disagree with each other on a subject without immediately coming to agree. This overstates the power of Aumann agreement. Put aside the unrealistic assumption of Bayesian updating, which is computationally intractable in the real world, and the non-trivial (not strictly required, but valuable) presumption that the agents’ rationality and honesty is common knowledge: even then, the reasoning Aumann provides is not instantaneous.
To illustrate Aumann’s reasoning, let’s say Alice and Bob are rational, honest agents capable of Bayesian updating, with common knowledge of each other’s rationality and honesty.
Alice says to Bob: “Hey, did you know pineapple pizza was invented in Canada?”
Bob: “What? No. Pineapple pizza was invented in Hawaii.”
Alice: “I’m 90% confident that it was invented in Canada.”
Bob is himself 90% confident of the opposite, that it has its origins in Hawaii (it’s called Hawaiian Pizza, after all!), but since he knows that Alice is rational and honest, he must act on this information, and thereby becomes less confident in what he previously believed—but not by much.
Bob: “I’m 90% confident of the opposite. But now that I hear that you’re 90% confident yourself, I will update to 87% confidence that it’s from Hawaii.”
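Where does a number like 87% come from? A hedged sketch: the exact figure depends on Bob’s model of how likely Alice would be to announce 90% confidence in each world, and the likelihoods below are invented purely for illustration, but they show the shape of the update.

```python
# Bob's prior: 90% Hawaii, 10% Canada.
p_hawaii = 0.9

# Hypothetical likelihoods of Alice announcing "90% confident it's Canada":
# she is more likely to say this if Canada is true, but might still say it
# if Hawaii is true (her trusted source could be wrong).
p_announce_given_canada = 0.8
p_announce_given_hawaii = 0.6

# Bayes' rule: P(Hawaii | Alice's announcement)
posterior_hawaii = (p_hawaii * p_announce_given_hawaii) / (
    p_hawaii * p_announce_given_hawaii
    + (1 - p_hawaii) * p_announce_given_canada
)
print(round(posterior_hawaii, 2))  # 0.87
```

A weak likelihood ratio (0.6 vs. 0.8) is exactly why Bob barely moves: Alice’s confidence is only modest evidence against Hawaii under this model.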
Alice notices that Bob hasn’t updated very far based on her disagreement, which is itself some evidence that she may be wrong. But she read from a source she trusts that pineapple pizza was first concocted in Canada, so she doesn’t budge much:
Alice: “Bob, even after seeing how little you updated, I’m still 89% sure that pineapple pizza has its origins in Canada.”
Bob is taken aback that, even after he updated so little, Alice herself has barely budged. He must now presume that Alice has some information he doesn’t have, so he updates substantially, but not all the way to where she is:
Bob: “Alright, after seeing that you’re still so confident, I’m now only 50% confident that pineapple pizza is from Hawaii.”
Alice and Bob go back and forth in this manner for quite a while, sharing their new beliefs and pondering the implications of their partner’s previous updates, or lack thereof. Eventually they come to agreement: both settle on an 85% chance that pineapple pizza was developed in Canada. It would have been faster to just state outright why they believed what they did (look, Alice and Bob enjoy the Aumann Game! Don’t judge them.), but simply by ping-ponging these confidence updates back and forth, they arrived at the same posterior beliefs they would have reached if they had pooled all the information each of them individually had.
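This back-and-forth can be simulated directly. Below is a small Python sketch in the style of Geanakoplos and Polemarchakis’s announcement protocol (my framing, not from the post): a toy world with a common prior, where each agent’s private knowledge is a partition of the states, and each announced posterior publicly rules out states until the two posteriors coincide.

```python
from fractions import Fraction

def posterior(event, info, prior):
    """P(event | info) under the common prior; info is a set of states."""
    return sum(prior[w] for w in info & event) / sum(prior[w] for w in info)

def cell(partition, w):
    """The cell of the partition that contains state w."""
    return next(c for c in partition if w in c)

def aumann_dialogue(prior, part_a, part_b, event, true_state):
    """Alice and Bob take turns announcing their posterior for `event`.
    Each announcement publicly rules out every state in which the speaker
    would have announced something else, shrinking the commonly known set
    of possible states until the two posteriors coincide."""
    public = set(prior)  # states nobody has publicly ruled out yet
    history = []
    while True:
        announced = []
        for part in (part_a, part_b):
            q = posterior(event, cell(part, true_state) & public, prior)
            announced.append(q)
            # Everyone learns: "the speaker's posterior, given their own
            # cell and the public information, equals q."
            still_possible = {
                w for w in public
                if posterior(event, cell(part, w) & public, prior) == q
            }
            public = still_possible
        history.append(tuple(announced))
        if announced[0] == announced[1]:
            return history

# Toy world: four equally likely states; the event of interest is {1, 4}.
prior = {w: Fraction(1, 4) for w in (1, 2, 3, 4)}
part_a = [{1, 2}, {3, 4}]      # Alice's private information partition
part_b = [{1, 2, 3}, {4}]      # Bob's private information partition
rounds = aumann_dialogue(prior, part_a, part_b, event={1, 4}, true_state=1)
for qa, qb in rounds:
    print(f"Alice: {qa}  Bob: {qb}")
# Alice: 1/2  Bob: 1/3   <- still disagreeing after round one
# Alice: 1/2  Bob: 1/2   <- agreement in round two
```

Note that the agents only ever exchange posteriors, never their underlying evidence, yet each announcement still carries information, which is the mechanism the dialogue above is acting out.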
What I want to highlight with this post is this: even as perfect Bayesian agents, Alice and Bob did not come to the correct beliefs instantly upon learning that they disagreed; they had to spend time and effort going back and forth before they finally reached Aumann agreement. Aumann agreement does not imply free agreement.
https://arxiv.org/abs/cs/0406061 is a result showing that Aumann agreement is computationally efficient under some assumptions, which might be of interest.
I don’t really buy that paper. IIRC, it says that you only need to exchange a polynomial number of messages, but that each message takes exponential time to produce, which doesn’t sound very efficient.
From the abstract: the time used by the procedure to achieve agreement within ε is on the order of exp(1/ε⁶)… In other words, yeah, the procedure is not cheap.