More detailed comment than mine, so strong upvote. However, there’s one important error in the comment:
Of course, humans are not perfectly rational so this rarely happens
Actually it constantly happens. For instance, yesterday I had a call with my dad in which I told him about my vacation in Norway, where the Bergen train had been cancelled due to the floods. He believed me, which is an immediate example of Aumann’s agreement theorem applying.
Furthermore, there were a bunch of things I had to do to handle the cancellations, which also relied on Aumannian agreement. For instance, I didn’t know where I could get news about the floods, which put me in disagreement with Google and Twitter, which had a bunch of concrete suggestions, so I adopted Google’s/Twitter’s view and then investigated further to update more. I also didn’t know where I could get alternate transportation, but again Google had some flight suggestions that I Aumann-agreed to and then investigated further.
As another example, in Norway I was at a museum about an explorer who sailed the Atlantic on a bamboo raft. At first I had disagreements with the museum: e.g. I didn’t know that one of the people on the raft fell in the water and had to be rescued, but the museum told me that he did, and so I Aumann-agreed with that.
I think Aumann-agreement is the default thing that happens when communicating; it’s just that it usually happens so quickly that we don’t even register it as “disagreement”. Persistent public disagreements require that the preconditions for Aumann’s theorem fail, and so our idea of “disagreement” ends up connoting precisely the disagreements where Aumann’s theorem fails.
Good point! Notably, some of your examples are ‘one-way’: one party updated while the other did not. In the case of Google/Twitter and the museum, you updated but they didn’t, so this sounds like standard Bayesian updating, not specifically Aumann-like (though maybe this distinction doesn’t matter, since the latter is a special case of the former).
When I wrote the answer, I guess I was thinking about Aumann updating where both parties end up changing their probabilities (i.e. Alice starts with a high probability for some proposition P and Bob starts with a low probability for P, and, after discussing their disagreement, they converge to a middling probability). This didn’t seem to me to be as common among humans.
In the example with your dad, it also seems one-way: he updated and you didn’t. However, maybe the fact that he didn’t know there was a flood would have caused you to update slightly, but that update would be so small as to be negligible. So I guess you are right, and that would count as an Aumann agreement!
Your last paragraph is really good. I will ponder it...
When I wrote the answer, I guess I was thinking about Aumann updating where both parties end up changing their probabilities (i.e. Alice starts with a high probability for some proposition P and Bob starts with a low probability for P, and, after discussing their disagreement, they converge to a middling probability). This didn’t seem to me to be as common among humans.
I think this is the wrong picture to have in mind for Aumannian updating. It’s about pooling evidence, and sometimes you can end up with more extreme views than you started with. While the exact way you update can vary depending on the prior and the evidence, one simple example I like is this:
You both start with your log-odds being some vector x, according to some shared prior. You then observe some evidence y, updating your log-odds to x+y, while they observe some independent evidence z, updating their log-odds to x+z. If you exchange all your information, then this updates your shared log-odds to x+y+z, which is most likely going to be an even more radical departure from x than either x+y or x+z alone.
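To make that concrete, here is a minimal numerical sketch of the point (my own illustration, with made-up likelihood ratios), for a single binary proposition P:

```python
import math

def to_log_odds(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def to_prob(log_odds):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-log_odds))

x = to_log_odds(0.5)   # shared prior: 50% on P
y = math.log(3)        # your evidence: 3:1 likelihood ratio in favour of P
z = math.log(4)        # their independent evidence: 4:1 in favour of P

print(to_prob(x + y))      # your posterior alone:   0.75
print(to_prob(x + z))      # their posterior alone:  0.80
print(to_prob(x + y + z))  # pooled posterior: ~0.92, more extreme than either
```

Exchanging evidence here drives both parties further from the prior, not toward some average of their starting positions.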
While this general argument is overly idealistic because it assumes independent evidence, I think the point that Aumannian agreement doesn’t mean moderation is important.
That said, there is one place where Aumannian agreement locally leads to moderation: if, during the conversation, you both learn that the sources you relied on were unreliable, then presumably you would mostly revert to the prior. However, in the context of politics (which is probably the main place where people want to think of this), the sources tend to be political coalitions, so updating that they were unreliable means updating that one cannot trust any political coalition, which in a sense is already common knowledge, but which, taken seriously, is quite radical (because then you need to start doubting all the things you thought you knew).
Good point! Notably, some of your examples are ‘one-way’: one party updated while the other did not. In the case of Google/Twitter and the museum, you updated but they didn’t, so this sounds like standard Bayesian updating, not specifically Aumann-like (though maybe this distinction doesn’t matter, since the latter is a special case of the former).
There were a couple of multi-way cases too. For instance, at one point we told someone that we intended to take the Bergen train, expecting that this would resolve the disagreement arising from them not knowing we would take the Bergen train. But then they continued disagreeing and told us that the Bergen train was cancelled, which instead updated us to think we wouldn’t take the Bergen train.
But I think disagreements would generally be exponentially short? Because each time you share a piece of information that you expect to change their mind, the probability that they still haven’t changed their mind drops, so it falls roughly exponentially with the number of pieces of information shared.
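As a toy sketch of that claim (the per-message probability here is made up, and the independence assumption is doing a lot of work):

```python
# Suppose each piece of information you share has a 60% chance of
# resolving the disagreement, independently of the previous pieces.
p_resolve = 0.6

for k in range(1, 6):
    p_still_disagreeing = (1 - p_resolve) ** k
    print(f"after {k} pieces shared: P(still disagreeing) = {p_still_disagreeing:.3f}")
# 0.400, 0.160, 0.064, 0.026, 0.010 -- the disagreement dies off exponentially.
```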