Good point! Notably, some of your examples are ‘one-way’: one party updated while the other did not. In the case of Google/Twitter and the museum, you updated but they didn’t, so this sounds like standard Bayesian updating, not specifically Aumann-like (though maybe this distinction doesn’t matter, as the latter is a special case of the former).
When I wrote the answer, I guess I was thinking about Aumann updating where both parties end up changing their probabilities (i.e. Alice starts with a high probability for some proposition P, Bob starts with a low probability for P, and, after discussing their disagreement, they converge to a middling probability). This didn’t seem to me to be as common among humans.
In the example with your Dad, it also seems one-way: he updated and you didn’t. However, maybe the fact that he didn’t know there was a flood would have caused you to update slightly, though so slightly as to be negligible. So I guess you are right, and that would count as an Aumann agreement!
Your last paragraph is really good. I will ponder it...
> When I wrote the answer, I guess I was thinking about Aumann updating where both parties end up changing their probabilities (i.e. Alice starts with a high probability for some proposition P, Bob starts with a low probability for P, and, after discussing their disagreement, they converge to a middling probability). This didn’t seem to me to be as common among humans.
I think this is a wrong picture to have in mind for Aumannian updating. It’s about pooling evidence, and sometimes you can end up with more extreme views than you started with. While the exact way you update can vary depending on the prior and the evidence, one simple example I like is this:
You both start with your log-odds being some vector x, determined by a shared prior. You then observe some evidence y, updating your log-odds to x+y, while they observe some independent evidence z, updating their log-odds to x+z. If you exchange all your information, your shared log-odds become x+y+z, which is most likely an even more radical departure from x than either x+y or x+z alone.
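To make the arithmetic concrete, here is a minimal sketch in Python (with made-up numbers, assuming a single binary proposition and fully independent evidence) showing how pooling the two updates lands further from the shared prior than either update alone:

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def prob(lo):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-lo))

x = log_odds(0.5)  # shared prior: 50% on P
y = 1.0            # your evidence, as a log-odds shift (hypothetical number)
z = 1.5            # their independent evidence (hypothetical number)

print(f"you alone:  {prob(x + y):.2f}")      # ~0.73
print(f"them alone: {prob(x + z):.2f}")      # ~0.82
print(f"pooled:     {prob(x + y + z):.2f}")  # ~0.92, more extreme than either
```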
While this general argument is overly idealistic because it assumes independent evidence, I think the point that Aumannian agreement doesn’t mean moderation is important.
That said, there is one place where Aumannian agreement does locally lead to moderation: if, during the conversation, you both learn that the sources you relied on were unreliable, then presumably you would mostly revert to the prior. However, in the context of politics (probably the main place where people want to apply this), the sources tend to be political coalitions, so updating that they were unreliable means updating that one cannot trust any political coalition. In a sense that is already common knowledge, but taken seriously it is quite radical, because then you need to start doubting all the things you thought you knew.
> Good point! Notably, some of your examples are ‘one-way’: one party updated while the other did not. In the case of Google/Twitter and the museum, you updated but they didn’t, so this sounds like standard Bayesian updating, not specifically Aumann-like (though maybe this distinction doesn’t matter, as the latter is a special case of the former).
There were a couple of multi-way cases too. For instance, one time we told someone that we intended to take the Bergen train, expecting that this would resolve the disagreement, since it stemmed from their not knowing we would take the Bergen train. But then they continued disagreeing, and told us that the Bergen train was cancelled, which instead updated us to think we wouldn’t take it.
But I think disagreements would generally be exponentially short? Because if each piece of information you share is one you expect to change their mind, then the probability that they still haven’t changed their mind drops exponentially with the number of pieces of information shared.
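As a toy illustration of that decay (a sketch with made-up numbers: assume each shared piece of information independently changes the other person’s mind with probability q), the chance the disagreement survives k exchanges is (1 − q)^k:

```python
q = 0.5  # assumed per-exchange chance that a shared fact changes their mind
for k in range(1, 6):
    # probability the disagreement is still unresolved after k exchanges
    print(k, (1 - q) ** k)
```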