Knowing each other’s probability for a statement requires exchanging information about which statement the probability is assigned to. In basically all of my examples, this was the information exchanged.
In most examples, there’s no common knowledge. In most examples, information is only transmitted one way. This does not allow for Aumann agreement. One side makes one update, then stops. If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they’ve seen nearly strictly better evidence about it than I have. I think this explains most of your examples, without referencing Aumann.
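The one-sided mechanism described here can be sketched in a toy Bayesian model: when B's information strictly refines A's, hearing B's posterior moves A exactly to it. The states, prior, and partitions below are illustrative choices of mine, not anything from the thread:

```python
# Toy model of the one-way update: B has seen strictly better evidence
# (B's partition refines A's), so B's announced posterior is a
# sufficient statistic and A simply adopts it.
from fractions import Fraction as F

states = [0, 1, 2, 3]
prior = {0: F(1, 8), 1: F(3, 8), 2: F(1, 4), 3: F(1, 4)}
event = {0, 2}  # the statement under discussion: "true state is 0 or 2"

a_cells = [{0, 1, 2, 3}]    # A has seen nothing informative
b_cells = [{0, 1}, {2, 3}]  # B knows which half we are in

def cell_of(cells, s):
    return next(c for c in cells if s in c)

def posterior(cells, s):
    cell = cell_of(cells, s)
    return sum(prior[t] for t in cell & event) / sum(prior[t] for t in cell)

true_state = 1
b_report = posterior(b_cells, true_state)  # B announces this number

# A updates on the announcement: condition on the states in A's cell
# where B would have reported exactly this posterior.
a_cell = cell_of(a_cells, true_state)
consistent = {s for s in a_cell if posterior(b_cells, s) == b_report}
a_new = sum(prior[s] for s in consistent & event) / sum(prior[s] for s in consistent)

assert a_new == b_report  # A's updated probability equals B's
```

This is one update and then it stops, as described: no back-and-forth is needed because B's report reveals everything A could use.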
I think I don’t understand what you mean. What’s Aumann agreement? How’s it a useful concept?
It is true that the original theorem relies on common knowledge. In my original post, I phrased it as “a family of theorems” because one can prove various theorems with different assumptions yet similar outcomes. This is a general feature in math, where one shouldn’t get distracted by the boilerplate, because the core principle is often more general than the proof. So the principle you mention, “If someone tells me their assigned probability for something, that turns my probability very close to theirs, if I think they’ve seen nearly strictly better evidence about it than I have,” is something I’d suggest is in the same family as Aumann’s agreement theorem.
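For comparison with the one-shot update, the original common-knowledge version can also be sketched as a dialogue in which both agents repeatedly announce their posteriors and converge, in the spirit of the Geanakoplos–Polemarchakis result. The numbers and partitions below are illustrative assumptions of mine:

```python
# Toy dialogue: agents alternately announce posteriors; each announcement
# publicly rules out states inconsistent with it, until both agree.
from fractions import Fraction as F

states = {1, 2, 3, 4}
prior = {s: F(1, 4) for s in states}
event = {1, 4}
part_a = [{1, 2}, {3, 4}]
part_b = [{1, 2, 3}, {4}]
true_state = 1

def cell(part, s):
    return next(c for c in part if s in c)

def post(part, s, public):
    # Posterior for `event` given the agent's cell and the public record.
    info = cell(part, s) & public
    return sum(prior[t] for t in info & event) / sum(prior[t] for t in info)

public = set(states)  # states consistent with all announcements so far
history = []
for _ in range(10):
    qa = post(part_a, true_state, public)
    public = {s for s in public if post(part_a, s, public) == qa}
    qb = post(part_b, true_state, public)
    public = {s for s in public if post(part_b, s, public) == qb}
    history.append((qa, qb))
    if qa == qb:
        break
```

Here the agents start at 1/2 and 1/3, and the exchange of posteriors alone, with no exchange of underlying evidence, brings them to a common answer.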
The reason for my post is that a lot of people find Aumann’s agreement theorem counterintuitive and feel like its conclusion doesn’t apply to typical real-life disagreements, and therefore assume that there must be some hidden condition that makes it inapplicable in reality. What I think I showed is that Aumann’s agreement theorem defines “disagreement” extremely broadly, and that once you think of it in such broad terms, it does indeed appear to apply quite generally in real life, even under far weaker conditions than the original proof requires.
I think this is useful partly because it suggests a better frame for reasoning about disagreement. For instance, I provide lots of examples of disagreements that rapidly dissipate, so if you wish to know why some disagreements persist, it can be helpful to ask how persistent disagreements differ from the examples I list. For example, many persistent disagreements are about politics, where there are strong incentives for bias, so perhaps some people who make political claims are dishonest. That would suggest that conflict theory (the idea that political disagreement is due to differences in interests) is more accurate than mistake theory (the idea that political disagreement is due to reasoning mistakes), since mistake theory does not seem to predict that disagreement would be specific to politics, though people might find it plausible if they haven’t thought about the general tendency toward agreement.
More generally I have a whole framework of disagreement and beliefs that I intend to write about.
Thank you. I was probably wrong.