You mean “common knowledge” in the technical sense described in the post?
If so, your questions do not appear to make sense.
Why not? They both know they disagree, they both know they both know they disagree, etc… Perhaps Agent 1 doesn't know Agent 2's partitioning, or vice versa. Or perhaps their partitionings are common knowledge, but they, for example, lack the computational ability to actually determine the meet, no?
Wei was hypothesising disagreement due to an incomplete exchange of information. In that case, the parties both know that they disagree, but don't have the time/energy/resources to sort out each other's opinions, so Aumann's idea doesn't really apply.
Aaah, okay. Though presumably at least one would know the probabilities that both assigned (and then said "I disagree"…). That is, it would generally take a bit of a contrived situation for them to know they disagree while neither knows anything about the other's probability beyond that it's different.
(What happens if they successfully exchange probabilities, have unbounded computing power, and have shared common-knowledge priors, but don't know each other's partitioning? Or would the latter automatically be computed from the rest?)
Just one round of comparing probabilities is not normally enough for the parties involved to reach agreement, though.
Well, if they do know each other’s partitions and are computationally unbounded, then they would reach agreement after one step, wouldn’t they? (or did I misunderstand the theorem?)
Or do you mean that if they don't know each other's partitions, iterative exchange of updated probabilities effectively transmits the needed information?
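For what it's worth, the iterated-exchange dynamic (the Geanakoplos–Polemarchakis "we can't disagree forever" setting, where the partitions are common knowledge but each agent's realized cell is private) is easy to simulate. Here is a minimal sketch; the nine-state space, uniform prior, event, and partitions are toy choices of mine for illustration, not anything from the thread:

```python
from fractions import Fraction

STATES = set(range(1, 10))
PRIOR = {w: Fraction(1, 9) for w in STATES}    # uniform common prior
EVENT = {3, 4}                                 # the event whose probability is announced
PART1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]      # agent 1's partition
PART2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]      # agent 2's partition
TRUE_STATE = 1

def cell(partition, w):
    """The partition cell containing state w."""
    return next(c for c in partition if w in c)

def posterior(info):
    """P(EVENT | info) under the common prior."""
    return sum(PRIOR[w] for w in info & EVENT) / sum(PRIOR[w] for w in info)

# Each agent's information function: for every state, the set of states
# the agent would consider possible there. Initially, their own cell.
info1 = {w: cell(PART1, w) for w in STATES}
info2 = {w: cell(PART2, w) for w in STATES}

for rnd in range(1, 10):
    # Agent 1 announces a posterior; agent 2 discards every state at
    # which agent 1 would have announced something different.
    a1 = {w: posterior(info1[w]) for w in STATES}
    info2 = {w: {v for v in info2[w] if a1[v] == a1[w]} for w in STATES}
    # Agent 2 announces; agent 1 refines symmetrically.
    a2 = {w: posterior(info2[w]) for w in STATES}
    info1 = {w: {v for v in info1[w] if a2[v] == a2[w]} for w in STATES}
    print(f"round {rnd}: agent 1 says {a1[TRUE_STATE]}, agent 2 says {a2[TRUE_STATE]}")
    if a1[TRUE_STATE] == a2[TRUE_STATE]:
        break
```

At the true state the announcements go 1/3 vs. 1/2 in the first round and meet at 1/3 in the second, which bears on both points above: one round of comparing posteriors needn't suffice, but each announcement leaks information about the announcer's cell, and the iteration converges to agreement. Note this still assumes the partitions themselves are common knowledge; the case where even those are unknown is a further step beyond the standard setup.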