Aaah, okay. Though presumably at least one would know the probabilities that both assigned (and said “I disagree”...). That is, it would generally take a fairly contrived situation for them to know they disagree, yet for neither to know anything about the other’s probability other than that it’s different.
(What happens if they successfully exchange probabilities, have unbounded computing power, and have common-knowledge shared priors… but don’t know each other’s partitions? Or would the latter automatically be computed from the rest?)
Just one round of comparing probabilities is not normally enough for the parties involved to reach agreement, though.
Well, if they do know each other’s partitions and are computationally unbounded, then they would reach agreement after one step, wouldn’t they? (or did I misunderstand the theorem?)
Or do you mean that if they don’t know each other’s partitions, the iterative exchange of updated probabilities effectively transmits the needed information?
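The iterated exchange being discussed here is essentially the Geanakoplos–Polemarchakis dialogue process: each announcement lets the listener rule out the states in which the speaker would have announced something else. Here is a minimal sketch of that process on a toy example (the four-state setup, names, and partitions are mine for illustration, assuming a uniform common prior), showing that agreement can take more than one round even though each side hears the other’s exact probability:

```python
from fractions import Fraction

# Sketch of the Geanakoplos–Polemarchakis dialogue: two agents with a
# common uniform prior but different information partitions alternately
# announce P(A | their info); each announcement lets the listener discard
# states where the speaker would have announced a different number.
# The toy example below is illustrative, not from the thread.

A = {1, 4}                     # the event under discussion
P1 = [{1, 2}, {3, 4}]          # agent 1's information partition of {1,2,3,4}
P2 = [{1, 2, 3}, {4}]          # agent 2's information partition

def cell(partition, state):
    return next(c for c in partition if state in c)

def posterior(info, event):
    # P(event | info) under a uniform prior, as an exact fraction
    return Fraction(len(info & event), len(info))

def refine(partition, announcement):
    """Common refinement of `partition` with the partition induced by the
    speaker's announcement function (state -> announced probability)."""
    refined = []
    for c in partition:
        groups = {}
        for s in c:
            groups.setdefault(announcement(s), set()).add(s)
        refined.extend(groups.values())
    return refined

def dialogue(w, p1, p2, event, max_rounds=20):
    """Run the alternating-announcement process from true state w;
    return the history of (agent 1's, agent 2's) announcements."""
    history = []
    for _ in range(max_rounds):
        q1 = posterior(cell(p1, w), event)
        p2 = refine(p2, lambda s: posterior(cell(p1, s), event))
        q2 = posterior(cell(p2, w), event)
        history.append((q1, q2))
        if q1 == q2:           # announcements agree: disagreement is gone
            break
        p1 = refine(p1, lambda s: posterior(cell(p2, s), event))
    return history

# From state 1: round one ends in disagreement (1/2 vs 1/3), because
# agent 1's first announcement is uninformative; round two agrees at 1/2.
print(dialogue(1, P1, P2, A))
```

In round one, agent 1 would announce 1/2 in every state, so agent 2 learns nothing and still says 1/3; only after agent 2’s announcement splits agent 1’s cell {3,4} does agent 1’s next announcement become informative, after which both settle on 1/2. This is one way to read the point above: the iterated announcements do transmit partition information, but only gradually.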