There are indeed many cases where Aumann’s agreement theorem seems to apply informally, but in fact doesn’t apply mathematically. Would there be interest in a top-level post about how Aumann’s agreement theorem can be used in real life, centering mostly on learning from disagreements rather than forcing agreements?
I’d be interested, but I’ll probably disagree. I don’t think Aumann’s agreement theorem can ever be used in real life. There are several reasons, but the simplest is that it requires the people involved to share a common prior over possible worlds and to have common knowledge of each other’s information partitions. If I recall correctly, this means they agree in advance on how different observations would restrict the possible worlds they are in. So the proof assumes that these two rational agents would agree on the implications of any shared observation, which is almost equivalent to what it is trying to prove!
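
To make the machinery concrete, here’s a minimal sketch of the posterior-exchange process that drives agreement in the formal model (the Geanakoplos-Polemarchakis “we can’t disagree forever” dynamic). The state space, prior, partitions, and event below are all invented purely for illustration; notice how much has to be hard-wired in before agreement can even get started:

```python
from fractions import Fraction

# Toy version of the posterior-exchange process behind Aumann agreement.
# Everything below -- the state space, the uniform prior, both partitions,
# and the event -- is made up purely for illustration.

states = {1, 2, 3, 4}
prior = {s: Fraction(1, 4) for s in states}  # the COMMON prior the theorem assumes
event = {1, 4}                               # the event whose probability they debate

# Each agent's information partition: which states their observations can
# distinguish. Crucially, both partitions are common knowledge.
partition_a = [{1, 2}, {3, 4}]
partition_b = [{1, 2, 3}, {4}]

def cell(partition, state):
    """The partition cell containing the true state (the agent's raw info)."""
    return next(c for c in partition if state in c)

def posterior(info):
    """P(event | info) under the common prior."""
    return sum(prior[s] for s in info & event) / sum(prior[s] for s in info)

def consistent_states(partition, public, announced):
    """States a listener still considers possible after hearing `announced`:
    the union of the speaker's cells (cut down to public knowledge) whose
    posterior matches the announcement."""
    keep = set()
    for c in partition:
        info = c & public
        if info and posterior(info) == announced:
            keep |= info
    return keep

true_state = 1
public = set(states)  # states not yet ruled out by anyone's announcement
for rnd in range(10):
    # A announces; the announcement itself becomes public information.
    p_a = posterior(cell(partition_a, true_state) & public)
    public &= consistent_states(partition_a, public, p_a)
    # B announces after hearing A.
    p_b = posterior(cell(partition_b, true_state) & public)
    public &= consistent_states(partition_b, public, p_b)
    print(f"round {rnd}: A says {p_a}, B says {p_b}")
    if p_a == p_b:
        break
```

Running this, A opens at 1/2 and B at 1/3, and they converge to 1/2 one round later. But the convergence only works because the prior, both partitions, and what each announcement implies are never themselves in dispute: the `consistent_states` step is exactly the “agree on the implications of any shared observation” assumption.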
I will include this in the post, if and when I can produce one I think is up to scratch.
What if you represented those disagreements over implications as coming from agents having different logical information?