Huh. In that case, I guess I’m wondering why we care.
That is, if we’re just talking about a mechanism whereby two agents can reach agreement efficiently, and we’re OK with them agreeing on conclusions the evidence doesn’t actually support, isn’t it more efficient to, say, flip a coin and adopt A’s estimate on heads and B’s estimate on tails?
I can’t speak for everyone who’s interested, but I think one common theme is this: we see a great deal of persistent disagreement in the world even after agents share their estimates. Aumann’s result says such disagreement is unlikely to be both epistemically rational and honest (although it often purports to be), so what is going on?
Your proposed coin flip is certainly faster than Aumann agreement, but it doesn’t produce results of the same quality. In Aumann agreement, each agent takes the other’s confidence level into account, so information from both sides ends up in the final estimate.
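Here is a toy numerical sketch of that point. It is not the actual Aumann back-and-forth update process; it just contrasts picking one agent's estimate at random against a confidence-weighted combination, assuming (purely for illustration) that both agents observe a common quantity with independent Gaussian noise of known spread. All names and numbers (TRUE_VALUE, SIGMA_A, SIGMA_B) are made up for the example.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0
SIGMA_A, SIGMA_B = 1.0, 3.0   # agent A is more confident (lower noise)

def trial():
    # Each agent's estimate: the true value plus that agent's own noise.
    est_a = random.gauss(TRUE_VALUE, SIGMA_A)
    est_b = random.gauss(TRUE_VALUE, SIGMA_B)

    # Coin flip: adopt one agent's estimate at random, ignoring confidence.
    coin = est_a if random.random() < 0.5 else est_b

    # Confidence-weighted pooling: weight each estimate by its precision
    # (1 / variance); for independent Gaussian estimates this is the
    # minimum-variance way to combine them.
    w_a, w_b = 1 / SIGMA_A**2, 1 / SIGMA_B**2
    pooled = (w_a * est_a + w_b * est_b) / (w_a + w_b)

    return (coin - TRUE_VALUE) ** 2, (pooled - TRUE_VALUE) ** 2

coin_errs, pooled_errs = zip(*(trial() for _ in range(100_000)))
print("mean squared error, coin flip:        ", statistics.mean(coin_errs))
print("mean squared error, weighted pooling: ", statistics.mean(pooled_errs))
```

Run it and the coin flip's mean squared error comes out around 5 (the average of the two agents' error variances), while the weighted combination comes out under 1, because the procedure that attends to confidence discards far less information than the one that doesn't.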