Bayesian agents that knowingly disagree
A minor stub, caveating Aumann’s agreement theorem; put here to reference in future posts, if needed.
Aumann’s agreement theorem states that rational agents with a common prior and common knowledge of each other’s posteriors cannot agree to disagree: if they exchange their estimates, they will swiftly come to an agreement.
However, that doesn’t mean that agents cannot disagree; indeed, they can disagree and know that they disagree. For example, suppose there are a thousand doors: behind 999 of them there are goats, and behind one there is a flying aircraft carrier. The two agents are in separate rooms, and a host will go into each room and execute the following algorithm: they will choose a door at random among the 999 that hide a goat. Then, with probability 99%, they will tell that door number to the agent; with probability 1%, they will instead tell the agent the number of the door with the aircraft carrier.
Each agent will then assign probability 1% to the named door being the aircraft carrier door, and (99/999)% = (11/111)% ≈ 0.099% to each of the 999 other doors; so the most likely door is the one named by the host.
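A minimal sketch of that Bayes computation, in Python (the door numbers in the example calls are arbitrary):

```python
# Posterior over doors for a single agent, given the host named door `named`.
# Setup: 1000 doors, one hides the aircraft carrier; the host names the
# carrier door with probability 1%, otherwise a uniformly random goat door.
N_DOORS = 1000
P_TRUTH = 0.01          # host names the carrier door
P_LIE = 0.99            # host names a random goat door (999 of them)

def posterior(named: int) -> dict[int, float]:
    # Uniform prior of 1/1000 on each door. Likelihood of the host naming
    # door `named`, given the carrier is behind door d:
    #   d == named: 0.01        (the host told the truth)
    #   d != named: 0.99/999    (the host picked `named` among the 999 goats)
    weights = {
        d: (P_TRUTH if d == named else P_LIE / (N_DOORS - 1)) * (1 / N_DOORS)
        for d in range(N_DOORS)
    }
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

post = posterior(named=42)
print(post[42])   # 0.01      -> 1% on the named door
print(post[7])    # ~0.000991 -> (99/999)% on each of the other doors
```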
We can modify the protocol so that the host never names the same door to the two agents: roll a D100; if it comes up 1, tell the truth to the first agent and lie to the second; if it comes up 2, do the opposite; on anything else, tell each agent a different lie (two distinct goat doors). In that case, each agent will have a best guess for the aircraft carrier door, and the certainty that the other agent’s best guess is different.
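A small simulation sketch of the modified protocol, assuming the "different lies" are two distinct goat doors drawn uniformly at random (the post doesn’t pin down how they are chosen):

```python
import random

N_DOORS = 1000

def modified_protocol(carrier: int) -> tuple[int, int]:
    # Returns the doors named to agent 1 and agent 2.
    goats = [d for d in range(N_DOORS) if d != carrier]
    roll = random.randint(1, 100)         # the D100
    if roll == 1:                          # truth to agent 1, lie to agent 2
        return carrier, random.choice(goats)
    if roll == 2:                          # truth to agent 2, lie to agent 1
        return random.choice(goats), carrier
    lie1, lie2 = random.sample(goats, 2)   # two different lies
    return lie1, lie2

# Each agent's best guess is the door named to them (1% beats ~0.099%),
# and the two named doors never coincide:
for _ in range(10_000):
    carrier = random.randrange(N_DOORS)
    named1, named2 = modified_protocol(carrier)
    assert named1 != named2
```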
If the agents exchanged information, they would swiftly converge on the same distribution; but until that happens, they disagree, and know that they disagree.
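For concreteness, here is a sketch (using exact fractions) of the pooled posterior once the agents do share the doors they were named, under the same assumption that the two lies are distinct goat doors drawn uniformly: the two named doors end up tied at 1%, and the agents now hold the same distribution.

```python
from fractions import Fraction

N = 1000                    # doors
P_T = Fraction(1, 100)      # probability the host tells the truth to a given agent

def pooled_posterior(d1: int, d2: int) -> dict[int, Fraction]:
    # Posterior once the agents pool their information: agent 1 was told d1,
    # agent 2 was told d2, with d1 != d2 guaranteed by the modified protocol.
    def likelihood(carrier: int) -> Fraction:
        if carrier == d1:   # roll 1: truth to agent 1, lie d2 to agent 2
            return P_T * Fraction(1, N - 1)
        if carrier == d2:   # roll 2: truth to agent 2, lie d1 to agent 1
            return P_T * Fraction(1, N - 1)
        # roll 3..100: the two distinct lies happen to be exactly (d1, d2)
        return (1 - 2 * P_T) * Fraction(1, (N - 1) * (N - 2))
    weights = {d: likelihood(d) for d in range(N)}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

post = pooled_posterior(42, 7)
print(post[42], post[7])    # both 1/100: the two named doors are tied at 1%
print(post[0])              # 49/49900 ≈ 0.098% on each of the other doors
```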