The conditions [1] are sufficient for the conclusion [2] (as shown by [4]) but are not all necessary.
Honesty is not required.
If this is surprising, then it might be useful to consider that ‘having common priors’ is kind of like being able to read people’s minds—whatever they are thinking will be within the space of possibilities you consider. Things such rational agents say to each other may be surprising, but never un-conceived of; never inconceivable. And with each (new) piece of information they acquire they come closer to the truth—whether the words they hear are “true” or “false” does not matter; only what evidence ‘hearing those words’ constitutes. Under such circumstances lies may be useless. Not because rational agents are incapable of lying, but because they possess impossible computational abilities that ensure convergence of shared beliefs* (in their minds) after they meet—a state of affairs which does not tell you anything about their words.
Events may proceed in a fashion such that a third observer (one that isn’t a “rational agent”), such as you or I, might say “they agreed to disagree”. Aumann’s agreement theorem doesn’t tell us that this will never happen, only that such an observer would be wrong about what actually happened to the agents’ (internal) beliefs, however much their professed (or performed) beliefs suggest otherwise.
One consequence of this is how such a conversation might go—the rational agents might simply state the probabilities they give for a proposition, rather than discussing the evidence, because they can infer the evidence from each other’s responses, since they already know all the evidence that ‘might be’.
*Which need not be at or between where they started. Two Bayesians with different evidence that has led each of them to believe something is very unlikely may, after meeting, conclude that it is very likely—if that is the assessment they would give had they had both pieces of information to begin with.
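To make this concrete, here is a minimal Python sketch of the posterior-exchange dialogue described above (the procedure studied by Geanakoplos and Polemarchakis): the agents announce only their current probabilities for an event, work out what evidence could have produced each announcement, and repeat until the announcements agree. The state space, prior, and likelihood numbers below are my own hypothetical illustration, not anything from the original discussion.

```python
from fractions import Fraction
from itertools import product

def posterior(prior, cell, event):
    """P(event | cell) under the common prior."""
    p_cell = sum(prior[w] for w in cell)
    return sum(prior[w] for w in cell if w in event) / p_cell

def refine(partition, announcement):
    """Intersect each cell with the level set of the other agent's announcement."""
    return {w: frozenset(v for v in partition[w] if announcement[v] == announcement[w])
            for w in partition}

def dialogue(prior, part1, part2, event, true_state, max_rounds=10):
    """Agents take turns announcing P(event | their information) and updating on
    what the other's announcement reveals, until the announcements coincide."""
    transcript = []
    for _ in range(max_rounds):
        ann1 = {w: posterior(prior, part1[w], event) for w in prior}
        part2 = refine(part2, ann1)          # agent 2 updates on agent 1's announcement
        ann2 = {w: posterior(prior, part2[w], event) for w in prior}
        part1 = refine(part1, ann2)          # agent 1 updates on agent 2's announcement
        transcript.append((ann1[true_state], ann2[true_state]))
        if ann1[true_state] == ann2[true_state]:
            break
    return transcript

# Hypothetical setup: proposition A has prior odds 1:10; each agent privately
# observes one of two conditionally independent signals, each with likelihood
# ratio 5 in favour of A. States are triples (e1, e2, a).
states = list(product([0, 1], repeat=3))
def weight(e1, e2, a):
    p_a = Fraction(1, 11) if a else Fraction(10, 11)
    def p_e(e):
        return Fraction(5, 6) if e == a else Fraction(1, 6)
    return p_a * p_e(e1) * p_e(e2)

prior = {s: weight(*s) for s in states}
event = {s for s in states if s[2] == 1}                                    # "A is true"
part1 = {s: frozenset(t for t in states if t[0] == s[0]) for s in states}  # sees e1 only
part2 = {s: frozenset(t for t in states if t[1] == s[1]) for s in states}  # sees e2 only

true_state = (1, 1, 1)
print(posterior(prior, part1[true_state], event),   # 1/3: agent 1, before talking
      posterior(prior, part2[true_state], event))   # 1/3: agent 2, before talking
print(dialogue(prior, part1, part2, event, true_state))
# [(Fraction(1, 3), Fraction(5, 7)), (Fraction(5, 7), Fraction(5, 7))]
```

In this run the agents never describe their evidence; they only announce probabilities. Both start at 1/3, and after one exchange they agree at 5/7—higher than either starting estimate, i.e. not at or between where they started, as the footnote says.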
If an agent is not honest, ey can decide to say only things that provide no evidence regarding the question in hand to the other agent. In this case convergence is not guaranteed. For example, Alice assigns probability 35% to “will it rain tomorrow” but, when asked, says the probability is 21% regardless of what the actual evidence is. Bob assigns probability 89% to “will it rain tomorrow” but, when asked, says the probability is 42% regardless of what the actual evidence is. Alice knows Bob always answers 42%. Bob knows Alice always answers 21%. If they talk to each other, their probabilities will not converge (they won’t change at all).
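As a toy check, using the Alice/Bob numbers above (the code is only an illustrative sketch): a reply that is the same regardless of the evidence has likelihood ratio 1, so Bayes’ rule leaves the listener’s probability exactly where it was.

```python
from fractions import Fraction

# Bob answers "42%" no matter what his evidence is, so the probability of
# hearing that answer is the same whether or not it will rain.
p_rain = Fraction(35, 100)            # Alice's credence in rain before asking
p_answer_if_rain = Fraction(1)        # P(Bob says "42%" | rain)
p_answer_if_dry = Fraction(1)         # P(Bob says "42%" | no rain)

p_rain_after = (p_rain * p_answer_if_rain) / (
    p_rain * p_answer_if_rain + (1 - p_rain) * p_answer_if_dry)
print(p_rain_after)  # 7/20, i.e. still 35%: Alice learns nothing and does not move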
Yes, it can luckily happen that the lies still contain enough information for them to converge, but I’m not sure why you seem to think that is an important or natural situation.
I don’t think the ‘rational agents’ in question are a good model for people, or that the theoretical situation is anything close to natural. Aside from the myriad ways they are different*, the result of ‘rational people’** interacting seems like an empirical question. Perhaps a theory that models people better will come up with the same results—and offer suggestions for how people can improve.
The addition of the word “honest” seems like it comes from an awareness of how the model is flawed. I pointed out how this differs from the model, because the model is somewhat unintuitive, and makes rather large assumptions—and it’s not clear how well the result holds up as the gap between those assumptions and reality is closed.
Yes, asking for a theory that enables constructing, or approximating, the agents described therein would be asking for a lot, but doing so might clearly establish how the theory relates to reality, i.e. to people interacting with each other.
*Like having the ability to compute uncomputable things instantly (with no mistakes).
**Who are computationally bounded, etc.
The addition of the word “honest” doesn’t come from an awareness of how the model is flawed. It is one of the explicit assumptions in the model. So I’m still not sure what point you are going for here.
I think that applying Aumann’s theorem to people is mostly interesting in the prescriptive rather than descriptive sense. That is, the theorem tells us that our ability to converge can serve as a test of our rationality, to the extent that we are honest and share the same prior, and all of this is common knowledge. (This last assumption might be the hardest to make sense of. Hanson tried to justify it but IMO not quite convincingly.) Btw, you don’t need to compute uncomputable things, much less instantly. Scott Aaronson derived a version of the theorem with explicit computational complexity and query complexity bounds that don’t seem prohibitive.
Given all the difficulties, I am not sure how to apply it in the real world, or whether that’s even possible. I do think it’s interesting to think about. But, to the extent it is possible, it definitely requires honesty.