If an agent is not honest, ey can decide to say only things that provide no evidence regarding the question at hand to the other agent. In this case convergence is not guaranteed. For example, Alice assigns probability 35% to “will it rain tomorrow” but, when asked, says the probability is 21% regardless of what the actual evidence is. Bob assigns probability 89% to “will it rain tomorrow” but, when asked, says the probability is 42% regardless of what the actual evidence is. Alice knows Bob always answers 42%. Bob knows Alice always answers 21%. If they talk to each other, their probabilities will not converge (they won’t change at all).
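To make the “no evidence” point concrete: a constant message is statistically independent of the world, so its likelihood ratio is 1 and the Bayesian update is a no-op. Here is a minimal sketch, assuming a binary event, a shared 50% prior, and conditionally independent private signals (the setup and names are mine, purely for illustration):

```python
# Minimal sketch: binary event ("rain"), shared 50% prior,
# conditionally independent private signals, agents who report
# a single posterior. All names here are illustrative.

def bayes_update(p, likelihood_ratio):
    """Posterior from prior p and the likelihood ratio of a received message."""
    odds = (p / (1 - p)) * likelihood_ratio
    return odds / (1 + odds)

PRIOR = 0.5

# Honest Bob: announcing his true posterior (89%) reveals the likelihood
# ratio of his private evidence, which Alice can combine with her own.
bob_lr = (0.89 / 0.11) / (PRIOR / (1 - PRIOR))
print(bayes_update(0.35, bob_lr))  # ~0.81: Alice moves toward Bob

# Lying Bob: "42%" is emitted whether or not it rains, so
# P(message | rain) = P(message | no rain), the likelihood ratio is 1,
# and Bayes' rule leaves Alice exactly where she started.
print(bayes_update(0.35, 1.0))     # 0.35: no movement, hence no convergence
```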
Yes, it can luckily happen that the lies still contain enough information for them to converge, but I’m not sure why you seem to think this is an important or natural situation.
I don’t think the ‘rational agents’ in question are a good model for people, or that the theoretical situation is anything close to natural. Aside from the myriad ways they are different*, the result of ‘rational people’** interacting seems like an empirical question. Perhaps a theory that models people better will come up with the same results—and offer suggestions for how people can improve.
The addition of the word “honest” seems like it comes from an awareness of how the model is flawed. I pointed out how this differs from the model because the model is somewhat unintuitive and makes rather large assumptions, and it’s not clear how well the result holds up as the gap between those assumptions and reality is closed.
Yes, asking for a theory that enables constructing, or at least approximating, the agents it describes would be asking for a lot, but such a theory might clearly establish how the theory relates to reality, i.e. to people interacting with each other.
*Like having the ability to compute uncomputable things instantly (with no mistakes).
**Who are computationally bounded, etc.
The addition of the word “honest” doesn’t come from an awareness of how the model is flawed. It is one of the explicit assumptions in the model. So, I’m still not sure what point you are going for here.
I think that applying Aumann’s theorem to people is mostly interesting in the prescriptive rather than descriptive sense. That is, the theorem tells us that our ability to converge can serve as a test of our rationality, to the extent that we are honest and share the same prior, and all of this is common knowledge. (This last assumption might be the hardest to make sense of. Hanson tried to justify it but IMO not quite convincingly.) Btw, you don’t need to compute uncomputable things, much less instantly. Scott Aaronson derived a version of the theorem with explicit computational complexity and query complexity bounds that don’t seem prohibitive.
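To gesture at the shape of Aaronson’s result (“The Complexity of Agreement”), quoting from memory, so treat this as approximate rather than authoritative: under his discretized protocol, after exchanging $O(1/(\delta \varepsilon^2))$ messages the two agents’ posteriors $p_A$ and $p_B$ satisfy

$$\Pr\big[\,|p_A - p_B| \le \varepsilon\,\big] \ge 1 - \delta,$$

with a bound that depends only on the accuracy parameters $\varepsilon$ and $\delta$, not on the amount of private information involved.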
Given all the difficulties, I am not sure how to apply it in the real world, or whether that’s even possible. I do think it’s interesting to think about. But, to the extent it is possible, it definitely requires honesty.