Yes, I looked at that paper, and also Agreeing To Disagree: A Survey by Giacomo Bonanno and Klaus Nehring.
How about Scott Aaronson:
http://www.scottaaronson.com/papers/agree-econ.pdf
He shows that you do not have to exchange very much information to come to agreement. Now, maybe this does not address the question of the potential intractability of the deductions needed to reach agreement (the Bayesian-wannabe papers may address this), but I think it shows that it is not necessary to exchange all relevant information.
The bottom line for me is the flavor of the Aumann theorem: that there must be a reason why the other person is being so stubborn as not to be convinced by your own tenacity. I think this insight is the key to the whole conclusion and it is totally overlooked by most disagreers.
I haven’t read the whole paper yet, but here’s one quote from it (page 5):

The dependence, alas, is exponential in 1 / (δ^3 ε^6), so our simulation procedure is still not practical. However, we expect that both the procedure and its analysis can be considerably improved.
Scott is talking about the computational complexity of his agreement protocol here. Even if the complexity can be improved to something considered practical from a computer-science perspective, it will still likely be impractical for human beings, most of whom can’t even multiply three-digit numbers in their heads.
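To make the flavor of such a protocol concrete, here is a minimal sketch of an Aumann-style dialogue in the spirit of Geanakoplos and Polemarchakis: two agents with a common prior alternately announce their posterior expectations, update on what each announcement reveals, and end up agreeing after exchanging only a few numbers rather than their full private information. The state space, partitions, and payoffs below are invented for the illustration; this is not Aaronson's discretized protocol.

```python
# Toy Aumann-style dialogue (in the spirit of Geanakoplos & Polemarchakis 1982).
# All states, partitions and values are made up for this example.
from fractions import Fraction

STATES = list(range(9))                      # finite state space
PRIOR = {w: Fraction(1, 9) for w in STATES}  # common uniform prior

# The quantity both agents are estimating (its value in each state).
X = {0: 1, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 0, 7: 1, 8: 0}

# Private information: each agent only learns which cell of its partition occurred.
PARTITION_A = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
PARTITION_B = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]


def cell(partition, w):
    """Return the partition cell containing state w."""
    return next(c for c in partition if w in c)


def expectation(info):
    """Posterior expectation of X given that the true state lies in `info`."""
    total = sum(PRIOR[w] for w in info)
    return sum(PRIOR[w] * X[w] for w in info) / total


def dialogue(true_state, max_rounds=10):
    # info[i][w] = what agent i would consider possible if the true state were w.
    # Tracking this for every w is what lets the listener interpret an announcement.
    info = [{w: cell(PARTITION_A, w) for w in STATES},
            {w: cell(PARTITION_B, w) for w in STATES}]
    last = [None, None]
    speaker = 0
    for round_no in range(1, max_rounds + 1):
        announce = {w: expectation(info[speaker][w]) for w in STATES}
        last[speaker] = announce[true_state]
        print(f"round {round_no}: agent {'AB'[speaker]} announces {last[speaker]}")
        listener = 1 - speaker
        # The listener keeps only the states in which the speaker would have
        # announced the same value it actually heard.
        for w in STATES:
            heard = announce[w]
            info[listener][w] = {v for v in info[listener][w] if announce[v] == heard}
        if last[0] is not None and last[0] == last[1]:
            print("agreement reached at", last[0])
            return
        speaker = listener
    print("no agreement within", max_rounds, "rounds")


if __name__ == "__main__":
    dialogue(true_state=0)   # prints 1/3, 1/2, then agreement at 1/2
```

On the quoted complexity bound: even taking the exponent at face value, δ = ε = 0.1 already gives 1/(δ^3 ε^6) = 10^9, and anything exponential in that is far beyond practical, which is why the caveat about improving the procedure matters.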
To quote from the abstract of Scott Aaronson’s paper:
“A celebrated 1976 theorem of Aumann asserts that honest, rational Bayesian agents with common priors will never ‘agree to disagree’: if their opinions about any topic are common knowledge, then those opinions must be equal.”
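For readers who want the formal content behind that sentence, here is a hedged paraphrase of the standard statement; the notation is mine, not the paper's: both agents share a common prior, each agent's private information is the cell of its own partition that contains the true state, and the "opinions" are posterior probabilities of some fixed event.

```latex
% A paraphrase of Aumann (1976); the notation is illustrative, not quoted from any paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Agents $1$ and $2$ share a common prior $P$. Agent $i$'s private information at the
true state $\omega$ is the cell $\mathcal{P}_i(\omega)$ of its partition, and its
``opinion'' is its posterior for a fixed event $A$:
\[
  q_i = P\bigl(A \mid \mathcal{P}_i(\omega)\bigr), \qquad i = 1, 2.
\]
Aumann's theorem: if the pair $(q_1, q_2)$ is common knowledge at $\omega$, then
\[
  q_1 = q_2 .
\]
\end{document}
```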
Even “honest, rational Bayesian agents” seems too weak an assumption. Goal-directed agents who are forced to signal their opinions to others can benefit from voluntarily deceiving themselves in order to effectively deceive others. Their self-deception makes their opinions more credible—since they honestly believe them.
If an agent honestly believes what they are saying, it is difficult to accuse them of dishonesty—and such an agent’s understanding of Bayesian probability theory may be immaculate.
Such agents are not constrained to agree by Aumann’s disagreement theorem.
Goal-directed agents who are forced to signal their opinions to others can benefit from voluntarily deceiving themselves in order to effectively deceive others. Their self-deception makes their opinions more credible—since they honestly believe them.

This seems to reflect human cognitive architecture more than a general fact about optimal agents or even most/all goal-directed agents. That humans are not optimal is nothing new around here, nor is it news that the agreement theorems have little relevance to real human arguments. (I can’t be the only one to read the papers and think, ‘hell, I don’t trust myself as far as even the weakened models, much less Creationists and whatnot’, and have little use for them.)
The reason is often that you regard your own perceptions and conclusions as trustworthy and in accordance with your own aims—whereas you don’t have a very good reason to believe the other person is operating in your interests (rather than selfishly trying to manipulate you to serve their own interests). They may reason in much the same way.
Probably much the same circuitry continues to operate even in those very rare cases where two truth-seekers meet, and convince each other of their sincerity.