To quote from the abstract of Scott Aaronson’s paper:
“A celebrated 1976 theorem of Aumann asserts that honest, rational Bayesian agents with common priors will never ‘agree to disagree’: if their opinions about any topic are common knowledge, then those opinions must be equal.”
Even “honest, rational Bayesian agents” seems too weak a condition to guarantee agreement. Goal-directed agents who are forced to signal their opinions to others can benefit from voluntarily deceiving themselves in order to deceive others more effectively. Their self-deception makes their opinions more credible, since they honestly believe them.
If an agent honestly believes what they are saying, it is difficult to accuse them of dishonesty—and such an agent’s understanding of Bayesian probability theory may be immaculate.
Such agents are not constrained to agree by Aumann’s agreement theorem.
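To make the agreement dynamic concrete, here is a minimal sketch in Python of the posterior-exchange process behind the theorem. It follows the standard Geanakoplos-Polemarchakis style of example rather than anything from Aaronson’s paper, and the state space, event, and partitions are my own illustrative choices. Two agents share a uniform prior over nine states but observe different information partitions; they take turns announcing their posterior for an event, each public announcement lets the other agent rule out states, and after a few rounds the announced posteriors are forced to coincide. The squeeze only goes through if each announcement really is the agent’s Bayesian posterior given everything said so far, which is exactly the loophole a self-deceiving agent exploits.

```python
from fractions import Fraction

# Toy illustration of Aumann-style agreement dynamics
# (a Geanakoplos-Polemarchakis exchange; the states, event, and
# partitions below are made up for the example).

STATES = list(range(1, 10))             # common uniform prior over states 1..9
EVENT = {1, 5, 9}                       # the event whose probability both agents report

P1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]  # agent 1's information partition
P2 = [{1, 2, 3, 4}, {5, 6, 7}, {8, 9}]  # agent 2's information partition

def cell(partition, state):
    """The cell of the partition containing the given state."""
    return next(c for c in partition if state in c)

def posterior(info):
    """P(EVENT | info) under the uniform prior."""
    return Fraction(len(EVENT & info), len(info))

def refine(partition, announcement):
    """Split every cell by the level sets of the other agent's announced posterior."""
    refined = []
    for c in partition:
        for v in {announcement[s] for s in c}:
            refined.append({s for s in c if announcement[s] == v})
    return refined

def exchange(true_state, p1, p2, max_rounds=10):
    for _ in range(max_rounds):
        # What each agent would announce in every possible state, given current information.
        a1 = {s: posterior(cell(p1, s)) for s in STATES}
        a2 = {s: posterior(cell(p2, s)) for s in STATES}
        print(f"agent 1 says {a1[true_state]}, agent 2 says {a2[true_state]}")
        if a1[true_state] == a2[true_state]:
            break
        # Each agent updates on the other's public announcement.
        p1, p2 = refine(p1, a2), refine(p2, a1)

exchange(true_state=1, p1=P1, p2=P2)
```

With these partitions and true state 1, the printed posteriors run 1/3 versus 1/4 for two rounds before both settle at 1/3.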
This seems to reflect human cognitive architecture more than a general fact about optimal agents or even most/all goal-directed agents. That humans are not optimal is nothing new around here, nor that the agreement theorems have little relevance to real human arguments. (I can’t be the only one to read the papers and think, ‘hell, I don’t trust myself as far as even the weakened models, much less Creationists and whatnot’, and have little use for them.)