Let me break down these “justifications” a little:
1. Clearly, the Other’s object-level arguments are flawed; no amount of trust that I can have for another person will make me believe that rocks fall upward.
This points to the fact that the Other is irrational. It is perfectly reasonable for two people to disagree when at least one of them is irrational. (It might be enough to argue that at least one of the two of you is irrational, since it is possible that your own reasoning apparatus is badly broken.)
2. Clearly, the Other is not taking my arguments into account; there’s an obvious asymmetry in how well I understand them and have integrated their evidence, versus how much they understand me and have integrated mine.
This would not actually explain the disagreement. Even an Other who refused to study your arguments (say, he didn’t have time), but who nevertheless maintains his position, should be evidence that he has good reason for his views. Otherwise, why would your own greater understanding of the arguments on both sides (not to mention your own persistence in your position) not persuade him? Assuming he is rational (and thinks you are, etc.), the only possible explanation is that he has good reasons, something you are not seeing. And that should persuade you to start changing your mind.
3. Clearly, the Other is completely biased in how much they trust themselves over others, versus how I humbly and evenhandedly discount my own beliefs alongside theirs.
Again, this is basically evidence that he is irrational, and reduces to case 1.
The Aumann results require that the two of you are honest, truth-seeking Bayesian wannabes, to first approximation, and that you see each other that way. The key idea is not whether the two of you can understand each other’s arguments, but that refusal to change position sends a very strong signal about the strength of the evidence.
If the two of you are wrapping things up by preparing to agree to disagree, you have to bite the bullet and say that the other is being irrational, or is lying, or is not truth seeking. There is no respectful way to agree to disagree. You must either be extremely rude, or reach agreement.
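To make the “very strong signal” point concrete, here is a minimal sketch (my own illustration, not anything from Aumann’s paper) of the iterated-announcement process in the style of Geanakoplos and Polemarchakis: two agents share a common prior over a small state space, hold different private information, and repeatedly announce their posterior for some disputed event. The particular state space, partitions, and event below are made up for illustration.

```python
from fractions import Fraction

Omega = set(range(1, 10))                    # states 1..9
prior = {w: Fraction(1, 9) for w in Omega}   # common uniform prior
A = {3, 4}                                   # the disputed event

# Private information: each agent only learns which cell of their own
# partition the true state falls in.
part1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
part2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]

def posterior(S):
    """P(A | true state is somewhere in S), under the common prior."""
    return sum(prior[w] for w in S & A) / sum(prior[w] for w in S)

def cell(partition, w):
    return next(c for c in partition if w in c)

true_state = 2

# info_i[w] = the states agent i would consider possible if w were true.
info1 = {w: cell(part1, w) for w in Omega}
info2 = {w: cell(part2, w) for w in Omega}

for t in range(10):
    f1 = {w: posterior(info1[w]) for w in Omega}   # agent 1's announcement rule
    f2 = {w: posterior(info2[w]) for w in Omega}   # agent 2's announcement rule
    print(f"round {t}: agent 1 says {f1[true_state]}, agent 2 says {f2[true_state]}")
    if f1[true_state] == f2[true_state]:
        break
    # Each agent discards every state at which the other agent would have
    # announced something different from what was just heard.
    info1 = {w: info1[w] & {v for v in Omega if f2[v] == f2[w]} for w in Omega}
    info2 = {w: info2[w] & {v for v in Omega if f1[v] == f1[w]} for w in Omega}
```

With these made-up numbers the agents repeat 1/3 and 1/2 for a round, and that very repetition carries information: by round 2 agent 2 has moved to 1/3 and the announcements agree. Persisting in a position is itself evidence, which is what the agreement results lean on.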
Hal, is this still your position, a year later? If so, I’d like to argue against it. Robin Hanson wrote in http://hanson.gmu.edu/disagree.pdf (page 9):
Since Bayesians with a common prior cannot agree to disagree, to what can we attribute persistent human disagreement? We can generalize the concept of a Bayesian to that of a Bayesian wannabe, who makes computational errors while attempting to be Bayesian. Agreements to disagree can then arise from pure differences in priors, or from pure differences in computation, but it is not clear how rational these disagreements are. Disagreements due to differing information seem more rational, but for Bayesians disagreements cannot arise due to differing information alone.
Robin argues in another paper that differences in priors really are irrational. I presume that he believes that differences in computation are also irrational, although I don’t know if he made a detailed case for it somewhere.
Suppose we grant that these differences are irrational. It seems to me that disagreements can still be “reasonable”, if we don’t know how to resolve these differences, even in principle. Because we are products of evolution, we probably have random differences in priors and computation, and since at this point we don’t seem to know how to resolve these differences, many disagreements may be both honest and reasonable. Therefore, there is no need to conclude that the other disagreer must be irrational (as an individual), or is lying, or is not truth seeking.
Assuming that the above is correct, I think the role of a debate between two Bayesian wannabes should be to pinpoint the exact differences in priors and computation that caused the disagreement, not to reach immediate agreement. Once those differences are identified, we can try to find or invent new tools for resolving them, perhaps tools specific to the difference at hand.
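As a toy version of that pinpointing step (my own sketch, with made-up numbers): writing Bayes’ rule in odds form, posterior odds = prior odds × likelihood ratio, each side can report both factors separately, which shows at a glance whether the gap sits in the priors or in how the shared evidence was computed.

```python
def odds(p):
    """Convert a probability into odds."""
    return p / (1 - p)

def factor(prior_p, posterior_p):
    """Split a reported belief change into (prior odds, implied likelihood
    ratio), using posterior_odds = prior_odds * likelihood_ratio."""
    prior_odds = odds(prior_p)
    return prior_odds, odds(posterior_p) / prior_odds

# Hypothetical numbers: two people end up at 0.80 vs 0.40 on the same claim.
alice = factor(prior_p=0.50, posterior_p=0.80)   # -> (1.0, 4.0)
bob   = factor(prior_p=0.50, posterior_p=0.40)   # -> (1.0, ~0.67)

# Identical prior odds, so this particular disagreement is located entirely
# in the implied likelihood ratios, i.e. in how the two of them computed
# the import of the shared evidence.
print(alice, bob)
```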
My Bayesian wannabe paper is an argument against disagreement based on computation differences. You can “resolve” a disagreement by moving your opinion in the direction of the other opinion. If failing to do this reduces your average accuracy, I feel I can call that failure “irrational”.
It would be clearer if you said “epistemically irrational”. Instrumental rationality can be consistent with sticking to your guns—especially if your aim involves appearing to be exceptionally confident in your own views.
You can “resolve” a disagreement by moving your opinion in the direction of the other opinion. If failing to do this reduces your average accuracy, I feel I can call that failure “irrational”.
Do you have a suggestion for how much one should move one’s opinion in the direction of the other opinion, and an argument that doing so would improve average accuracy?
If you don’t have time for that, can you just explain what you mean by “average”? Average over what, using what distribution, and according to whose computation?
How confident are you? How confident do you think your opponent is? Use those estimates to derive the distance you move.
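One standard way to cash that out (my own sketch, not necessarily what Robin had in mind): treat each opinion as the truth plus independent noise whose spread reflects the holder’s confidence, and move toward the other’s estimate by the inverse-variance weight. “Average” here means average squared error over repeated draws of the noise, under this assumed model. The quick Monte Carlo below shows that refusing to move does worse than any of the moves tested, and the confidence-derived weight does best.

```python
import random

random.seed(0)

TRUTH = 0.0
MY_SD, YOUR_SD = 1.0, 1.5       # made-up confidences: I am somewhat more
                                # confident (less noisy) than you are

def pooled(mine, yours, w):
    """Move a fraction w of the way from my estimate toward yours."""
    return (1 - w) * mine + w * yours

# Inverse-variance weighting: the weight to put on the other opinion is
# my_variance / (my_variance + your_variance).
w_star = MY_SD**2 / (MY_SD**2 + YOUR_SD**2)

def avg_sq_error(w, trials=200_000):
    """Average squared error of the pooled estimate over many noise draws."""
    total = 0.0
    for _ in range(trials):
        mine = random.gauss(TRUTH, MY_SD)
        yours = random.gauss(TRUTH, YOUR_SD)
        total += (pooled(mine, yours, w) - TRUTH) ** 2
    return total / trials

for w in (0.0, 0.25, w_star, 0.5):      # 0.0 = refuse to move at all
    print(f"move fraction {w:.2f}: average squared error {avg_sq_error(w):.3f}")
```

Under these assumptions the answer to “how much should one move” falls straight out of the two confidence estimates: the less confident you are relative to the other person, the larger the fraction you move.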