If the two of you are wrapping things up by preparing to agree to disagree, you have to bite the bullet and say that the other is being irrational, or is lying, or is not truth seeking. There is no respectful way to agree to disagree. You must either be extremely rude, or reach agreement.
Hal, is this still your position, a year later? If so, I’d like to argue against it. Robin Hanson wrote in http://hanson.gmu.edu/disagree.pdf (page 9):
Since Bayesians with a common prior cannot agree to disagree, to what can we attribute persistent human disagreement? We can generalize the concept of a Bayesian to that of a Bayesian wannabe, who makes computational errors while attempting to be Bayesian. Agreements to disagree can then arise from pure differences in priors, or from pure differences in computation, but it is not clear how rational these disagreements are. Disagreements due to differing information seem more rational, but for Bayesians disagreements cannot arise due to differing information alone.
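Here is a minimal numeric sketch (my own illustration, not taken from Hanson's paper) of the setting the quote describes: with a common prior, two agents who condition on the same pooled evidence must compute identical posteriors, so any gap that remains has to trace back to differing priors or to a computational error. The coin example and all numbers below are invented for illustration.

```python
from fractions import Fraction

# Two hypotheses about a coin's bias (hypothetical example).
BIASES = {"bias=0.3": Fraction(3, 10), "bias=0.7": Fraction(7, 10)}

def posterior(prior, heads, tails):
    """Exact Bayesian posterior over the two hypotheses after seeing heads/tails."""
    unnorm = {h: prior[h] * b**heads * (1 - b)**tails
              for h, b in BIASES.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

common_prior = {"bias=0.3": Fraction(1, 2), "bias=0.7": Fraction(1, 2)}

# Alice saw 3 heads and 1 tail; Bob saw 2 heads and 2 tails.  Once they
# pool their observations, the common prior forces identical posteriors:
pooled = posterior(common_prior, heads=3 + 2, tails=1 + 2)
print(pooled)  # both agents compute exactly this

# With a *different* prior, the same pooled evidence no longer yields the
# same answer -- the residual disagreement traces back to the prior.
bobs_prior = {"bias=0.3": Fraction(9, 10), "bias=0.7": Fraction(1, 10)}
print(posterior(bobs_prior, heads=5, tails=3))
```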
Robin argues in another paper that differences in priors really are irrational. I presume that he believes that differences in computation are also irrational, although I don’t know if he made a detailed case for it somewhere.
Suppose we grant that these differences are irrational. It seems to me that disagreements can still be “reasonable” if we don’t know how to resolve these differences, even in principle. Because we are products of evolution, we probably have random differences in priors and computation, and since at this point we don’t seem to know how to resolve these differences, many disagreements may be both honest and reasonable. Therefore, there is no need to conclude that the other disagreer must be irrational (as an individual), or is lying, or is not truth seeking.
Assuming that the above is correct, I think the role of a debate between two Bayesian wannabes should be to pinpoint the exact differences in priors and computation that caused the disagreement, not to reach immediate agreement. Once those differences are identified, we can try to find or invent new tools for resolving them, perhaps tools specific to the difference at hand.
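To make “pinpoint the exact differences” a bit more concrete, here is a sketch of one possible diagnostic (my own construction, not anything proposed in the thread): given the shared evidence, each party’s stated prior, and each party’s reported posterior, check whether the report matches an exact Bayesian update from that prior. A mismatch localizes a computation difference; reports that both check out but still disagree localize the dispute in the priors. The names and numbers are hypothetical.

```python
from fractions import Fraction

# Two hypotheses about a coin's bias (hypothetical example).
BIASES = {"bias=0.3": Fraction(3, 10), "bias=0.7": Fraction(7, 10)}

def exact_posterior(prior, heads, tails):
    """Exact Bayesian update over the two hypotheses."""
    unnorm = {h: prior[h] * b**heads * (1 - b)**tails for h, b in BIASES.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def diagnose(name, prior, reported, heads, tails):
    """Does this party's reported posterior follow from their own prior?"""
    correct = exact_posterior(prior, heads, tails)
    if reported == correct:
        print(f"{name}: update is correct; any disagreement lies in the priors")
    else:
        print(f"{name}: reported P(bias=0.7) = {reported['bias=0.7']}, "
              f"but their own prior implies {correct['bias=0.7']} (a computation difference)")

evidence = dict(heads=5, tails=3)
alice_prior = {"bias=0.3": Fraction(1, 2), "bias=0.7": Fraction(1, 2)}
bob_prior = {"bias=0.3": Fraction(9, 10), "bias=0.7": Fraction(1, 10)}

# Alice reports a correct update; Bob (in this made-up example) reports a
# posterior that does not follow from his own prior.
diagnose("Alice", alice_prior, exact_posterior(alice_prior, **evidence), **evidence)
diagnose("Bob", bob_prior, {"bias=0.3": Fraction(1, 2), "bias=0.7": Fraction(1, 2)}, **evidence)
```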
My Bayesian wannabe paper is an argument against disagreement based on computation differences. You can “resolve” a disagreement by moving your opinion in the direction of the other opinion. If failing to do this reduces your average accuracy, I feel I can call that failure “irrational”.
It would be clearer if you said “epistemically irrational”. Instrumental rationality can be consistent with sticking to your guns—especially if your aim involves appearing to be exceptionally confident in your own views.
Do you have a suggestion for how much one should move one’s opinion in the direction of the other opinion, and an argument that doing so would improve average accuracy?
If you don’t have time for that, can you just explain what you mean by “average”? Average over what, using what distribution, and according to whose computation?
How confident are you? How confident do you think your opponent is? Use those estimates to derive the distance you move.
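For what it’s worth, here is a sketch of one standard way to turn those two confidence estimates into a concrete distance (inverse-variance weighting; my own illustration, not necessarily what anyone in this thread has in mind). Treating each confidence as a precision, you move toward the other opinion by their share of the total precision, and a quick simulation under one simple noise model shows why this beats sticking with your own estimate on average squared error. All parameters below are made up.

```python
import random

def pooled_estimate(mine, my_conf, theirs, their_conf):
    """Move toward the other opinion by their share of total confidence.

    Treating confidences as precisions (1/variance), this is the
    inverse-variance-weighted average of the two estimates.
    """
    weight = their_conf / (my_conf + their_conf)
    return mine + weight * (theirs - mine)

# Monte Carlo check of the "average accuracy" claim under one simple model
# (an assumption, not the only possible one): both parties see the truth
# plus independent Gaussian noise.
random.seed(0)
truth = 10.0
my_sd, their_sd = 2.0, 1.0                     # the other party is better informed
my_conf, their_conf = 1 / my_sd**2, 1 / their_sd**2

stubborn_err = pooled_err = 0.0
trials = 100_000
for _ in range(trials):
    mine = truth + random.gauss(0, my_sd)
    theirs = truth + random.gauss(0, their_sd)
    stubborn_err += (mine - truth) ** 2
    pooled_err += (pooled_estimate(mine, my_conf, theirs, their_conf) - truth) ** 2

print("mean squared error, sticking to my estimate:", stubborn_err / trials)
print("mean squared error, moving toward theirs:  ", pooled_err / trials)
```

Under this model the weighted average comes out ahead of either party alone; how far that result carries over to real disagreements of course depends on how well "truth plus independent noise" describes them.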