A lot depends on whether this is a high-bandwidth discussion/debate, or an anonymous post/read of public statements (or, on message boards, somewhere in between). In the interactive case, Alice and Bob could focus on cruxes and specific points of agreement/disagreement. In the public/semi-public case, it’s rare that either side puts in that much effort.
I’ll also note that a lot of topics on which such disagreements persist are massively multidimensional, and the degree of closeness is hard to quantify, so “agreement” is very hard to define. No two humans (and likely no two distinct real agents) have identical priors, so Aumann’s Agreement Theorem doesn’t apply: they don’t HAVE to agree.
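(For reference, a minimal statement of the theorem in the standard partition formulation, which makes the common-prior assumption explicit: $P$ is the shared prior, $\mathcal{P}_i(\omega)$ is agent $i$’s information cell at state $\omega$, and $E$ is the event in question.

$$q_i = P(E \mid \mathcal{P}_i(\omega)) \ \text{for}\ i \in \{1,2\}, \quad (q_1, q_2)\ \text{common knowledge at}\ \omega \;\implies\; q_1 = q_2.$$

Everything hinges on the single $P$ shared by both agents; drop that assumption and the conclusion evaporates.)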
And finally, it’s not clear how important the disagreements are, compared to the dimensions where the distance is small (near-agreement). Intellectuals focus on the disagreement, both because it’s the interesting part and because that’s where some amount of status comes from. A whole lot of these disagreements end up having zero practical impact. Though, of course, some DO matter, and which dimensions are important to agree on is a whole separate domain of disagreement...
I’m talking specifically about discussions on LW. Of course, in reality Alice ignores Bob’s comment 90% of the time, and that’s a problem in its own right. It would be ideal if people who have distinct information chose to exchange it.
I picked a specific and reasonably grounded topic, “x-risk”, or “the probability that we all die in the next 10 years”, which is one number, so not hard to compare, unless you want to break it down by cause of death. In contrived philosophical discussions, it can certainly be hard to determine who agrees on what, but I have a hunch that this is the least of the problems in those discussions.
A lot of things have zero practical impact, and that’s also a problem in its own right. It seems to me that we barely ever have “is working on this problem going to have practical impact?” discussions.