The reason I am not optimistic about this sort of thing is that many people know someone clever who has radically different political opinions from them, and they often talk politics with that person quite a bit. So these sorts of Aumann updates often happen, but they often end at a stance like “we both understand each other’s opinions of the facts, but have different value systems, and so disagree” or something like “we both assign the same likelihood ratio to the evidence, but have very different priors.”
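To make that last stance concrete, here is a minimal numeric sketch (my own toy numbers, not anything from the thread) of Bayes’ rule in odds form: two people who assign the same likelihood ratio to a piece of evidence but start from very different prior odds still end up with very different posteriors.

```python
# Toy illustration (hypothetical numbers): same likelihood ratio, different priors.

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

likelihood_ratio = 4.0           # both parties agree the evidence favors H at 4:1

priors = {
    "optimist": 3.0,             # starts at 3:1 in favor of H
    "skeptic": 1.0 / 20.0,       # starts at 1:20 against H
}

for name, prior in priors.items():
    post = posterior_odds(prior, likelihood_ratio)
    prob = post / (1.0 + post)   # convert odds back to a probability
    print(f"{name}: prior odds {prior:g}, posterior odds {post:g}, P(H) = {prob:.2f}")
```

With these numbers the optimist ends near P(H) ≈ 0.92 and the skeptic near 0.17, so both can truthfully say they updated on the evidence identically and still walk away disagreeing.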
I guess my thought was that LWers are likely to think that it’s possible to implement values incoherently (i.e. correctably), and so might have much more to say (and learn) than your average “clever person”. Scope neglect, cognitive dissonance, etc, etc.
My guess would be that really solid rationalists might turn out to disagree with each other over really deep values, for example one being primarily selfish and sadistic while another has lots of empathy, with each able to see that the other has built a personal narrative around such tendencies. But I wouldn’t expect them to disagree over, say, whether someone was really experiencing pain or not, and I wouldn’t expect them to get bogged down in a hairsplitting semantic claim about whether a particular physical entity “counts as a person” for the sake of a given moral code.
And “we just have different priors” usually actually means “that would take too long to explain”, from what I can tell. Pretty much all of us started out as babies, and most of us have more or less the same sensory apparatus and went through Piaget’s stages and so on and so forth. Taking that common starting point and “all of life” as the evidence, it seems likely that differences in opinion could take days or weeks or months of discussion to resolve, rather than 10 minutes of rhetorical hand waving. I once saw an evangelical creationist argued into directly admitting that creationism is formally irrational, but it took the rationalist about 15 hours over the course of several days to do it (and that topic is basically a slam dunk). I wouldn’t expect issues that are legitimately fuzzy and emotionally fraught to be dramatically easier than that was.
...spelling this out, it seems likely to me that being someone’s Aumann chavruta could involve substantially more intellectual intimacy than most people are up for. Perhaps it would be good to have some kind of formal non-disclosure contract or something like that first, as with a therapist, confessor, or lawyer?
Taking that common starting point and “all of life” as the evidence, it seems likely that differences in opinion could take days or weeks or months of discussion to resolve, rather than 10 minutes of rhetorical hand waving.
All of our lives, or even a month of them, probably imparted to us far more evidence than we could explain to each other in a month of discussion. The trouble is that much of the learning got lodged in memory regions that are practically inaccessible to the verbal parts of our brains. I can’t define Xs and you can’t define Ys, but we know them when we see them.
“We just have different priors” is probably not the best way to describe these cognitive differences—I agree with you there. But we could still be at a loss to verbally reason our way through them.
I don’t think people have any real capacity to fully describe their entire audio/visual experience in full resolution. But if you think about the actual barriers to more limited communication, I predict you’ll be able to imagine plausible ways to circumvent those barriers for a specific purpose: developing a shared model of a particular real-world domain with someone, precise enough that you both derive similar strategic conclusions within that limited domain.
I can’t define Xs and you can’t define Ys, but we know them when we see them.
Maybe I’m misunderstanding you, but my impression is that this is what extensional definitions and rationalist taboo are for: the first to inspire new words and the second to trim away the confusing connotations that already adhere to the words people have started to use. The procedure for handling each party’s apparently incommensurable “know it when I see it” concepts is thus to coin new words in private for the sake of the conversation, master the common vocabulary, communicate using these new terms, and then see whether the reasonable predictions of the novel common understanding square with observable reality.
A lot of the time I expect that each person will turn out to have been somewhat confused, perhaps by committing a kind of fallacy of equivocation: lumping genuinely distinct things under the same “know it when I see it” concept. In the course of the conversation, that concept could be converted to a single word and explored thoroughly enough to detect the confusion, perhaps suggesting the need for more refined sub-concepts that “cut reality at the joints”.
When I think of having a conversation with a skilled rationalist, I expect them to be able to deploy these sorts of skills on the most important-seeming source of disagreement, rather than having to fall back to “agreeing to disagree”. They might still do so if the estimated cost of the time in conversation is greater than the expected benefit of agreement, but they wouldn’t be forced to it out of raw incapacity. That is, it wouldn’t be a matter of incapacity, but a matter of a pragmatically reasonable lack of interest. In some sense, one or both of us would be too materially, intellectually, or relationally impoverished to be able to afford thinking clearly together on that subject.
However, notice how far the proposal has come from “talking about politics in a web forum”. It starts to appear as though it would be a feat of communication for two relatively richly endowed people, in private, to rationally update with each other on even a single conceptually tricky and politically contentious point. If that conversational accomplishment seems difficult for many people here, does it seem easier, or more likely to work, for many people at varying levels of skill to individually spend fewer hours, in public, writing for a wide and heterogeneously knowledgeable audience that can provide no meaningful feedback, on that same conceptually tricky and politically contentious point?