Let’s say I’m right, and a key barrier to changing minds is the perception that listening and carefully considering the other person’s point of view amounts to an identity threat.
An interest in evolution might threaten a Christian’s identity.
Listening to pro-vaccine arguments might threaten a conservative farmer’s identity.
Worrying about speculative AI x-risks might threaten an AI capability researcher’s identity.
I would go further and claim that open-minded consideration of suggestions, like this one, that rationalists ought to get more comfortable with symmetric weapons might threaten a rationalist’s identity. As would considering the idea that one has an identity as a rationalist that could be threatened by open-minded consideration of certain ideas!
If I am correct, then the important task is not persuading others that the claim is correct (that’s the job of honest, facts-and-logic argument), but preventing a reflexive, identity-driven refusal to consider a facts-and-logic argument in the first place. I’ve already lost that opportunity in this post, which wasn’t written with the goal of pre-empting such a reflexive dismissal. But in future posts, how might one do such a thing?
I don’t think it’s as simple as saying something explicit like “you can be a rationalist and still consider X,” or “if you were a good rationalist, you should consider X with an open mind.” Such statements feel like identity threats, and it’s exactly that perception that we’re trying to avoid!
I also don’t think you can just make the argument, relying on its rationalist form or your in-group status to avoid the identity threat. A fundamentalist preacher who starts sermonizing on Sunday morning about the truth of evolution is not going to be treated with much more receptivity by the congregation than they’d demonstrate if teleported into a Richard Dawkins lecture.
Instead, I think you have to offer up a convincing portrait of how it is that a dyed-in-the-wool, passionate rationalist might come to seriously consider outgroup ideas as an expression of rationalism. Scott Alexander, more than any other rationalist writer I’ve seen, does this extremely well. When I read his best work, and even his average work, I usually come away convinced that he really made an effort to understand the other side, not just by his in-group standards, but by the standards of the other side, before he made a judgment about what was true or not. That doesn’t necessarily mean that Scott will be convincing to the people he’s disagreeing with (indeed, he often does not persuade them), but it does mean that he can be convincing to rationalists, because he seems to be exemplifying how one can be a rationalist while deeply considering perspectives that are not the consensus within the rationalist community.
Eliezer does something really different. He seems to try to assemble an argument for his own point of view so thorough, and with so many layers of meta-correctness, that it seems as if there’s simply no place a truthseeker could possibly arrive at except the conclusion that Eliezer himself has arrived at. Eliezer has explanations for how his opponents have arrived at error, and why their position makes sense to them, but it’s almost always presented as an unfortunate error resulting from avoidable human cognitive biases that he himself has overcome to a greater degree than his opponents. This is often useful, but it doesn’t exemplify how a rationalist can deeply consider points of view that aren’t associated with the rationalist identity while keeping their identity as a rationalist intact. Indeed, Eliezer often comes across as if disagreeing with him might threaten your identity as a rationalist!
There are a lot of other valuable writers in the LessWrong world, but their output usually strikes me as very much “by rationalists, for rationalists.”