Here’s a second thought that came to mind, which again doesn’t seem especially critical to this post’s aims...
You write:
Someone who can both predict my beliefs and disagrees with me is someone I should listen to carefully. They seem to both understand my model and still reject it, and this suggests they know something I don’t.
I think I understand the rationale for this statement (though I didn’t read the linked Science article), and I think it will sometimes be true and important. But those sentences might overstate the point. In particular, they implicitly presume that this other person is genuinely, primarily trying to form accurate beliefs, and perhaps also that they’re doing so in a way that’s relatively free from bias.
But (almost?) everyone is at least sometimes primarily aiming (perhaps unconsciously) at something other than forming accurate beliefs, even when it superficially looks like they’re aiming at forming accurate beliefs. For example, they may be engaging in “ideologically motivated cognition[, i.e.] a form of information processing that promotes individuals’ interests in forming and maintaining beliefs that signify their loyalty to important affinity groups”. The linked study also notes that “subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition”.
So I think it might be common for people to be able to predict my beliefs and disagree with me, but with their disagreement based not on knowing more or having a better reasoning process, but rather on finding ways to continue to hold beliefs that they’re (in some sense) “motivated” to hold for some other reason.
Additionally, some people may genuinely be trying to form accurate beliefs, but with unusually bad epistemics / unusually major biases. If so, they may be able to predict my beliefs and disagree with me, but with their disagreement not being based on knowing more or having a better reasoning process, but rather being a result of their bad epistemics / biases.
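To make that concrete, here’s a minimal Bayesian sketch of the worry. All the priors and likelihoods below are made-up illustrative numbers, not estimates from the linked study; the point is just that once motivated cognition and bad epistemics are live hypotheses, “they predicted my beliefs” shifts credence toward “they know something I don’t” only modestly:

```python
# Minimal Bayes sketch: how much should "they predicted my beliefs and
# still disagree" raise my credence that they know something I don't?
# All numbers are illustrative assumptions, not empirical estimates.

# Three candidate explanations for the disagreement:
priors = {
    "knows_something_i_dont": 0.3,  # genuine extra knowledge or insight
    "motivated_cognition":    0.4,  # loyalty-driven belief maintenance
    "bad_epistemics":         0.3,  # honest but heavily biased reasoning
}

# Assumed P(they can accurately predict my beliefs | explanation).
# Motivated reasoners and biased-but-honest reasoners can also model me
# well, which is exactly what blunts the evidential force of prediction.
likelihoods = {
    "knows_something_i_dont": 0.8,
    "motivated_cognition":    0.6,
    "bad_epistemics":         0.4,
}

# Bayes' rule over the three explanations.
joint = {h: priors[h] * likelihoods[h] for h in priors}
evidence = sum(joint.values())
posterior = {h: joint[h] / evidence for h in joint}

for h, p in posterior.items():
    print(f"P({h} | predicted my beliefs) = {p:.2f}")
# -> "knows something I don't" moves from 0.30 to about 0.40:
#    real evidence, but far from decisive.
```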
Of course, we should be very careful about assuming that any of the above is why a person disagrees with us! See also this and this.
The claims I’d more confidently agree with are:
Someone who both can predict my beliefs and disagrees with me might be someone I should listen to carefully. They seem to both understand my model and still reject it, and this suggests they might know something I don’t (especially if they seem to be genuinely trying to form accurate beliefs, and to be doing so via a reasonable process).
(Or maybe including that parenthetical at the end would be bad, since it could make people feel licensed to dismiss anyone who disagrees with them as just biased.)
Fair points. I think the fact that they can predict one’s beliefs is only minor evidence that they will be EV-positive (positive in expected value) to listen to; you also have to take into account the challenge of learning from them.
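As a toy illustration of that EV framing (every number below is a hypothetical placeholder, not a claim about real magnitudes):

```python
# Toy expected-value check: is listening carefully to this person worth it?
# Every number is a made-up placeholder.

p_genuine_insight = 0.40  # chance their disagreement reflects real knowledge
value_if_insight = 10.0   # value of successfully updating on what they know
cost_of_engaging = 3.0    # time/effort cost, including how hard they may be
                          # to learn from even when they're right

ev = p_genuine_insight * value_if_insight - cost_of_engaging
print(f"EV of listening carefully: {ev:+.1f}")  # +1.0 here: barely worth it
```

Even small changes to the assumed cost of engaging flip the sign, which is one way of cashing out the “challenge of learning from them” point.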
All that said, this sort of technique is fairly prosaic. I’m aiming for a much better future, where key understandings are all captured in optimized prediction applications and people generally pay attention to those.