I agree this is a big factor, and might be the main pathway through which people end up believing what the people around them believe. If I had to guess, I’d guess you’re right.
E.g., if there’s evidence E in favor of H and evidence E′ against H, and the group is really into thinking and talking about E as a topic, then the group will probably end up believing H too strongly.
I think it would be great if you or someone wrote a post about this (or whatever you meant by your comment) and pointed to some examples. I think the LessWrong community is somewhat plagued by attentional bias leading to collective epistemic blind spots. (Not necessarily more than other communities; just different blind spots.)