[Question] Collider bias as a cognitive blindspot?

Zack M. Davis summarizes collider bias as follows:

> The explaining-away effect (or, collider bias; or, Berkson’s paradox) is a statistical phenomenon in which statistically independent causes with a common effect become anticorrelated when conditioning on the effect.
>
> In the language of d-separation, if you have a causal graph X → Z ← Y, then conditioning on Z unblocks the path between X and Y.
>
> … if you have a sore throat and cough, and aren’t sure whether you have the flu or mono, you should be relieved to find out it’s “just” a flu, because that decreases the probability that you have mono. You could be infected with both the influenza and mononucleosis viruses, but if the flu is completely sufficient to explain your symptoms, there’s no additional reason to expect mono.[1]

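To make the explaining-away effect concrete, here is a minimal numerical sketch (all of the probabilities below are invented for illustration, not taken from the quoted passage): enumerate the joint distribution of the collider flu → symptoms ← mono and compare the posterior on mono before and after learning you have the flu.

```python
from itertools import product

# Invented illustrative parameters: flu and mono are independent causes,
# and symptoms (sore throat + cough) are likely if either one is present.
P_FLU, P_MONO = 0.10, 0.01
P_SYMPTOMS_IF_EITHER = 0.90   # P(symptoms | flu or mono)
P_SYMPTOMS_IF_NEITHER = 0.05  # P(symptoms | no flu and no mono)

def joint(flu: bool, mono: bool, symptoms: bool) -> float:
    """P(flu, mono, symptoms) under the collider flu -> symptoms <- mono."""
    p_s = P_SYMPTOMS_IF_EITHER if (flu or mono) else P_SYMPTOMS_IF_NEITHER
    return ((P_FLU if flu else 1 - P_FLU)
            * (P_MONO if mono else 1 - P_MONO)
            * (p_s if symptoms else 1 - p_s))

def posterior_mono(**evidence: bool) -> float:
    """P(mono | evidence), by brute-force enumeration of the joint."""
    worlds = [dict(zip(("flu", "mono", "symptoms"), w))
              for w in product([True, False], repeat=3)]
    consistent = [w for w in worlds
                  if all(w[k] == v for k, v in evidence.items())]
    total = sum(joint(**w) for w in consistent)
    return sum(joint(**w) for w in consistent if w["mono"]) / total

print(posterior_mono(symptoms=True))            # ~0.063: symptoms raise P(mono)
print(posterior_mono(symptoms=True, flu=True))  # 0.010: flu explains mono away
```

Conditioning on the symptoms makes flu and mono anticorrelated even though they are independent a priori: once the flu is known, the posterior on mono falls back to its 1% prior.
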
Wikipedia gives a further example:

> Suppose Alex will only date a man if his niceness plus his handsomeness exceeds some threshold. Then nicer men do not have to be as handsome to qualify for Alex’s dating pool. So, among the men that Alex dates, Alex may observe that the nicer ones are less handsome on average (and vice versa), even if these traits are uncorrelated in the general population. Note that this does not mean that men in the dating pool compare unfavorably with men in the population. On the contrary, Alex’s selection criterion means that Alex has high standards. The average nice man that Alex dates is actually more handsome than the average man in the population (since even among nice men, the ugliest portion of the population is skipped). Berkson’s negative correlation is an effect that arises within the dating pool: the rude men that Alex dates must have been even more handsome to qualify.

No crazy psychoanalysis, just a simple statistical artifact. (On a meta level, perhaps attractive people are meaner for some reason, but a priori, doesn’t collider bias explain away the need for other explanations?)

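The dating-pool example is easy to reproduce by simulation. Here is a short sketch (the Gaussian scores, the threshold of 1.0, and the sample size are arbitrary modeling choices, not from the Wikipedia article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Niceness and handsomeness are independent in the general population.
niceness = rng.normal(size=n)
handsomeness = rng.normal(size=n)

# Alex only dates men whose niceness + handsomeness clears a threshold.
dated = niceness + handsomeness > 1.0

# ~0 in the population, clearly negative within the dating pool (Berkson):
print(np.corrcoef(niceness, handsomeness)[0, 1])
print(np.corrcoef(niceness[dated], handsomeness[dated])[0, 1])

# The "high standards" point: dated men are more handsome than average.
print(handsomeness.mean(), handsomeness[dated].mean())
```

The negative correlation appears purely from selecting on the common effect (“gets dated”); nothing about the traits themselves changes.
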
This seems like it could be everywhere. Most things have more than one causal parent; if something has many parents, some pair of them is probably independent. Then some degree of collider bias will occur for almost all probability distributions represented by the causal diagram, since collider bias exists whenever P(X, Y | Z) ≠ P(X | Z) P(Y | Z) (in the linked formalism). And if we don’t notice it unless we make a serious effort to reason about the causal structure of a problem, we might spend time arguing about statistical artifacts, making up theories to explain things which don’t need explaining!

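To sanity-check the “almost all distributions” claim, one can sample random parameterizations of a binary collider X → Z ← Y and test the inequality directly (a quick sketch; the trial count and the restriction to binary variables are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_collider_joint() -> np.ndarray:
    """Random binary model of X -> Z <- Y, returned as a 2x2x2 array P(x, y, z)."""
    px, py = rng.uniform(size=2)   # marginals P(X=1), P(Y=1); X, Y independent
    pz = rng.uniform(size=(2, 2))  # conditional P(Z=1 | x, y)
    joint = np.empty((2, 2, 2))
    for x in (0, 1):
        for y in (0, 1):
            p_xy = (px if x else 1 - px) * (py if y else 1 - py)
            joint[x, y, 1] = p_xy * pz[x, y]
            joint[x, y, 0] = p_xy * (1 - pz[x, y])
    return joint

trials, dependent = 10_000, 0
for _ in range(trials):
    joint = random_collider_joint()
    for z in (0, 1):
        cond = joint[:, :, z] / joint[:, :, z].sum()             # P(x, y | z)
        factored = np.outer(cond.sum(axis=1), cond.sum(axis=0))  # P(x|z) P(y|z)
        if not np.allclose(cond, factored):
            dependent += 1
            break

print(dependent / trials)  # ~1.0: essentially every random draw shows the bias
```

A randomly drawn parameterization essentially never lands on the measure-zero set where the conditional distribution happens to factorize, which is exactly the “almost all” claim above.
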
In The Book of Why, Judea Pearl speculates (emphasis mine):

> Our brains are not wired to do probability problems, but they are wired to do causal problems. And this causal wiring produces systematic probabilistic mistakes, like optical illusions. Because there is no causal connection between [X and Y in X → Z ← Y], either directly or through a common cause, [people] find it utterly incomprehensible that there is a probabilistic association. **Our brains are not prepared to accept causeless correlations**, and we need special training—through examples like the Monty Hall paradox…—to identify situations where they can arise. Once we have “rewired our brains” to recognize colliders, the paradox ceases to be confusing.

But how is this done? Perhaps one simply meditates on the wisdom of causal diagrams, understands the math, and thereby comes to reason intuitively about colliders, or at least to recognize them reliably.

This question serves two purposes:

  1. If anyone has rewired their brain thusly, I’d love to hear how.

    1. It’s not clear to me that the obvious kind of trigger-action-plan will trigger on non-trivial, non-obvious instances of collider bias.

  2. To draw attention to this potential bias, since I wasn’t able to find prior discussion on LessWrong.