I had sort of forgotten that “bias” could be taste in music or differential human outcomes based on “biased” treatment. Noticing that collision was helpful to me.
Also, I think there is an interesting quirk in the LW/local usage of the term “bias” and its general stance towards epistemology. The local culture is really really into “overcoming biases” with a zeal and cultural functionality that has echoes in the Christian doctrine of Original Sin.
(Not that this is bad! Assuming that people are in cognitive error by default because of biases is useful for getting people to actually listen with some measure of generosity to inferentially distant third parties and teachers and so on. Also, the “biases” framing powers a pretty good sales pitch for learning about meta-cognition because loss aversion is a known bias that people who need meta-cognitive training probably have. Given knowledge of loss aversion, you should naively expect people who need a rationality upgrade to be about three times more interested in avoiding cognitive downsides as compared to their enthusiasm for cognitive upgrades. The very name of the website “less wrong” is great marketing from this perspective :-P)
In any case, in academic psychology it is generally acknowledged that “biases” and “heuristics” are in some sense equivalent. Specifically, both processes involve leaping from hints to conclusions with locally inadequate justification. When this happens in a way that can be retrospectively determined to be incorrect, it gets negative valence and we call it a “bias”. When it comes out well so that it seems like a dandy cognitive labor saving device, it gets positive valence and we call it a “heuristic”.
The key insight is that heuristics are heuristics only in limited domains, and no technique that we call a heuristic can be profitably deployed outside its appropriate context. When someone attempts to deploy a heuristic in a completely generic way, they transport it outside of the context it was tuned for and it becomes a bias. Meanwhile, there are distinct techniques that are neither biases nor heuristics, but they generally take much longer to compute, or require more data gathering than busy people with busy competitors have time for.
Cialdini’s book Influence has a bunch of great examples of contextually dependent cognitive shortcuts. If you lived in a small social context that had been self-contained, poorly mixed, and functional for a long period of time in the past, it would be a pretty great life heuristic to trust and copy people who were benevolent towards you, similar to you, but slightly higher status. Doing the same “trust and copy” routine with people you see on TV, people on random street corners, or with professional modern/urban salespeople who have read Cialdini is much less advisable. The heuristic becomes a bias because the social context has changed.
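To make the context-dependence concrete, here is a toy simulation (my own construction, not anything from Cialdini): a “trust and copy” rule that follows status and similarity cues does well when those cues are honest noisy signals of advice quality, and collapses to chance when the cues are optimized independently of quality, as a trained salesperson would do.

```python
import random

random.seed(0)

def trust_and_copy(people):
    """Heuristic: copy the person with the strongest similarity + status cues."""
    return max(people, key=lambda p: p["similarity"] + p["status"])["advice_quality"]

def make_person(adversarial):
    quality = random.random()
    if adversarial:
        # Modern setting: cues are engineered, decoupled from actual quality.
        return {"similarity": random.random(), "status": random.random(),
                "advice_quality": quality}
    # Ancestral setting: cues are noisy but honest signals of quality.
    noise = lambda: random.gauss(0, 0.1)
    return {"similarity": quality + noise(), "status": quality + noise(),
            "advice_quality": quality}

def average_payoff(adversarial, trials=2000):
    total = 0.0
    for _ in range(trials):
        people = [make_person(adversarial) for _ in range(10)]
        total += trust_and_copy(people)
    return total / trials

print("ancestral:", round(average_payoff(False), 2))  # cue-following pays off
print("modern:   ", round(average_payoff(True), 2))   # same rule is near chance
```

The rule itself never changes between the two runs; only the environment does, which is exactly the heuristic-becomes-bias move described above.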
The issue of context and generalization can get really deep, and (so far as I’m aware) is not a solved subject with a widely recognized popular solution. An entry point into substantive literature on what is sometimes called “the foundations of inference” is Wolpert and Macready’s “no free lunch theorem” and thematically associated mathematical work having to do with compression and sorting.
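The flavor of the no free lunch result can be shown in miniature (a sketch of the averaging argument, not Wolpert and Macready’s actual formalism): average over every possible objective function on a tiny domain, and any fixed non-repeating search order performs exactly as well as any other.

```python
from itertools import product

# All functions f: {0,1,2,3} -> {0,1}, represented as 4-tuples of outputs.
ALL_FUNCTIONS = list(product([0, 1], repeat=4))

def best_after(order, f, probes):
    """Best value found after `probes` evaluations along a fixed visit order."""
    return max(f[x] for x in order[:probes])

def average_performance(order, probes=2):
    # Uniform average over every possible objective function.
    return sum(best_after(order, f, probes) for f in ALL_FUNCTIONS) / len(ALL_FUNCTIONS)

ascending = [0, 1, 2, 3]
scrambled = [2, 0, 3, 1]
print(average_performance(ascending))  # 0.75
print(average_performance(scrambled))  # 0.75 -- identical, as NFL predicts
```

No search strategy can beat another unless the set of functions you actually face is restricted, i.e. unless the world has exploitable structure.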
A deep (and admittedly somewhat hand-wavy) conclusion that falls out of this work is that for inference or evolution or thinking to ever find any sort of “purchase”, there must be substantial structural and/or energetic redundancy in the local “reality”. Otherwise it would be pointless and/or impossible to progressively accumulate things like: (1) knowledge worth remembering or (2) adaptations worth having or (3) heuristics of seemingly generic utility. If physics were pure chaos and noise, there would be no life, no brains, and no point for those brains to be concerned with such abstruse concepts as epistemology in the first place.
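One way to operationalize “redundancy that inference can grip” is compression: patterned data compresses, pure noise does not. A quick illustration (just a demo of the general compression point, not a claim from the NFL literature):

```python
import random
import zlib

random.seed(0)

structured = b"the quick brown fox " * 500                      # redundant, patterned
noise = bytes(random.getrandbits(8) for _ in range(len(structured)))  # incompressible

def ratio(data):
    """Compressed size over original size: well below 1 means exploitable structure."""
    return len(zlib.compress(data, 9)) / len(data)

print("structured:", round(ratio(structured), 3))  # tiny: lots of pattern to exploit
print("noise:     ", round(ratio(noise), 3))       # ~1: nothing for inference to grip
```

A world of pure noise is, in this sense, a world where every model is as long as the data it explains, so there is nothing for learning to accumulate.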
This loops back around to the OP’s classification of some people as “magical thinkers”. Many humans do not seem to feel in their bones that they exist within a logically consistent mesh of redundantly patterned causation. They seem to model the world as being mostly chaos, with some moderately powerful agent(s) that approve or disapprove of various rituals being followed. I think what the OP is asking for is a way to convey “the feeling of existing within a logically consistent mesh of redundantly patterned causation such that various inference techniques are justified” to arbitrary humans via a few thousand words, but (tragically?) at the present time I do not know how to do it.