The most compact explanation of compartmentalization I’ve come up with is:
You are shown a convincing argument for the proposition that paraquat weedkiller is an effective health tonic.
You believe the argument, you want to be healthy, and you can’t think of any logical reason you shouldn’t act on your belief.
Nonetheless, you fail to actually drink any paraquat weedkiller.
Sure, there was in fact a flaw somewhere in the argument. But evolution wasn’t able to give us brains that are infinitely good at spotting flaws in arguments. The best it could come up with was brains that do not in fact think, say and do all the things they logically should based on their declaratively held beliefs.
That one strikes me mostly as being a subconscious standard of proof that says “it requires X level of confidence to accept an argument as convincing, but X+n confidence before I am so convinced as to do something that I intuitively expect to kill me.”
It also strikes me as eminently sensible, and thus doesn’t do well at illustrating compartmentalisation as a bad thing :)
It was explicitly written to show the evolutionary benefit of compartmentalization, not the problems it causes. Heuristics endure because (in the ancestral environment) they worked better on average than their alternatives.
While not disagreeing with the main thrust here, thinking of compartmentalization as something explicitly selected for may be misleading.
Not everything in an evolved system is explicitly selected for, after all. And it seems at least equally plausible to me that compartmentalization is a side-effect of a brain that incorporates innumerable independently evolved mechanisms for deriving cognitive outputs (including beliefs, assertions, behaviors, etc.) from inputs.
That is, if each mechanism is of selection-value to its host, and the combination of them is at least as valuable as the sum of them individually, then a genome that happens to combine those mechanisms is selected for. If the additional work to reconcile those mechanisms (in the sense we’re discussing here) either isn’t cost-effective, or just didn’t happen to get generated in the course of random mutation, then a genome that reconciles them doesn’t get selected for.