The Evolutionary-Cognitive Boundary
I tend to draw a very sharp line between anything that happens inside a brain and anything that happened in evolutionary history. There are good reasons for this! Anything originally computed in a brain can be expected to be recomputed, on the fly, in response to changing circumstances.
Consider, for example, the hypothesis that managers behave rudely toward subordinates “to signal their higher status”. This hypothesis then has two natural subdivisions:
If rudeness is an executing adaptation as such—something historically linked to the fact that it signaled high status, but not psychologically linked to status drives—then we might experiment and find that, say, the rudeness of high-status men to lower-status men depended on the number of desirable women watching, though the men themselves weren’t aware of this fact. Or we might find that people are just as rude when posting completely anonymously on the Internet (or more rude; they can now indulge their adapted penchant for rudeness without worrying about the now-nonexistent reputational consequences).
If rudeness is a conscious or subconscious strategy to signal high status (which is itself a universal adapted desire), then we’re more likely to expect the style of rudeness to be culturally variable, like clothes or jewelry; different kinds of rudeness will send different signals in different places. People will be most likely to be rude (in the culturally indicated fashion) in front of those whom they have the greatest psychological desire to impress with their own high status.
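To make the divergence between these two hypotheses concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the function names, the cues, the weights); nothing comes from an actual study. The point is only that the two hypotheses locate the computation in different places: the adaptation-execution model keys behavior to fixed ancestral cues and does not recompute when modern consequences change, while the strategy model recomputes from the currently expected payoff.

```python
# Toy contrast between the two hypotheses about rudeness.
# All cues, weights, and numbers below are made up for illustration;
# they are not parameters from any real experiment.

def rudeness_as_executing_adaptation(desirable_onlookers: int,
                                     anonymous: bool) -> float:
    """Adaptation-execution model: behavior keys off ancestral cues
    (e.g., who is watching) and is NOT recomputed when the modern
    consequence changes. Anonymity removes the reputational cost but
    not the adapted impulse, so predicted rudeness does not drop."""
    score = 0.2 + 0.1 * desirable_onlookers  # cue-driven, unreflective
    if anonymous:
        score += 0.2  # inhibition drops; the impulse still executes
    return min(score, 1.0)


def rudeness_as_status_strategy(expected_status_gain: float,
                                audience_matters_to_actor: bool) -> float:
    """Strategy model: behavior is (sub)consciously recomputed from
    the expected signal value here and now. No audience the actor
    cares about impressing means no expected gain, hence no rudeness."""
    if not audience_matters_to_actor:
        return 0.0  # nothing to signal, so the strategy is dropped
    return min(max(expected_status_gain, 0.0), 1.0)


# Anonymous posting is where the predictions come apart:
print(rudeness_as_executing_adaptation(desirable_onlookers=0,
                                       anonymous=True))   # 0.4: still rude
print(rudeness_as_status_strategy(expected_status_gain=0.5,
                                  audience_matters_to_actor=False))  # 0.0
```

The experimental handle falls out of the contrast: make the actor anonymous, so that no reputational payoff remains, and the strategy model’s predicted rudeness collapses to zero, while the adaptation-execution model’s stays put or even rises as inhibition drops.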
When someone says, “People do X to signal Y”, I tend to hear, “People do X when they consciously or subconsciously expect it to signal Y”, not, “Evolution built people to do X as an adaptation that executes given such-and-such circumstances, because in the ancestral environment, X signaled Y.”
I apologize, Robin, if this means I misunderstood you. But I think it really is important to use different words that draw a hard boundary between the evolutionary computation and the cognitive computation—“People are adapted to do X because it signaled Y”, versus “People do X because they expect it to signal Y”.
“Distal cause” and “proximate cause” don’t seem good enough, when there’s such a sharp boundary within the causal network about what gets computed, how it got computed, and when it will be recomputed. Yes, we have epistemic leakage across this boundary—we can try to fill in our leftover uncertainty about psychology using evolutionary predictions—but it’s epistemic leakage between two very different subjects.
I’ve noticed that I am, in general, less cynical than Robin, and I would offer up the guess for refutation (it is dangerous to reason about other people’s psychologies) that Robin doesn’t draw a sharp boundary across his cynicism at the evolutionary-cognitive boundary. When Robin asks “Are people doing X mostly for the sake of Y?” he seems to answer the same “Yes”, and feel more or less the same way about that answer, whether or not the reasoning goes through an evolutionary step along the way.
I would be very disturbed to learn that parents, in general, showed no grief for the loss of a child whom they consciously believed to be sterile. The actual experiment, which shows that parental grief correlates strongly with the expected reproductive potential of a child of that age in a hunter-gatherer society rather than with the different reproductive curve of a modern society, does not disturb me.
There was a point more than a decade ago when I would have seen that as a puppeteering of human emotions by evolutionary selection pressures, and hence something to be cynical about. Yet how could parental grief come into existence at all, without a strong enough selection pressure to carve it into the genome from scratch? All that should matter for saying “The parent truly cares about the child” is that the grief in the parent’s mind is cognitively real, unconditional, and not even subconsciously for the sake of any ulterior motive; and that is why it does not update for modern reproductive curves.
Of course the emotional circuitry is ultimately there for evolutionary-historical reasons. But only conscious or subconscious computations can gloom up my day; natural selection is an alien thing whose ‘decisions’ can’t be the target of my cynicism or admiration.
I suppose that is a merely moral consequence, though it is one that I care about quite a lot. Cynicism does have hedonic effects. Part of the grand agenda I have to put forward about rationality has to do with arguing against various propositions of the form “Rationality should make us cynical about X” (e.g., “physical lawfulness → choice is a meaningless illusion”) that I happen to disagree with. So you can see why I’m concerned about drawing the proper boundary of cynicism around evolutionary psychology (especially since I think the proper boundary is a sharp full stop).
But the same boundary also has major consequences for what we can expect people to recompute or not recompute—for the way that future behaviors will change as the environment changes. So once again, I advocate for language that separates out evolutionary causes and clearly labels them, especially in discussions of signaling. It has major effects, not just on how cynical I end up about human nature, but on what ‘signaling’ behaviors to expect, when.