I have an off-topic question about this theory of ancestral environment. It seems to me that we would expect the behavior you describe if (1) decision theory says it is beneficial, and (2) our reward centers have sufficiently fuzzy definitions that behavioral conditioning of some kind is effective.
By contrast, you seem to be articulating a strong ancestral environment theory that says the beneficial aspects shown by decision theory analysis were a strong enough selection pressure that there actually are processes in the brain devoted to signalling, status, and the like (in the same way that there are processes in the brain devoted to sight, or memory).
What sort of evidence would distinguish between these two positions? Relatedly, am I understanding the positions correctly, or have I inadvertently set up a straw man?
I like that essay, which I hadn’t seen before. But I’m having trouble deciphering whether it endorses what I called the strong ancestral environment hypothesis.
I’d say it doesn’t endorse the strong ancestral environment hypothesis (SAEH). The most relevant part of EY’s piece is, “Anything originally computed in a brain can be expected to be recomputed, on the fly, in response to changing circumstances.” “Mainstream” evolutionary psychologists uphold the “massive modularity hypothesis,” according to which the adaptive demands of the ancestral environment gave rise to hardwired adaptations that continue to operate despite different environmental conditions. They deny that a general purpose learning mechanism is capable of solving specific adaptive problems (recomputed on the fly). The cognitive biases are one of the evidentiary mainstays of SAEH, but they are subject to alternative interpretations. The evidence of the plasticity of the brain is perhaps the strongest evidence against massive modularity.
I’d also mention that not all primate species are highly stratified. Although chimps are our closest relatives, it is far from clear that the human ancestral environment included comparable stratification. It isn’t even clear that a uniform ancestral human environment existed.
You are either setting up a straw man, or you have identified a weakness in my thinking that I’m not seeing clearly myself. If you think it might be the latter, I’d appreciate it if you banged on it some more.
Certainly, I don’t mean to draw a distinction in this thread between dedicated circuits for “signaling, status, and the like” vs. a more general cognitive capacity that has such things as potential outputs… I intended to be agnostic on that question here, as it was beside my point, although I’m certainly suggesting that if we’re talking about a general cognitive capacity, the fact that it routinely gets pressed into service as a mechanism for grabbing and keeping hierarchical status is no accident.
But now that you ask: I doubt that any significant chunk of our status-management behavior is hardwired in the way that, say, edge-detection in our visual cortex is, but I doubt that we’re cognitively a blank slate in this regard (and that all of our status-management behavior is consequently cultural).
As for what sort of evidence I’d be looking for if I wanted to make a more confident statement along these lines… hm.
So, I remember some old work on reinforcement learning that demonstrates that while it’s a fairly general mechanism in “higher” mammals—that is, it pretty much works the same way for chaining any response the animal can produce to any stimulus the animal can perceive—it’s not fully general. A dog is quicker to associate a particular smell with the experience of nausea, for example, than it is to associate a particular color with that experience, and more likely to associate a color than a smell with the experience of electric shock. (I’m remembering something from 20 years ago, here, so I’m probably getting it wrong, and it might be outdated anyway. I mean it here only as illustration.)
That’s the kind of thing I’m talking about: a generalized faculty that is genetically biased towards drawing particular conclusions. (Whether that bias was specifically selected for, or was a side-effect of some other selection pressure, or just happened to happen, is a different question and not relevant to the issue here, though there’s certainly a just-so story one can tell about the example I quoted, which may be entirely an artifact of the fact that my mind is likely to impose narrative on its confabulations.)
I guess that’s the sort of evidence I’d be looking for: demonstrations that although the faculty is significantly general (e.g., we can adapt readily as individuals to an arbitrary set of rules for establishing status), it is not fully general (e.g., it is easier for us to adapt to rules that have certain properties and not others).
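The dog example above—a general learning mechanism with built-in biases toward particular pairings—can be sketched as a toy Rescorla–Wagner-style associative learner whose learning rate differs by stimulus–outcome pairing. The specific alpha values and pairings here are hypothetical illustrations, not empirical parameters:

```python
# Toy Rescorla-Wagner associative learner. The mechanism (prediction-error
# updating) is fully general; the bias lives entirely in the learning rate
# assigned to each stimulus-outcome pairing. Alphas below are made up.

def train(alpha, trials=20, reward=1.0):
    """Return associative strength V after repeated stimulus-outcome pairings."""
    v = 0.0
    for _ in range(trials):
        v += alpha * (reward - v)  # standard prediction-error update
    return v

# A "prepared" pairing (e.g., smell -> nausea) gets a high learning rate;
# an "unprepared" one (e.g., color -> nausea) gets a low one.
prepared = train(alpha=0.3)
unprepared = train(alpha=0.05)

# Same mechanism, same number of trials, yet the prepared association
# ends up much stronger -- a general faculty that is not fully general.
print(prepared > unprepared)  # True
```

This is the shape an experimental demonstration would take: hold the mechanism and the training regimen fixed, and show that acquisition speed still depends on which rules (pairings) are being learned.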
Setting up an experimental protocol to test this that was (a) ethical and (b) not horribly tainted by the existing cultural experience of human subjects would be tricky. On thirty seconds of thought I can’t think of a way to do it, which ought not significantly affect your beliefs about whether it’s doable.