You are either setting up a straw man, or you have identified a weakness in my thinking that I’m not seeing clearly myself. If you think it might be the latter, I’d appreciate it if you banged on it some more.
Certainly, I don’t mean to draw a distinction in this thread between dedicated circuits for “signaling, status, and the like” and a more general cognitive capacity that has such things as potential outputs. I intended to be agnostic on that question here, as it was beside my point, although I’m certainly suggesting that if we’re talking about a general cognitive capacity, the fact that it routinely gets pressed into service as a mechanism for grabbing and keeping hierarchical status is no accident.
But now that you ask: I doubt that any significant chunk of our status-management behavior is hardwired in the way that, say, edge-detection in our visual cortex is, but I doubt that we’re cognitively a blank slate in this regard (and that all of our status-management behavior is consequently cultural).
As for what sort of evidence I’d be looking for if I wanted to make a more confident statement along these lines… hm.
So, I remember some old work on reinforcement learning demonstrating that while it’s a fairly general mechanism in “higher” mammals (that is, it works pretty much the same way for chaining any response the animal can produce to any stimulus the animal can perceive), it’s not fully general. A dog is quicker to associate a particular smell with the experience of nausea, for example, than it is to associate a particular color with that experience, and more likely to associate a color than a smell with the experience of electric shock. (I’m remembering something from 20 years ago here, so I’m probably getting it wrong, and it might be outdated anyway. I mean it here only as an illustration.)
That’s the kind of thing I’m talking about: a generalized faculty that is genetically biased towards drawing particular conclusions. (Whether that bias was specifically selected for, was a side effect of some other selection pressure, or just happened to happen is a different question and not relevant to the issue here, though there’s certainly a just-so story one can tell about the example I cited, which may be entirely an artifact of the fact that my mind is likely to impose narrative on its confabulations.)
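To make the idea concrete, here’s a minimal sketch of what I mean by a general mechanism with a built-in bias, assuming a Rescorla–Wagner-style delta rule in which the only thing that varies between cue-outcome pairings is a salience parameter. The cue names and the numbers are purely illustrative, not taken from any actual study.

```python
import itertools

# A sketch of a "general but biased" associative learner, in the spirit of
# the Rescorla-Wagner model. The learning rule is identical for every
# cue-outcome pairing; the bias lives entirely in the salience table.
# All cues, outcomes, and numbers here are illustrative assumptions.

# Salience (learning-rate multiplier) for each (cue, outcome) pairing.
# Higher salience = faster association, mirroring the smell->nausea vs.
# color->shock asymmetry described above.
SALIENCE = {
    ("smell", "nausea"): 0.50,
    ("color", "nausea"): 0.05,
    ("smell", "shock"): 0.05,
    ("color", "shock"): 0.50,
}

BETA = 1.0     # outcome salience (held constant here)
LAMBDA = 1.0   # learning asymptote (outcome fully predicted)


def train(cue, outcome, trials=20):
    """Return associative strength after `trials` cue-outcome pairings."""
    v = 0.0
    alpha = SALIENCE[(cue, outcome)]
    for _ in range(trials):
        # Same delta rule everywhere; only alpha differs by pairing.
        v += alpha * BETA * (LAMBDA - v)
    return v


if __name__ == "__main__":
    for cue, outcome in itertools.product(["smell", "color"], ["nausea", "shock"]):
        print(f"{cue} -> {outcome}: strength = {train(cue, outcome):.2f}")
```

The point of the sketch is just that the bias can be parametric rather than architectural: the same update rule runs everywhere, but some associations come cheap and others come dear.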
I guess that’s the sort of evidence I’d be looking for: demonstrations that although the faculty is significantly general (e.g., we can adapt readily as individuals to an arbitrary set of rules for establishing status), it is not fully general (e.g., it is easier for us to adapt to rules that have certain properties than to rules that don’t).
Setting up an experimental protocol to test this that was (a) ethical and (b) not horribly tainted by the existing cultural experience of human subjects would be tricky. On thirty seconds of thought I can’t think of a way to do it, which ought not significantly affect your beliefs about whether it’s doable.