Surely justice [i.e. punishment of criminals] is a terminal value; it feels so noble to desire it.
I don’t know that many people who consider punishing criminals an end in itself, as opposed to a means to rehabilitate them and/or deter other potential criminals. (Maybe that’s because I’m European; I’ve heard that it’s mainly an American thing.)
It’s a recurring theme in the animal-training literature that active positive punishment (that is, doing things to an animal they don’t want done, like squirting them with a water bottle or hitting them with something) is often reinforcing for the punisher. I don’t doubt that a similar pattern arises when humans punish other humans, whether under the label of justice or not.
What do you think we should conclude from the fact that we evolved this behavior?
We are primates who evolved within status hierarchies.
The rules imposed by high-status members of a status hierarchy are often to the direct benefit of those members, and even when they aren’t, violations of those rules are nevertheless a challenge to those members’ status. (Indeed, it’s not uncommon for sufficiently intelligent high-status group members to create rules for the sole purpose of signalling their status.) Punishing rule violations (at least, if done consistently) reduces the frequency of those violations, which addresses the former threat (see the sketch below). Doing so visibly establishes the punisher’s dominance over the violator, which addresses the latter threat.
Of course, as with any high-status act, it’s also a way for ambitious lower-status individuals to signal status they don’t have. Unlike many high-status signalling acts, though, punishing someone is relatively safe, since any attempt to censure the punisher for presumption necessarily aligns the censurer with the lower-status punishee, as well as potentially with the rule-violation itself.
It ought not be surprising that we’ve evolved in such a way that behaviors which benefited our ancestors are reinforcing for us.
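Here is a toy simulation of that deterrence effect, with entirely made-up payoffs (mine, not anything from the thread): a simple reinforcement learner keeps running payoff estimates for “violate” and “comply” and mostly picks whichever has paid off best so far. Once punishment is consistent enough that the expected fine outweighs the gain from violating, the learned violation rate collapses.

```python
import random

def violation_rate(punish_prob, rounds=20_000, seed=0):
    """Fraction of rounds in which a payoff-estimating agent violates
    a rule. Payoffs are invented: violating gains 1, a punishment
    costs 3, complying pays 0."""
    rng = random.Random(seed)
    q = {"violate": 0.0, "comply": 0.0}  # learned payoff estimates
    rate, eps = 0.05, 0.1                # learning rate, exploration
    violations = 0
    for _ in range(rounds):
        # Mostly exploit the best-looking action; occasionally explore.
        act = rng.choice(list(q)) if rng.random() < eps else max(q, key=q.get)
        if act == "violate":
            violations += 1
            payoff = 1.0 - (3.0 if rng.random() < punish_prob else 0.0)
        else:
            payoff = 0.0
        q[act] += rate * (payoff - q[act])  # incremental update toward payoff
    return violations / rounds

for p in (0.0, 0.5, 0.9):
    print(f"P(punish) = {p}: violation rate = {violation_rate(p):.2f}")
```

Consistency matters here in just the way the comment suggests: at P(punish) = 0 the violation rate stays near 1, while above the break-even point it falls to the exploration floor.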
I have an off-topic question about this theory of ancestral environment. It seems to me that we would expect the behavior you describe if (1) decision theory says it is beneficial, and (2) our reward centers have sufficiently fuzzy definitions that behavioral conditioning of some kind is effective.
By contrast, you seem to be articulating a strong ancestral environment theory that says the beneficial aspects shown by decision-theory analysis were a strong enough selection pressure that there actually are processes in the brain devoted to signalling, status, and the like (in the same way that there are processes in the brain devoted to sight, or memory).
What sort of evidence would distinguish between these two positions? Relatedly, am I understanding the positions correctly, or have I inadvertently set up a straw man?
evolutionary/cognitive boundary
tl;dr: people who talk about signaling are confusing everyone.
I like that essay, which I hadn’t seen before. But I’m having trouble deciphering whether it endorses what I called the strong ancestral environment hypothesis.
I’d say it doesn’t endorse the strong ancestral environment hypothesis (SAEH). The most relevant part of EY’s piece is, “Anything originally computed in a brain can be expected to be recomputed, on the fly, in response to changing circumstances.” “Mainstream” evolutionary psychologists uphold the “massive modularity hypothesis,” according to which the adaptive demands of the ancestral environment gave rise to hardwired adaptations that continue to operate despite different environmental conditions. They deny that a general-purpose learning mechanism, recomputing on the fly, is capable of solving specific adaptive problems. The cognitive biases are one of the evidentiary mainstays of SAEH, but they are subject to alternative interpretations. The evidence of the plasticity of the brain is perhaps the strongest evidence against massive modularity.
I’d also mention that not all primate species are highly stratified. Although chimps are our closest relatives, it is far from clear that the human ancestral environment included comparable stratification. It isn’t even clear that a uniform ancestral human environment existed.
That’s just false, and EY really should know better.
You are either setting up a straw man, or you have identified a weakness in my thinking that I’m not seeing clearly myself. If you think it might be the latter, I’d appreciate it if you banged on it some more.
Certainly, I don’t mean to draw a distinction in this thread between dedicated circuits for “signaling, status, and the like” and a more general cognitive capacity that has such things as potential outputs… I intended to be agnostic on that question here, as it was beside my point, although I’m certainly suggesting that if we’re talking about a general cognitive capacity, the fact that it routinely gets pressed into service as a mechanism for grabbing and keeping hierarchical status is no accident.
But now that you ask: I doubt that any significant chunk of our status-management behavior is hardwired in the way that, say, edge-detection in our visual cortex is, but I also doubt that we’re cognitively a blank slate in this regard (and that all of our status-management behavior is consequently cultural).
As for what sort of evidence I’d be looking for if I wanted to make a more confident statement along these lines… hm.
So, I remember some old work on reinforcement learning that demonstrates that while it’s a fairly general mechanism in “higher” mammals—that is, it pretty much works the same way for chaining any response the animal can produce to any stimulus the animal can perceive—it’s not fully general. A dog is quicker to associate a particular smell with the experience of nausea, for example, than it is to associate a particular color with that experience, and more likely to associate a color than a smell with the experience of electric shock. (I’m remembering something from 20 years ago, here, so I’m probably getting it wrong, and it might be outdated anyway. I mean it here only as illustration.)
That’s the kind of thing I’m talking about: a generalized faculty that is genetically biased towards drawing particular conclusions. (Whether that bias was specifically selected for, was a side-effect of some other selection pressure, or just happened to happen is a different question, and not relevant here; though there’s certainly a just-so story one can tell about the example I quoted, which may be entirely an artifact of the fact that my mind is likely to impose narrative on its confabulations.)
I guess that’s the sort of evidence I’d be looking for: demonstrations that although the faculty is significantly general (e.g., we can adapt readily as individuals to an arbitrary set of rules for establishing status), it is not fully general (e.g., it is easier for us to adapt to rules that have certain properties than to rules that don’t).
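For concreteness, here is a minimal sketch of what “general but biased” could mean (the salience numbers are invented, chosen only to mimic the dog example above): a single delta-rule learner that applies the identical update to every cue–outcome pair, but whose innate salience table makes some associations form much faster than others.

```python
# One general-purpose associative learner; the only "innate" structure
# is a salience table biasing how fast each cue-outcome pair is learned.
# The numbers are invented for illustration.
SALIENCE = {
    ("smell", "nausea"): 0.30,  # easy association
    ("color", "nausea"): 0.02,  # hard association
    ("color", "shock"):  0.30,
    ("smell", "shock"):  0.02,
}

def trials_to_learn(cue, outcome, threshold=0.9):
    """Trials of cue-then-outcome pairing until the learned association
    strength crosses `threshold` (delta rule, asymptote 1.0)."""
    strength, trials = 0.0, 0
    rate = SALIENCE[(cue, outcome)]
    while strength < threshold:
        strength += rate * (1.0 - strength)  # same update rule everywhere
        trials += 1
    return trials

for pair in sorted(SALIENCE):
    print(pair, "->", trials_to_learn(*pair), "trials")
```

The update rule is identical everywhere (the faculty is general), but the salience table makes smell-to-nausea learnable in a handful of trials and color-to-nausea only after a hundred or so; that asymmetry is the observable signature being asked for.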
Setting up an experimental protocol to test this that was (a) ethical and (b) not horribly tainted by the existing cultural experience of human subjects would be tricky. On thirty seconds of thought I can’t think of a way to do it, which ought not significantly affect your beliefs about whether it’s doable.
To parallel what TheOtherDave said, is it really the case that the retributive theory of justice is essentially rejected in Europe?
That said, my impression is that the US is more concerned about this principle than Europe, which I suspect is related to the fact that the US is more religious than Europe.
That’s what I thought, until I tried talking to people about how justice could be improved. Some people really do take punishment of criminals as terminal. There are some in this very thread.
I assign greater preference to universes in which those who commit certain acts experience outcomes they rank lower than the outcomes they would have experienced had they not committed those acts, all else being equal. This roughly translates into treating punishment for certain things as a terminal value as well as an instrumental one.
This position does not strike me as one particularly out of accord with reasonable human values.
I’ve since improved my metaethics to acknowledge that I want punishment for criminals, but it is a rather small want, vastly overpowered by social-good game-theory considerations.
(I wonder whether they wouldn’t like a world without crime because that would mean there’s no-one to punish.)
It’s not so much that U(punish criminals) is high; it’s that U(punish criminal | criminal) is high.
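One way to make that distinction concrete, with invented utility numbers:

```python
# Worlds are (crime status, punishment status); the numbers are invented.
U = {
    ("no crime", None):         10,  # best world: no one to punish
    ("crime", "punished"):       5,
    ("crime", "unpunished"):     0,  # worst world
}

# U(punish criminal | criminal) is high: conditional on a crime having
# occurred, punishing beats not punishing.
assert U[("crime", "punished")] > U[("crime", "unpunished")]

# But U(punish criminals) is not high unconditionally: the crime-free
# world still beats the crime-and-punishment world, so this preference
# gives no reason to want criminals to exist just to punish them.
assert U[("no crime", None)] > U[("crime", "punished")]
```

Under this ordering, the worry in the grandparent dissolves: someone with these preferences strictly prefers the world without crime, even though, given a crime, they prefer that it be punished.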