I would say that “becoming strong and oppressing the weak” is the default goal. You don’t need any kind of morality here; it’s just the biology of a social species. Being strong has natural rewards.
Morality is what allows you to have alternative goals. Morality means that “X is important too”, sometimes even more important than being strong (though usually it is good to both be strong and do X). Morality gives you social rewards for doing X.
Being strong is favored by genes; doing X is favored by (X-promoting) memes. In the absence of memes (more precisely, in the absence of strong memes saying what is right and wrong), humans fall back on their natural social behavior, the pecking order. In the presence of such memes, humans try to do X, and at the same time secretly try to be strong, but they cannot use overly obvious means to do so.
Technically, we could call the pecking order a “null morality”; like the “null hypothesis” in statistics.
That’s forgetting that morality doesn’t come from nowhere; it comes from genes too. Because life is full of iterated prisoner’s dilemmas, because gene survival requires the survival of your close relatives, because of the way the brain is shaped (like the fact that empathy very likely comes, at least in part, from the way we reuse our own brain circuits to predict the behavior of others).
Moral theories are “artificial constructs”, as are all theories. They are generalizations, they are abstractions, they can conflict with the “genetic morality”, and yes, memes play a huge role in morality. But the core of morality comes from our genes: care for our family, “tit-for-tat with initial cooperation” as the winning strategy for the IPD, empathy, and so on.
Even if ultimately everything comes from the genes, we have to learn some things, while other things come rather automatically.
We educate children to behave nicely to others; they don’t get this ability automatically just because of their genes. On the other hand, children are able to create “Lord of the Flies”-like systems at school without being taught to. Both behaviors are based on evolution, both promote our genes in certain situations, but still one is the default option, while the other must be taught (i.e., transferred by memes).
And by the way, the Prisoner’s Dilemma is not a perfect model of reality, and the differences are very relevant for this topic. The Prisoner’s Dilemma and the Iterated Prisoner’s Dilemma are modelled as a series of 1:1 encounters, where what happens stays private between the two interacting players; each player tries to maximize their own utility; and each encounter is scored independently. In real life, people observe how others behave even in encounters they are not part of; people have families and are willing to sacrifice some of their own utility to increase their family’s utility; and the result of one encounter may influence your survival or death, your health, your prestige, etc., which in turn influence the rules of the following encounter.

This results in new strategies, such as “signal membership in a powerful group G, play tit-for-tat with initial cooperation against members of G, and defect against everyone else”, which works if group G holds a majority. The problem then becomes: how will people agree on what the right group G is? In small societies, the family can be such a group; in larger societies, memetic similarity can play the same role. If you consider that humans are not automatically strategic, why not make a meme M which teaches them this strategy and at the same time defines group G as “people who share the meme M”? And here come morality, religion, football fandom, et cetera.
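As a toy illustration of why the group-signalling strategy pays off only when G is large enough, here is a sketch with expected payoffs. All the numbers, and the assumption that outsiders play plain tit-for-tat with everyone, are mine, not anything from the discussion:

```python
# Toy expected-payoff model of "tit-for-tat inside group G, defect outside".
# Payoff values and the outsiders' strategy are illustrative assumptions.
T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker's payoff
k = 10                    # rounds per pairing

def expected_payoffs(p):
    """Expected per-pairing payoffs when a fraction p of the population
    belongs to group G. G-members play tit-for-tat (initial cooperation)
    inside G and always defect outside; outsiders play plain tit-for-tat."""
    g_vs_g = k * R               # mutual cooperation inside G
    g_vs_out = T + (k - 1) * P   # G defects; the outsider cooperates once, then retaliates
    out_vs_g = S + (k - 1) * P
    out_vs_out = k * R           # outsiders cooperate with each other
    g_member = p * g_vs_g + (1 - p) * g_vs_out
    outsider = p * out_vs_g + (1 - p) * out_vs_out
    return g_member, outsider
```

With these particular payoffs, a G-member outscores an outsider once G holds roughly 43% of the population or more; below that, signalling membership in G is a losing move. The exact threshold depends on the assumed payoffs, but the qualitative point (the strategy needs G to be big) does not.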
I would certainly agree that without an agreed-upon set of behavioral norms, coordinating peer pressure is difficult. I would also agree that there always tends to be some set of behavioral norms, often several conflicting sets, some of which we may not want to label “morality”.
It is not clear to me that the distinction you want to draw between “natural” and “alternative” norms is quite as clear-cut as you make it sound. Nor is it clear to me that that distinction maps quite as readily to genetic vs. cultural factors as you imply here.
But I would certainly agree that some norms are more easily arrived at (that is, require less extensive training to impart) than others, and that in the absence of strong enforcement of the harder-to-impart norms (what you’re describing as “alternative goals”/“morality” propagated by memes), the easier-to-impart ones (what you describe as “natural” and genetically constrained) tend to influence behavior more.
I guess my comment seems too dichotomous; I did not intend it that way. Basically, I wanted to say that if you have, e.g., children without proper upbringing (or in an environment that allows them to act against their upbringing), their behavior easily collapses to something most dramatically described in “Lord of the Flies”, which is rather similar to what social animals do: establishing group hierarchy through intra-group violence and threats. I call it “natural” because this is what happens unless people use some strategy to prevent it.
But of course both building the pecking order and the desire to avoid its negative consequences for the people at the bottom are natural, i.e. driven by the self-interest of our genes; it’s just that the former is easier to do, while the latter requires some thinking, strategy, coordination, and infrastructure (laws, police, morality, religion, etc.) to be done successfully. It feels worth doing, but it can be done in a few different ways, and we often disagree about the details.
It’s like in the Prisoner’s Dilemma: in the short term (one turn), defecting is always better than cooperating; and if you imagine an agent without memory, or unable to distinguish between individual players, then in a world consisting of such agents, always defecting would be the winning strategy. Only the ability to remember and iterate allows the strategy of punishment, and then “tit-for-tat with initial cooperation” becomes a successful implementation of the more general principle “cooperate with those who cooperate with you, and punish those who defect”.
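The point about memory can be made concrete with a minimal iterated-PD simulation. This is a sketch using the conventional payoff values T=5, R=3, P=1, S=0 (my choice of numbers, not anything stated above):

```python
# Minimal iterated Prisoner's Dilemma with conventional payoffs (an assumption).
T, R, P, S = 5, 3, 1, 0
PAYOFF = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def play(strat_a, strat_b, rounds):
    """Run an iterated PD; each strategy sees only the opponent's past
    moves, i.e. its 'memory' of the opponent."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# A memoryless strategy cannot punish; it ignores the opponent's history.
always_defect = lambda opp: 'D'
# Tit-for-tat with initial cooperation: cooperate first, then mirror.
tit_for_tat = lambda opp: 'C' if not opp or opp[-1] == 'C' else 'D'
```

Over ten rounds, two tit-for-tat players earn 30 points each, while a defector facing tit-for-tat grabs the temptation payoff exactly once and then gets punished every round after (14 points vs. the retaliator’s 9), which is how memory turns punishment into a viable strategy.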
But in real life, those who can punish are sometimes different from those who have been harmed. For example, if someone steals from you, you will try to punish them; but for a society without theft, it is necessary that people punish even those who stole from someone else. (Otherwise the thieves would just have to carefully select their targets among the weaker people.) Here we have a problem, because engaging in punishment has costs (if you see someone stealing and try to stop them, the thief may hurt you) and no direct benefit for the punisher. This can be fixed by a system where people are rewarded for punishing those who have harmed someone else. For such a system to work, it is necessary to have an agreement about what counts as harm, what the proper punishment is, and what the reward is (social esteem for the hero, a salary for the policeman, etc.). This is difficult to organize.
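The incentive problem here can be reduced to a toy payoff calculation. The function names and all the numbers are mine, purely illustrative:

```python
def bystander_punishes(cost, reward):
    """A self-interested bystander intervenes only if the reward
    (esteem, salary, ...) outweighs the personal cost (risk of harm)."""
    return reward > cost

def theft_pays(loot, penalty, punish_cost, punish_reward):
    """Theft stays profitable whenever no bystander is paid enough to act;
    once punishment is rewarded, the thief faces the penalty."""
    if bystander_punishes(punish_cost, punish_reward):
        return loot - penalty > 0
    return loot > 0
```

Without any reward for punishers, even a large nominal penalty never gets applied and theft pays; subsidize the punisher (esteem for the hero, a salary for the policeman) and the very same theft becomes unprofitable. This is the coordination problem in miniature: someone has to agree on, and fund, `punish_reward`.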
Yes, I continue to agree that without an agreed-upon set of behavioral norms, coordinating peer pressure is difficult, and that some norms are easier to coordinate (more natural, if you like) than others.
And yes, the dichotomy is part of what I’m skeptical about. Even in a “pecking order” environment, for example, I suspect a norm saying that low-status tribe members don’t get to steal from high-status members is relatively easy to coordinate. That’s not the same as my culture’s notion of theft, but neither is it the same as a complete absence of a notion of theft. I suspect it’s much more of a continuum, and much more variable, than you make it sound.
I agree there is a continuum of possibilities; that’s how things developed. But that does not mean that all parts of the continuum exist in reality with the same frequency, or even that the frequency is a monotonic function.
I guess I have trouble explaining what I mean, so I will use a metaphor: the computer. You can have no computer. You can use your fingers. You can use pebbles. You can use an abacus. You can have a mechanical calculator, a vacuum-tube calculator, or some kind of integrated-circuit computer. It’s not literally a continuum, but there are many steps. Now make a histogram of how often the people around you use each of these… and you will probably find that most people use some integrated-circuit computing machine, or nothing. There is very little in between. So in theory there is a continuum, but it can be approximated as just two choices: an integrated-circuit computer, or no computing machine at all. There is very little incentive to use an abacus, or even to invent one. You don’t upgrade from “no calculator” to “integrated-circuit calculator” by discovering the abacus and so on; you just go to a shop and buy one. And even the people who design and build integrated-circuit calculators don’t start from the abacus. The middle part does not exist anymore, because compared with both extremes it is not cost-effective.
It’s not quite the same with morality, but my point is that there is so much morality around (it feels kind of funny when I write it) that very few people invent morality from scratch. You copy it, or you ignore it; or you copy some parts, or you copy it and forget some parts. Inventing it all in one lifetime is almost impossible. So it seems safe to say that the higher levels must be carried by memes. It’s like saying that you can find pebbles or invent an abacus, but you have to buy an integrated-circuit computer, unless you are an exceptional person.
I agree with you that very few behavioral norms are invented from scratch, and that the more complex ones pretty much never are, and that they must therefore be propagated culturally.
That said, your analogy is actually a good one, in that I have the same objection to the analogy that I had to the original.
Unlike you, I suspect that there’s quite a lot in between: some people use integrated-circuit computers, some people (often the same people) use pen and paper, some people use a method of successive approximation, some people count on their fingers. It depends on the people, it depends on the kind of calculation they are doing, and it depends on the context in which they’re doing it. I might open an Excel spreadsheet to calculate 15% of a number if I’m sitting in front of my computer; I might calculate it as “a tenth plus half of a rounded-up tenth” if I’m working out a tip at a restaurant; I might solve it with pencil and paper if it’s the tenth in a series of arithmetic problems I’m solving on a neuropsych examination.
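For what it’s worth, the restaurant heuristic mentioned above (“a tenth plus half of a rounded-up tenth”) computes like this (the function name is mine):

```python
import math

def quick_fifteen_percent(bill):
    """The tip heuristic from the comment: a tenth of the bill,
    plus half of that tenth rounded up to a whole unit."""
    tenth = bill / 10
    return tenth + math.ceil(tenth) / 2

# For a bill of 38: tenth = 3.8, rounded-up tenth = 4, tip = 3.8 + 2.0 = 5.8
# (exact 15% would be 5.7, so the shortcut errs slightly in the waiter's favor).
```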
When you say “most people use some integrated-circuit computing machine, or nothing” you end up excluding a wide range of actual human behavior in the real world.
Analogously, I think that when you talk about the vast excluded middle between “morality” and “pecking order” you exclude a similarly wide range of actual human behavior in the real world.
When that range is “approximated as just having two choices” something important is lost. If you have some specific analytical goal in mind, perhaps the approximation is good enough for that goal… I’m afraid I’ve lost track of what your goal might be, here. But in general, I don’t accept it as a good-enough approximation; the excluded middle seems worthy of consideration.