Ok, so let me state my argument maximally clearly. People who can convince others that they make good, trustworthy allies will find it easier to make alliances. This is beyond reasonable doubt—it is why we are so concerned with attributing motives to people, with analyzing people’s characters, with sorting people into ethical categories (she is a liar, he is brave, etc.).
If there is a cheap but not completely free way to signal that you have the disposition of a moral, trustworthy person, for example by rooting for the underdog in faraway conflicts, then we should expect people to have the trait of displaying that signal.
All that remains is to conclude that rooting for the underdog rather than the overdog really does signal to others that you are a good person, one who is more likely than average to side with whoever has the moral high ground rather than whoever has the most power. If the human brain could lie perfectly, rooting for the underdog in a faraway conflict would carry no information about what you will do in a near conflict. But the human brain somehow didn’t manage to become a maximally efficient lying Machiavelli, so someone who displays moral opinions about faraway conflicts presumably behaves at least a little more ethically in near conflicts.
The mechanism here is that there is a weak connection between what we say about Israel/Palestine [the faraway conflict] and how we behave in our personal lives [the nearby conflict]. My experience with people who are e.g. pro-Palestine bears this out—they tend to be that Guardian-reading, almost-hippie type, who might be a little fuzzy-headed but are probably more likely to help a stranger. This weak connection means that you can do inference about someone’s behavior in a near situation from what they say about a far situation. The survival advantage of claiming to support the underdog follows.
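To make that inference step concrete, here is a minimal Bayes-rule sketch in Python. The numbers are purely illustrative assumptions, not anything claimed above; the point is only that even a weak far/near correlation makes the cheap signal carry non-zero information about near behavior.

# Illustrative only: a weak correlation between "voices support for the far
# underdog" and "is a reliable ally nearby" still moves the posterior.
p_helpful = 0.5                    # prior: chance a random person is a reliable ally
p_signal_given_helpful = 0.6       # helpful people voice far-underdog support a bit more often
p_signal_given_unhelpful = 0.4     # unhelpful people do so a bit less often

# Total probability of seeing the signal.
p_signal = (p_signal_given_helpful * p_helpful
            + p_signal_given_unhelpful * (1 - p_helpful))

# Posterior: chance the person is a reliable ally, given the cheap signal.
p_helpful_given_signal = p_signal_given_helpful * p_helpful / p_signal
print(p_helpful_given_signal)      # 0.6: a modest but real update, which is all the argument needs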
Another possible mechanism is that by supporting the underdog you put yourself slightly at risk [the overdog won’t like you], so this is a costly signal of strength. This I find a little less convincing, but still worth considering.
“If there is a cheap but not completely free way to signal that you have the disposition of a moral, trustworthy person, for example by rooting for the underdog in faraway conflicts, then we should expect people to have the trait of displaying that signal.”
During the time span when the underdog tendency was presumably evolving, I doubt there was any awareness of faraway conflicts that didn’t touch the observer. Awareness of geographically distant conflicts is a relatively modern phenomenon.
Here is an alternative explanation. The inclination to protect the weak from harm provides reproductive advantage—parents protect their young who go on to reproduce. This tendency is thoroughly bound up with empathic responses to distress signals from the weak and defenseless. It’s the default position.
This strategy works up to the point where the aggressor poses an overwhelming threat. Challenging that big silverback when he’s killing someone else’s young could buy you a heap of trouble—better to form an alliance, for one’s own safety and the safety of one’s young (who can go on to reproduce if they survive). So, when the cost is low we’re inclined to feel empathy for the weak—it’s the default position. But when the threat is more immediate and overwhelming, we identify with and seek alliances with the aggressor. Nothing about signaling our moral standing to others is necessary in this formulation.
Clearer, but I remain unconvinced. The signal, as you present it, seems to provide so little real benefit that I can’t distinguish it from the noise of random mutation.