Why not go with the explanation that doesn’t multiply entities beyond necessity? Why should we assume that there was a specific strategic circumstance in our evolutionary past that caused us to make the near-far distinction when it could very easily—perhaps more easily—be the side-effect of higher reasoning, a basic disposition towards kindness, or a cultural evolution? Isn’t it best practice to assume the null hypothesis until there’s compelling evidence of something else?
There’s a distinction between what I believe is more likely to be true, and what I wish were true instead. The null hypothesis is always more likely to be correct than any specific hypothesis. But if I have to stick with a very unpredictive hypothesis, my ability to predict the world decreases, and I will therefore do worse.
In this case, I am fairly sure that the near/far distinction gives good reason to believe that the Israel experiment doesn’t contradict the cave man fight: i.e. what people do in far situations can be the opposite of what they do in near situations.
But as to why people root for the underdog, rather than just choosing at random… I am less sure.
The empathy argument has been made independently a few times, and I am starting to see its merit. But empathy and signalling aren’t mutually exclusive. We could be seeing an example of exaptation here: the empathy response tended to make people sympathize with the underdog, and this effect was then reinforced because it was actually advantageous as a signal of virtue and power.
The original poster here seemed to basically be saying “This is a minor effect of such complexity that it could be entirely the result of selective pressures on other parts of human psychology, which give us this predisposition.” This seems highly plausible, given that I don’t think anyone has come up with a story of how decisions in this circumstance influence differential selective pressure. It seems that if you can’t find a reasonably clear mechanism for differential reproductive success, you should not bend over backwards to invent one (that is, if it’s that hard to find one, maybe it’s because it isn’t there).
My personal theory is that it stems from storytelling, and thus availability bias. Almost no story has the overdog as the good guy. This is probably because a story requires its conflict to be unpredictable: a big good guy crushing a little bad guy with little resistance is too foregone a conclusion. Thus, in every story we hear, the good guy is the underdog, and we like him. When we hear a story about Israel-Palestine (one that happens to represent reality, roughly), we side with the little guy because, based on a massive compilation of (fictional) “evidence,” the little guy is always right.
Of course, explaining the psychology of good stories is rather difficult; still, “side effect of other aspect of human psychology” seems more accurate than “result of differential reproduction” for something this specific, abstract, and practically useless. Though, of course, if someone comes up with a convincing mechanism for differential reproductive success, that would probably change my mind.
> It seems that if you can’t find a reasonably clear mechanism for differential reproductive success, you should not bend over backwards to do so (that is, if it’s that hard to find one, maybe it’s because it isn’t there).
You should bend over backwards until you find one or two “differential fitness” explanations, then you should go test them!
EDIT: And, of course, you should also look for hypotheses not based upon differential reproduction. And test those too!
I think that “signalling your virtue and power” isn’t a crazily complex explanation. We are in need of evidence methinks.
> If I have to stick with a very unpredictive hypothesis, I have a decreased ability to predict the world, I will therefore do worse.
Not true. If you have a prediction model that is non-random and wrong, you will get better results from simple random predictions.
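A toy illustration of this point, on a hypothetical task invented for the example (predicting a binary sequence): a model that is systematically wrong scores below chance, while random guessing sits near 50%.

```python
import random

def accuracy(preds, truth):
    """Fraction of predictions that match the truth."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

random.seed(0)
n = 10_000
truth = [i % 2 for i in range(n)]  # the world alternates 0, 1, 0, 1, ...

# A non-random model that has the pattern exactly backwards.
wrong_model = [1 - t for t in truth]

# Random guessing, with no model at all.
coin_flips = [random.randint(0, 1) for _ in range(n)]

print(accuracy(wrong_model, truth))  # 0.0 -- far worse than chance
print(accuracy(coin_flips, truth))   # ~0.5
```

So "non-random but wrong" can be the worst of both worlds: the model's structure just gives its errors structure too.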
Just so you know, you can use a greater-than sign to quote text, which will look like this:

> quoted text
If you actually want to italicize text, you can use stars, which will look *like this*.
HTML will not avail you.
For more, check the help box—whenever you’re in the middle of writing a comment, it’s below and to the right of the editing window.
Yes, this is true. I didn’t express that very well.
What I meant was that a more specific correct hypothesis is much more useful to me than random predictions.