All that remains to explain is what situation in our evolutionary past corresponds to hearing about a faraway conflict.
No! You are assuming that underdogism is something evolution specifically optimized for. It could very well be a useless side effect of other unrelated optimizations.
Yes, it could be. But “random side effect of unspecified other optimisations” is a completely general explanation for any trait of any organism, so as a hypothesis it is totally uninformative. We should actively look for more informative hypotheses, keeping this one as a last resort or “default” to return to if nothing more informative fits the evidence. I’ll vote your comment up if you can think of something more specific.
It should be taken as read that when I propose an explanation for something, I implicitly admit that I could be wrong.
Yes, it could be. But “random side effect of unspecified other optimizations” is a completely general explanation for any trait of any organism, so as a hypothesis it is totally uninformative.
The obvious “random side effect” theory, for underdog-empathy, is that we tend to empathize with agents who feel the way underdogs feel (e.g., in pain, afraid, pulling together their pluck to succeed) regardless of whether the agent who feels that way is an underdog, or feels those emotions for some other reason.
To test this theory: do we in fact feel similarly about people struggling to scale a difficult mountain, or to gather food for the winter in the face of difficulty and starvation risk? Or does our empathy with underdogs (in group conflict situations specifically; not in struggles against non-agent difficulties like mountains) bring out a response that would not be predicted from just the agent’s fear/pain/pluck? Also, can our responses to more complex fear/pain/pluck situations (such as the person struggling to avoid starvation) be explained from simpler reactions to the individual components (e.g., the tendency to flinch and grab your thumb when you see someone hit his thumb with a hammer)?
I think this is the least wrong post here. If we assume that our pattern-recognition ability, which has obvious evolutionary advantages, is the source of empathy (which makes sense to me in terms of individual selection), then looking at a far situation will trigger certain pattern recognitions drawn from our past experience. In my past experience, more is gained by being the underdog and winning than by being the assumed winner and winning. Because I can see that, I will emotionally identify with the underdog more often: the outcome is greater for individuals in that group, and people tend to identify with individuals in far populations rather than with groups. I’d add that personally being part of the underdog group and winning would have much more of an impact than being part of the assumed winning side and winning, much as a gambler remembers the wins more than the losses, and thus I would be pulling for the underdogs.
This can also explain why my reasonableness will lead me to support the overdog in near situations. If there is a split in my group and I have to choose which side I’m on, pattern recognition helps me realize that I am more likely to come out ahead if I ally with the overdog (überhund?). Thus, in such a situation I would be more likely to support the überhund than the underdog, because it directly affects my situation.
We should actively look for more informative hypotheses, keeping this one as a last resort or “default” to return to if nothing more informative fits the evidence.
Roko’s heuristic (quoted above) isn’t terrible, but as LW-ers equipped with Bayes’ theorem, we can do better. Instead of betting on whichever explanation predicts the observations in most detail (the “most informative hypothesis”), we can bet on whichever explanation has the best combination of predictive power and prior probability.
P(hypothesis | data) = P(hypothesis) * P(data | hypothesis) / P(data).
Let’s say we’re trying to decide between theory A—“we like underdogs because underdog-liking was specifically useful to our ancestors, for such-and-such a specific reason”—and theory B—“underdog-liking is an accidental side-effect of other adaptations”. Roko correctly points out that P(data | hypothesis A) is larger than P(data | hypothesis B). That’s what it means to say hypothesis A is “more informative” or has “more predictive power”. (Well, that, and the fact that hypothesis A might also fit some larger set of future data that we might collect in future experiments.) But it is also true that P(hypothesis B) is much larger than P(hypothesis A). And if our goal is to estimate whether smoofra’s general hypothesis B or Roko’s specific hypothesis A is more likely to be true, we need to focus on the product.
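To make the comparison concrete, here is a minimal sketch in Python. All of the numbers are hypothetical placeholders, chosen only to illustrate the structure of the argument; nobody in this thread has actually estimated them.

```python
# Comparing two hypotheses by prior * likelihood.
# All numbers are hypothetical placeholders, for illustration only.

# Hypothesis A: "underdog-liking was specifically selected for" (specific adaptation)
# Hypothesis B: "underdog-liking is a side-effect of other adaptations" (general)
prior_A = 0.05        # specific hypotheses get less prior probability
prior_B = 0.95        # the general side-effect hypothesis gets more

likelihood_A = 0.50   # A predicts the observed underdog-empathy in detail
likelihood_B = 0.10   # B merely permits it, so it assigns the data less probability

# Unnormalized posteriors; P(data) cancels when we only compare the two.
posterior_A = prior_A * likelihood_A   # 0.025
posterior_B = prior_B * likelihood_B   # 0.095

print(posterior_A, posterior_B)
```

With these made-up numbers, hypothesis B still comes out ahead even though hypothesis A assigns the data five times the probability, which is exactly why we need to focus on the product rather than on the likelihood alone.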
We can estimate the relative prior probabilities of hypotheses A and B partly by thinking about how much more general B is than A (general hypotheses have higher priors) and partly by gathering data on how good an optimizer evolution is, or how often evolution generates specific adaptations vs. general side-effects. Regarding how good an optimizer evolution is, Michael Vassar likes to note that adult baboons and human toddlers have to learn how to hide; hiding is useful in a variety of situations, but its usefulness was not sufficient to cause specific “how to hide” adaptations to evolve. If similar examples of missing adaptations are common, this would increase the prior weight against hypotheses such as Roko’s near/far account of underdog-empathy. If there are plenty of clear examples of specific adaptations, that would increase weight toward Roko’s near/far underdog theory.
Evolutionary psychology is important enough that figuring out what priors to put on specific adaptations vs. side-effects would be darn useful. Anyone have data? Better yet, anyone with data willing to write us a post, here?
Even if the functional hypothesis is less likely than the random hypothesis, we can take it further by exploring it. Finding structure can lead to finding more structure, letting us climb further up the ladder of knowledge.
Why not go with the explanation that doesn’t multiply entities beyond necessity? Why should we assume that there was a specific strategic circumstance in our evolutionary past that caused us to make the near-far distinction, when it could very easily—perhaps more easily—be a side-effect of higher reasoning, a basic disposition towards kindness, or cultural evolution? Isn’t it best practice to assume the null hypothesis until there’s compelling evidence of something else?
There’s a distinction between what I believe is more likely to be true and what I wish were true instead. The null hypothesis is always more likely to be correct than any specific hypothesis. If I have to stick with a very unpredictive hypothesis, I have a decreased ability to predict the world, and I will therefore do worse.
In this case, I am fairly sure that the near/far distinction gives good reason to believe that the Israel experiment doesn’t contradict the cave man fight: i.e. what people do in far situations can be the opposite of what they do in near situations.
But as to why people root for the underdog, rather than just choosing at random… I am less sure.
The empathy argument has been made independently a few times, and I am starting to see its merit. But empathy and signalling aren’t mutually exclusive. We could be seeing an example of exaptation here—the empathy response tended to make people sympathize with the underdog, and this effect was reinforced because it was actually advantageous as a signal of virtue and power.
The original poster here seemed to basically be saying “This is a minor effect of such complexity that it could be entirely the result of selective pressures on other parts of human psychology, which give us this predisposition.” This seems highly plausible, given that I don’t think anyone has come up with a story of how decisions in this circumstance influence differential selective pressure. It seems that if you can’t find a reasonably clear mechanism for differential reproductive success, you should not bend over backwards to find one (that is, if it’s that hard to find one, maybe it’s because it isn’t there).
My personal theory is that it stems from storytelling, and thus availability bias. Almost no story has the overdog as the good guy. This is probably because a story requires conflict whose outcome is not predictable: a big good guy crushing a small bad guy with little resistance is too foregone a conclusion. Thus, in every story we hear, we like the good guy. When we hear a story about Israel-Palestine (one that happens to roughly represent reality), we side with the little guy because, based on a massive compilation of (fictional) “evidence,” the little guy is always right.
Of course, explaining the psychology of good stories is rather difficult; still, “side effect of other aspects of human psychology” seems more accurate than “result of differential reproduction” for something this specific, abstract, and practically useless. Though, of course, if someone comes up with a convincing mechanism for differential reproductive success, that would probably change my mind.
It seems that if you can’t find a reasonably clear mechanism for differential reproductive success, you should not bend over backwards to find one (that is, if it’s that hard to find one, maybe it’s because it isn’t there).
You should bend over backwards until you find one or two “differential fitness” explanations, then you should go test them!
EDIT: And, of course, you should also look for hypotheses not based upon differential reproduction. And test those too!
I think that “signalling your virtue and power” isn’t a crazily complex explanation. We are in need of evidence methinks.
Even if the functional hypothesis is less likely than the random hypothesis, we can take it further by exploring it.
Exactly. There is a distinction between the hypothesis that you think is most likely and the hypothesis that you think is most worth pursuing.
If I have to stick with a very unpredictive hypothesis, I have a decreased ability to predict the world, and I will therefore do worse.
Not true. If you have a prediction model that is non-random and wrong, you will get better results from simple random predictions.
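Here is a minimal sketch in Python of the kind of case being described; the biased coin and the always-“heads” model are made-up examples, chosen only to illustrate the claim, not anything proposed in the thread.

```python
import random

# Toy illustration: a non-random but wrong model can score worse than random guessing.
random.seed(0)

# The world: a biased coin that comes up "tails" about 80% of the time.
world = ["tails" if random.random() < 0.8 else "heads" for _ in range(10_000)]

# A non-random but wrong model: it always predicts "heads".
wrong_model_hits = sum(outcome == "heads" for outcome in world)

# Simple random predictions: guess "heads" or "tails" with equal probability.
random_hits = sum(outcome == random.choice(["heads", "tails"]) for outcome in world)

print(wrong_model_hits / len(world))  # roughly 0.2
print(random_hits / len(world))       # roughly 0.5
```

Of course this only shows that a wrong non-random model can do worse than chance, not that it always will.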
Just so you know, you can use a greater-than sign to quote text, which will look like this:

> like this

If you actually want to italicize text, you can use stars, which will look *like this*.
HTML will not avail you.
For more, check the help box—whenever you’re in the middle of writing a comment, it’s below and to the right of the editing window.
Yes, this is true. I didn’t express that very well.
What I meant was that a more specific correct hypothesis is much more useful to me than random predictions.