The proffered explanations seem plausible. But what about ideas? I think it’s social signaling: ‘Look how clever and independent and different I am, that I can adopt this minority viewpoint and justify it.’
(Kind of like Zahavi’s handicap principle.)
EDIT: It appears I largely stole this variant on signaling strategy from http://www.overcomingbias.com/2008/12/showoff-bias.html . Oh well.
Your mention of signaling gives me an idea.
What if the mechanism isn’t designed to actually support the underdog, but to signal a tendency to support the underdog?
In a world where everyone supports the likely winner, Zug doesn’t need to promise anyone anything to keep them on his side. But if one person suddenly develops a tendency to support the underdog, then Zug has to keep him loyal by promising him extra rewards.
The best possible case is one where you end up on Zug’s side, but only after vacillating for so long that Zug is terrified you’re going to side with Urk and promises everything in his power to win you over. And the only way to terrify Zug that way is to actually side with Urk sometimes.
It seems that supporting an underdog is a more impressive act: it suggests greater confidence in your own abilities, and in your capacity to withstand retribution from the overdog. I’m not sure we actually do support the underdog more when a costly act is required, but we probably try to pretend to support the underdog when doing so is cheap, so we can look more impressive.
In other words, if Zug believes you to be the kind of agent who will make the naively rational decision to side with him, he will not reward you. You then side with Zug, because it makes more sense.
However, if Zug believes you to be the kind of agent who will irrationally oppose him unless bribed, he will reward you. You then side with Zug, because it makes more sense.
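The contrast between those two cases can be sketched as a toy payoff model. All of the numbers and type labels below are illustrative assumptions, not anything from the thread; the point is only that Zug’s bribing policy depends on what kind of agent he believes he is facing, so the credibly “stubborn” type collects a bribe even though both types end up on his side.

```python
# Toy model of the bribery logic above. Payoff values and the
# "naive"/"stubborn" type labels are illustrative assumptions.

BASE = 1.0   # payoff for ending up on the winning (Zug's) side
BRIBE = 2.0  # extra reward Zug pays to secure a doubtful ally

def zug_bribes(agent_type: str) -> bool:
    # Zug rewards only agents he believes might side with Urk.
    return agent_type == "stubborn"

def payoff(agent_type: str) -> float:
    # Both types end up siding with Zug "because it makes more
    # sense" -- but only the stubborn type is bribed first.
    bribe = BRIBE if zug_bribes(agent_type) else 0.0
    return BASE + bribe

print(payoff("naive"))     # naively rational ally: base payoff only
print(payoff("stubborn"))  # credible underdog-supporter: base + bribe
```

Under these assumed numbers the stubborn type nets 3.0 to the naive type’s 1.0, which is the sense in which the “irrational” disposition pays.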
This seems to be another problem of precommitment.
While my own decision theory has no need of precommitment, it’s interesting to note that genes have no trouble with precommitments; they simply make us want to act that way. The urge for revenge, for example, can be seen as the genes making a believable and true precommitment: you don’t reconsider afterward, once you have the benefits, because, thanks to the genes, taking revenge is what you want. The genes don’t face the same dilemma as an inconsistent classical decision theorist, who knows beforehand that he wants to precommit early but will want to reconsider later.
But Zug probably doesn’t care about just one person. Doesn’t the underdog bias still require a way to “get off the ground” in this scenario? Siding with Urk initially flies in the face of individual selection.
Zug need only be slightly more powerful than Urk to start with; then, as more individuals acquire the adaptation, the power difference it’s willing to confront can scale up. I.e., this sounds like it could evolve incrementally.
Ah, makes sense. The modern bias seems specifically connected to major differences, but that doesn’t exclude milder origins.
Social signalling explains almost everything and predicts little. By law of parsimony, supporting underdog ideas seems much likelier to me as a special case of the general tendency Yvain is considering.
In this case, the social signaling interpretation predicts a discrepancy between people’s expressed preferences in distant situations, and people’s felt responses in situations where they can act.
We can acquire evidence for or against the social signaling interpretation by e.g. taking an “underdog” scene, where a popular kid fights with a lone unpopular kid, and having two randomized groups of kids (both strangers to the fighters): (a) actually see the fight, as if by accident, nearby where they can in principle intercede; or (b) watch video footage of the fight, as a distant event that happened long ago and that they are being asked to comment on. Watch the Ekman facial expressions of the kids in each group, and see if the tendency to empathize with the underdog is stronger when signaling is the only issue (for group (b)) than when action is also a possibility (for group (a)). A single experiment of this sort wouldn’t be decisive, but with enough variations it might.
Your experiment wouldn’t convince me at all, because the video-vs.-reality distinction could confound it any number of ways. That said, I upvoted you because no one else here has even proposed a test.