Your mention of signaling gives me an idea.
What if the mechanism isn’t designed to actually support the underdog, but to signal a tendency to support the underdog?
In a world where everyone supports the likely winner, Zug doesn’t need to promise anyone anything to keep them on his side. But if one person suddenly develops a tendency to support the underdog, then Zug has to keep him loyal by promising him extra rewards.
The best possible case is one where you end up on Zug’s side, but only after vacillating for so long that Zug is terrified you’re going to side with Urk and promises everything in his power to win you over. And the only way to terrify Zug that way is to actually side with Urk sometimes.
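To make that trade-off concrete, here is a minimal Python sketch. All the numbers (the baseline reward, the stakes, Urk's payoff) are invented for illustration; the point is just that if Zug's bribe grows with the probability that you defect, the expected payoff peaks at a strictly positive defection rate, so the signal is only credible if you genuinely side with Urk some of the time.

```python
# Toy model of the "vacillate until Zug pays up" idea. All payoff
# numbers here are hypothetical; only the shape of the argument matters.

def expected_payoff(p, base=1.0, stakes=10.0, urk_payoff=0.0):
    """p is the probability you actually side with Urk.
    Zug's bribe scales with how much he fears losing you."""
    bribe = p * stakes
    # With probability 1 - p you take Zug's side (base reward plus bribe);
    # with probability p you really do side with Urk and forgo both.
    return (1 - p) * (base + bribe) + p * urk_payoff

best_p = max((i / 100 for i in range(101)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # 0.45, ~3.02
print(0.0, expected_payoff(0.0))        # never defecting earns no bribe: 1.0
```

Under these made-up numbers the optimum is to side with Urk almost half the time; the exact figure is meaningless, but an always-loyal agent earning no bribe falls out of any version of the model.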
Supporting an underdog seems to be the more impressive act: it suggests confidence in your own abilities and in your capacity to withstand retribution from the overdog. I’m not sure we actually support the underdog more when a costly act is required, but we probably try to pretend to support the underdog when doing so is cheap, so that we look more impressive.
In other words, if Zug believes you to be the kind of agent who will make the naively rational decision to side with him, he will not reward you. You then side with Zug, because it makes more sense.
However, if Zug believes you to be the kind of agent who will irrationally oppose him unless bribed, he will reward you. You then side with Zug, because it makes more sense.
This seems to be another problem of precommitment.
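A Newcomb-flavored way to see the paradox, again as a sketch with invented payoffs: Zug's reward depends on the kind of agent he believes you are, not on the choice you eventually make, so the "irrational" disposition strictly outperforms the naively rational one even though both agents end up at Zug's side.

```python
# Hypothetical payoffs illustrating the disposition argument above.
BASE, BRIBE = 1.0, 5.0

def zug_reward(disposition):
    # Zug only bribes agents he believes would otherwise oppose him.
    return BRIBE if disposition == "opposes_unless_bribed" else 0.0

for disposition in ("naively_rational", "opposes_unless_bribed"):
    # Either agent, at decision time, sides with Zug "because it makes
    # more sense"; the payoff was already fixed by what Zug believed
    # about the agent's disposition.
    total = BASE + zug_reward(disposition)
    print(disposition, "->", total)
# naively_rational -> 1.0
# opposes_unless_bribed -> 6.0
```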
While my own decision theory has no need of precommitment, it’s interesting that genes have no trouble with precommitments; they just make us want to do it that way. The urge for revenge, for example, can be seen as the genes making a believable and genuine precommitment: you don’t reconsider afterward, once you’ve gotten the benefits, because, thanks to the genes, revenge is what you want. The genes don’t face quite the same predicament as an inconsistent classical decision theorist, who knows beforehand that he wants to precommit early but also knows he will want to reconsider later.
But Zug probably doesn’t care about just one person. Doesn’t the underdog bias still require a way to “get off the ground” in this scenario? Initially, siding with Urk flies in the face of individual selection.
Zug need only be slightly more powerful than Urk to start with; then, as more individuals acquire the adaptation, the power difference it is willing to confront scales up. In other words, this sounds like it could evolve incrementally (a toy version is sketched below).
Ah, makes sense. The modern bias seems specifically connected to major differences, but that doesn’t exclude milder origins.
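Here is a rough sketch of that incremental story, under a made-up scaling rule: suppose each fellow carrier of the adaptation counts as one expected ally, offsetting the overdog's power advantage one-for-one. Then the gap a carrier can afford to confront grows smoothly with the adaptation's frequency, from "barely more powerful than Urk" up to major differences.

```python
# Invented rule of thumb: expected allies among fellow carriers offset
# the overdog's power advantage one-for-one. Numbers are illustrative.

def confrontable_gap(adaptation_freq, group_size=50):
    expected_allies = adaptation_freq * (group_size - 1)
    return 1.0 + expected_allies  # a gap of 1 is always confrontable

for freq in (0.02, 0.1, 0.3, 0.6):
    print(f"freq={freq:.2f} -> confrontable power gap ~ "
          f"{confrontable_gap(freq):.1f}")
# freq=0.02 -> ~2.0 ... freq=0.60 -> ~30.4
```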