E.g., suppose there’s some game where you reproduce by getting resources, and you get resources by playing certain strategies. It turns out there’s an equilibrium with 90% strategy A in the ecosystem (by some arbitrary accounting) and 10% strategy B. It’s kind of silly to ask whether it’s A or B that’s winning based on this.
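To make the setup concrete, here’s a minimal replicator-dynamics sketch in the spirit of this example (the payoffs are my own invention, a hawk-dove-style game tuned so the mixed equilibrium lands at 90/10). At the fixed point A and B earn identical average payoffs, so neither is outcompeting the other:

```python
import numpy as np

# Payoff matrix: rows = my strategy (A, B), columns = opponent's strategy.
# Hawk-dove-style numbers with V=9, C=10, chosen so the mixed
# equilibrium sits at x = V/C = 0.9.
payoff = np.array([[-0.5, 9.0],   # A vs A: (V-C)/2,  A vs B: V
                   [ 0.0, 4.5]])  # B vs A: 0,        B vs B: V/2

x = 0.5  # initial fraction of the population playing A
for _ in range(10_000):
    pop = np.array([x, 1.0 - x])
    fitness = payoff @ pop              # expected payoff of A and of B
    avg = pop @ fitness                 # population-average payoff
    x += 0.01 * x * (fitness[0] - avg)  # replicator equation, Euler step

print(f"equilibrium fraction playing A: {x:.3f}")  # ~0.900
```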
But this is an abstraction that would never occur in reality. The real systems that inspire this sort of thing have lots of Pelagibacter ubique, and strategies A and B are constantly diverging into various experimental organisms that fit neither strategy and then die out.
When you choose to model this as a mixture of A and B, you’re already implicitly picking out both A and B as especially worth paying attention to—that is, as “winners” in some sense.
Actually, I guess I endorse this response in the real world too, where if a species is materially changing to exploit a new niche, it seems wrong to say “oh, that old species that’s totally dead now sure was a winner.” If the old species had particular genes with a satisfying story for making it more adaptable than its competitors, perhaps it’s better to take a gene’s-eye view and say those genes won. If not, just call it all a wash.
But in this case you could just say the new variant A′ is winning over A. Like, if you were training a neural network, you wouldn’t say that your random initialization “won” the loss function; you’d say the optimized network scores better loss than the random initialization did.
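In code, that comparison would look something like this (a toy sketch with made-up data, and a linear model standing in for the network to keep it tiny): the meaningful statement is that the trained parameters score a lower loss than the initialization they started from, not that the initialization “won”.

```python
import numpy as np

# Hypothetical data: a noisy linear target.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

loss = lambda w: np.mean((X @ w - y) ** 2)
w = rng.normal(size=3)          # the random initialization
init_loss = loss(w)

for _ in range(500):            # plain gradient descent on squared error
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    w -= 0.05 * grad

# The natural comparison: trained parameters vs. where they started.
print(f"loss at init: {init_loss:.3f}   loss after training: {loss(w):.3f}")
```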
Perhaps I should have said that it’s silly to ask whether “being like A” or “being like B” is the goal of the game.