You could also imagine more toy-model games with mixed ecological equilibria.
E.g. suppose there’s some game where you can reproduce by getting resources, and you get resources by playing certain strategies, and it turns out there’s an equilibrium where there’s 90% strategy A in the ecosystem (by some arbitrary accounting) and 10% strategy B. It’s kind of silly to ask whether it’s A or B that’s winning based on this.
Although now that I’ve put it that way, it does seem fair to say that A is ‘winning’ if we’re not at equilibrium and A’s total resources (by some accounting...) are increasing over time.
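Here’s a minimal sketch of that kind of toy model, with made-up payoff numbers chosen so the mixed equilibrium lands at 90% A / 10% B; the specific payoffs and the discrete replicator-dynamics update are purely illustrative, not anything canonical.

```python
# Two strategies reproduce in proportion to the resources ("payoff") they
# collect. The payoff numbers are invented so the stable mixed equilibrium
# sits at 90% A / 10% B.
payoff = {
    ("A", "A"): 1.0, ("A", "B"): 10.0,
    ("B", "A"): 2.0, ("B", "B"): 1.0,
}

def step(x_a):
    """One generation of discrete replicator dynamics; x_a is strategy A's share."""
    x_b = 1.0 - x_a
    fit_a = x_a * payoff[("A", "A")] + x_b * payoff[("A", "B")]
    fit_b = x_a * payoff[("B", "A")] + x_b * payoff[("B", "B")]
    mean_fit = x_a * fit_a + x_b * fit_b
    return x_a * fit_a / mean_fit   # A's share grows iff fit_a > mean_fit

x = 0.5                              # start well away from equilibrium
for generation in range(200):
    x = step(x)
print(f"share of strategy A after 200 generations: {x:.3f}")   # ~0.900
```

Started from x = 0.5, A’s share climbs for a while (the out-of-equilibrium sense in which A is ‘winning’) and then just sits at 0.9, where the question loses its force.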
Now to complicate things again, what if A is increasing in resource usage but simultaneously mutating to be played by fewer actual individuals (the trees versus pelagibacter, perhaps)? Well, in the toy model setting it’s pretty tempting to say the question is wrong, because if the strategy is changing it’s not A anymore at all, and A has been totally wiped out by the new strategy A’.
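Extending the same illustrative sketch: suppose a mutant A’ plays like A but collects slightly more resources in every matchup, while each A’-player needs ten times the resources per individual (made-up numbers again, just to get the trees-versus-pelagibacter flavour). Then A is driven extinct, and resource share and head count tell different stories about the lineage.

```python
# A' plays like A but collects a bit more in every matchup; each A'-player is
# "bigger" and needs 10x the resources per individual. All numbers are made up.
strategies = ["A", "A'", "B"]
resources_per_individual = {"A": 1.0, "A'": 10.0, "B": 1.0}

base = {("A", "A"): 1.0, ("A", "B"): 10.0, ("B", "A"): 2.0, ("B", "B"): 1.0}
kind = {"A": "A", "A'": "A", "B": "B"}       # A' is an A-type strategy
payoff = {
    (i, j): base[(kind[i], kind[j])] + (0.5 if i == "A'" else 0.0)
    for i in strategies for j in strategies
}

def step(share):
    """One generation of replicator dynamics over resource shares."""
    fit = {i: sum(share[j] * payoff[(i, j)] for j in strategies) for i in strategies}
    mean_fit = sum(share[i] * fit[i] for i in strategies)
    return {i: share[i] * fit[i] / mean_fit for i in strategies}

share = {"A": 0.89, "A'": 0.01, "B": 0.10}   # A' appears as a rare mutant
for _ in range(500):
    share = step(share)

total_resources = 1000.0                      # arbitrary size of the resource pool
for i in strategies:
    individuals = share[i] * total_resources / resources_per_individual[i]
    print(f"{i}: resource share {share[i]:.3f}, individuals {individuals:.0f}")
# A's share goes to ~0; A' ends up holding most of the resources, but spread
# over far fewer individuals than the same resources would have supported as A.
```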
Actually I guess I endorse this response in the real world too: if a species is materially changing to exploit a new niche, it seems wrong to say “oh, that old species that’s totally dead now sure was a winner.” If the old species had particular genes with a satisfying story for making it more adaptable than its competitors, perhaps it’s better to take a gene’s-eye view and say those genes won. If not, just call it all a wash.
Anyhow, on humans: I think we’re ‘winners’ just in the sense that the human strategy has turned out better than our population 200ky ago would have suggested, leading to a boom in population and resource use. As you say, we don’t need to be comparing ourselves to phytoplankton; the game is nonzero-sum.
> E.g. suppose there’s some game where you can reproduce by getting resources, and you get resources by playing certain strategies, and it turns out there’s an equilibrium where there’s 90% strategy A in the ecosystem (by some arbitrary accounting) and 10% strategy B. It’s kind of silly to ask whether it’s A or B that’s winning based on this.
But this is an abstraction that would never occur in reality. The real systems that inspire this sort of thing have lots of Pelagibacter communis, and the strategies A and B are constantly throwing off various experimental organisms that fit neither strategy and then die out.
When you choose to model this as a mixture of A and B, you’re already implicitly picking out both A and B as especially worth paying attention to—that is, as “winners” in some sense.
> Actually I guess I endorse this response in the real world too: if a species is materially changing to exploit a new niche, it seems wrong to say “oh, that old species that’s totally dead now sure was a winner.” If the old species had particular genes with a satisfying story for making it more adaptable than its competitors, perhaps it’s better to take a gene’s-eye view and say those genes won. If not, just call it all a wash.
But in this case you could just say A’ is winning over A. Like if you were training a neural network, you wouldn’t say that your random initialization won the loss function; you’d say the optimized network scores a better loss than the random initialization did.
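A minimal sketch of that analogy (a toy linear model and plain gradient descent, nothing specific to the ecological setup above): the meaningful comparison is the loss at the random initialization versus the loss after training.

```python
import random

data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]    # targets from y = 3x + 1

def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

random.seed(0)
w, b = random.gauss(0, 1), random.gauss(0, 1)        # the random initialization
print(f"loss at initialization: {loss(w, b):.4f}")

lr = 0.01
for _ in range(2000):                                # plain gradient descent
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb
print(f"loss after training:    {loss(w, b):.6f}")   # far lower than at init
```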
Yeah, this makes sense.
Perhaps I should have said that it’s silly to ask whether “being like A” or “being like B” is the goal of the game.