In the case of gradient flow, we expect almost all initial conditions to end up implementing essentially the same function once we restrict attention to on-distribution behavior. This allows us to pick a canonical winner.
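To make that concrete, here's a toy sketch of my own (nothing from the above, and gradient descent standing in for exact gradient flow): two small tanh networks with different random initializations, trained on the same data, typically end up nearly identical on the training distribution while still disagreeing far off-distribution. The architecture, target function, and ranges are all arbitrary choices for illustration.

```python
import numpy as np

def init(seed, hidden=64):
    # Random initialization; the seed is the only difference between the two runs.
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0, 1, (1, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(0, 1, (hidden, 1)) / np.sqrt(hidden),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"], h

def train(p, x, y, lr=0.05, steps=5000):
    # Full-batch gradient descent on 0.5 * MSE (a discretized stand-in for gradient flow).
    n = len(x)
    for _ in range(steps):
        pred, h = forward(p, x)
        err = pred - y
        grad_W2 = h.T @ err / n
        grad_b2 = err.mean(axis=0)
        dh = (err @ p["W2"].T) * (1 - h**2)   # backprop through tanh
        grad_W1 = x.T @ dh / n
        grad_b1 = dh.mean(axis=0)
        p["W1"] -= lr * grad_W1; p["b1"] -= lr * grad_b1
        p["W2"] -= lr * grad_W2; p["b2"] -= lr * grad_b2
    return p

# Training distribution: x in [-1, 1], target sin(3x).
rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, (200, 1))
y_train = np.sin(3 * x_train)

p_a = train(init(seed=1), x_train, y_train)
p_b = train(init(seed=2), x_train, y_train)

x_on  = np.linspace(-1, 1, 100).reshape(-1, 1)   # on-distribution inputs
x_off = np.linspace(3, 5, 100).reshape(-1, 1)    # off-distribution inputs

gap_on  = np.abs(forward(p_a, x_on)[0]  - forward(p_b, x_on)[0]).mean()
gap_off = np.abs(forward(p_a, x_off)[0] - forward(p_b, x_off)[0]).mean()
print(f"mean disagreement on-distribution:  {gap_on:.4f}")
print(f"mean disagreement off-distribution: {gap_off:.4f}")
```

Typically the on-distribution disagreement comes out small relative to the off-distribution one, which is the sense in which the different starting conditions converge to "the same" function where it matters.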
Evolution is somewhat different in that we're working with a historical distribution rather than a random one, but that should only increase the convergence.
The noteworthy part is that despite this convergence, there are still multiple winners, because the outcome depends on your weighting (and I guess because the species aren't independent, too).