[Question] Positive Feedback → Optimization?
When I imagine the beginnings of life on earth, I imagine a handful of molecules which just so happen to catalyze reactions that produce more of those same molecules. The more of molecule X there is, the more molecule X is produced. The chemical kinetic equations contain a positive feedback loop (aka instability).
There might also be other molecules which catalyze their own production. If Y catalyzes its own production more efficiently than X, then we eventually expect to see more Y than X. Still at the level of chemical kinetics, we’d say that the equations contain multiple positive feedback loops, and the feedback loop for Y has a shorter doubling time than the one for X.
We haven’t used the word “fitness” here at all. We’re quite literally talking about eigenvalues of a matrix (i.e. the Jacobian of the kinetic equations of the chemical system) - those are what determine the relevant doubling times, at least close to ambient steady-state concentrations. As we move away from ambient concentrations, the math will get a bit more complicated, but the qualitative idea remains: we’re talking about positive feedback loops, without any explicit mention of fitness or optimization.
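To make the simplest version concrete, here’s a minimal sketch in Python (rate constants made up purely for illustration): two self-catalyzing species with first-order kinetics, where the Jacobian’s eigenvalues are the growth rates and directly set the doubling times.

```python
# Two self-catalyzing species with linear kinetics:
#   d[X]/dt = kX*[X],   d[Y]/dt = kY*[Y]
# Rate constants are made up purely for illustration.
import numpy as np

kX, kY = 0.3, 0.5                     # Y catalyzes its own production faster
J = np.diag([kX, kY])                 # Jacobian of the kinetic equations
growth_rates = np.linalg.eigvals(J).real
doubling_times = np.log(2) / growth_rates
print("growth rates:   ", growth_rates)     # [0.3 0.5]
print("doubling times: ", doubling_times)   # shorter for Y

# Exact solution: each concentration grows as exp(k*t), so the ratio
# Y/X grows like exp((kY - kX)*t) and Y eventually dominates,
# without the word "fitness" appearing anywhere.
x0 = np.array([1.0, 1.0])
for t in [0.0, 5.0, 10.0, 20.0]:
    X, Y = x0 * np.exp(np.array([kX, kY]) * t)
    print(f"t={t:4.1f}  X={X:9.2f}  Y={Y:9.2f}  Y/X={Y/X:7.2f}")
```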
But it sure seems like there should be a higher level of abstraction, at which the positive feedback loops for X and Y are competing, Y eventually wins out due to higher fitness, and there’s some meaningful sense in which the system is optimizing for fitness.
More generally, whenever there’s a dynamical system containing multiple instabilities (i.e. positive feedback loops), it seems like there should be a canonical way to interpret that system as multiple competing subsystems, under selection, optimizing for some kind of fitness function. I’d like a way to take a dynamical system containing positive feedback loops, and say both (a) what the competing subsystems are, and (b) what fitness function it’s implicitly maximizing.
Something like this would likely be useful in a number of areas:
Alignment: notice implicit optimization by looking for dynamic instabilities (e.g. instabilities in imperfect search).
Agent foundations: formulate “agents” as self-reinforcing feedback loops in dynamical systems. Tying effective self-reinforcement to world-models would probably be a key piece (e.g. along these lines).
Biology: generalize evolutionary theory.
Economics: ground economic theory in selection effects (e.g. along these lines) rather than ideal agents, allowing it to apply much more broadly.
Consider the differential equation y′=Ay where A has many positive eigenvalues. This is the simplest case of “a dynamical system containing multiple instabilities (i.e. positive feedback loops)”. Where is the selection? It isn’t there. You have multiple independent exponential growth rates.
Consider y′=f(y), a chaotic system like a double pendulum. Fix y to a particular typical solution.
Now consider y′+z′=f(y+z) as a differential equation in z. Here z represents the difference between y and some other solution to x′=f(x). If you start at z=0 then z stays at 0. However, small variations will grow exponentially. After a while, you just get the difference between two arbitrary chaotic paths.
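Here’s a quick numerical illustration of that divergence, using the Lorenz system as a stand-in chaotic f (it’s shorter to write down than a double pendulum); the tolerances and sample times are arbitrary.

```python
# Two trajectories of a chaotic system (Lorenz, as a stand-in for f)
# started 1e-8 apart; their difference z grows exponentially, then
# saturates once the paths are effectively unrelated.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8/3):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 4001)
y0 = np.array([1.0, 1.0, 1.0])
sol_a = solve_ivp(lorenz, (0, 40), y0, t_eval=t_eval, rtol=1e-9, atol=1e-12)
sol_b = solve_ivp(lorenz, (0, 40), y0 + 1e-8, t_eval=t_eval, rtol=1e-9, atol=1e-12)

diff = np.linalg.norm(sol_a.y - sol_b.y, axis=0)   # |z(t)|
for i in [0, 500, 1000, 1500, 3000]:
    print(f"t={t_eval[i]:5.1f}  |z|={diff[i]:.3e}")
# Exponential growth (a positive Lyapunov exponent), but no selection.
```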
I can’t see a way of meaningfully describing these as optimizing processes with competing subagents. Arguably y′=Ay could be optimizing |y|. However, this doesn’t seem canonical: for any invertible B, the system z(0)=By(0), z′=BAB⁻¹z is exactly isomorphic, but the isomorphism doesn’t preserve the modulus. It does preserve yᵀAy, so that could be the thing being optimized.
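For concreteness, here’s the kind of check I have in mind (matrices chosen arbitrarily): the change of coordinates preserves the eigenvalues, i.e. the growth rates, but not the modulus.

```python
# y' = Ay versus the isomorphic system z' = (B A B^-1) z with z = By:
# the eigenvalues (growth rates) agree, but |y| and |z| do not.
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([0.5, 0.2])              # two independent positive feedback loops
B = rng.normal(size=(2, 2))          # a generic invertible change of coordinates
A2 = B @ A @ np.linalg.inv(B)

print(np.sort(np.linalg.eigvals(A).real))    # [0.2 0.5]
print(np.sort(np.linalg.eigvals(A2).real))   # same growth rates

t = 5.0
y0 = np.array([1.0, 1.0])
y_t = np.exp(np.diag(A) * t) * y0    # exact solution of y' = Ay
z_t = B @ y_t                        # the corresponding solution of z' = A2 z
print(np.linalg.norm(y_t), np.linalg.norm(z_t))   # the moduli differ
```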
+1. The multiple feedback loops have to be competing in some important sense; it’s just not true that “whenever there’s a dynamical system containing multiple instabilities (i.e. positive feedback loops) … there should be a canonical way to interpret that system as multiple competing subsystems...”
In the OP’s case study, the molecules are competing for scarce resources. More abstractly, perhaps we can say that there are multiple feedback loops such that when the system has travelled far enough in the direction pushed by one feedback loop, it destroys or otherwise seriously inhibits movement in the directions pushed by the other feedback loops.
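One standard way to make that scarcity explicit is a chemostat-style toy model: two self-replicators coupled only through a shared resource. The parameters below are made up; the point is the qualitative outcome, competitive exclusion.

```python
# Two autocatalytic species X and Y competing only through a shared
# resource R, chemostat-style. All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

D = 0.1            # dilution rate (resource inflow / washout)
R0 = 1.0           # inflow resource concentration
aX, aY = 0.5, 0.8  # per-resource growth rates: Y is the faster replicator

def rhs(t, state):
    R, X, Y = state
    dR = D * (R0 - R) - (aX * X + aY * Y) * R   # resource supplied and consumed
    dX = X * (aX * R - D)                       # X's positive feedback loop
    dY = Y * (aY * R - D)                       # Y's positive feedback loop
    return [dR, dX, dY]

sol = solve_ivp(rhs, (0, 300), [R0, 0.01, 0.01])
R_end, X_end, Y_end = sol.y[:, -1]
print(f"R = {R_end:.3g}, X = {X_end:.3g}, Y = {Y_end:.3g}")
# Y drives R down to its own break-even level D/aY, which is below X's
# break-even level D/aX, so X washes out: selection emerges from the
# coupling, even though neither equation mentions a fitness function.
```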
Consider a pencil balanced on its point. It has multiple positive feedback loops (different directions it can fall in), and falling far in one direction prevents falling in the others. But once it has fallen, it just sits there. That said, evolution can also settle into a strong local minimum and just sit there.
Mmm, good point. My hasty generalization was perhaps too hasty. Perhaps we need some sort of robust-to-different-initial-conditions sort of criterion.
I think this approach is worth pursuing, at least as a toy model, to identify the salient features of such a system. There is, of course, plenty of research in the area of evolutionary modeling already, though maybe not exactly in the direction you are interested in. Consider spending some time on a literature search and review.
Not sure whether this is what you meant, but there is a difference between a situation where resources are abundant, so that growth is simply exponential at a rate set by the speed of reproduction, and one where resources become scarce, so that reproduction speed is only one important parameter alongside survival and interaction with competitors.
To continue with your example, imagine that Y has a faster doubling rate than X (assuming abundant resources), but X can disassemble Y to create its own copies while Y can’t do the same to X. So there will first be a period when Y exponentially outgrows X, followed by a period where Y gradually disappears.
If you want to model this by matrices or something similar, you need to somehow include this aspect.
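For what it’s worth, here’s one minimal way to include that aspect (all numbers made up): Y replicates faster on its own, but X converts Y into more X, so the crossover comes from a nonlinear term that a fixed matrix can’t capture.

```python
# Y replicates faster (b > a), but X grows both on its own and by
# disassembling Y into copies of itself. Parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.3, 0.5, 0.2   # X growth rate, Y growth rate, disassembly rate

def rhs(t, state):
    X, Y = state
    dX = a * X + c * X * Y    # X also gains whatever it disassembles
    dY = b * Y - c * X * Y    # Y loses whatever X disassembles
    return [dX, dY]

sol = solve_ivp(rhs, (0, 16), [0.01, 0.01], t_eval=np.linspace(0, 16, 9),
                rtol=1e-8, atol=1e-12)
for t, X, Y in zip(sol.t, *sol.y):
    print(f"t={t:5.1f}  X={X:10.3f}  Y={Y:10.3f}")
# Early on Y outgrows X; once X is large enough that c*X > b, Y collapses.
```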
Also, reality will be more complicated, because the values for X and Y and their interaction may depend on the local environment. So it is possible that X eliminates Y in warm waters, but Y survives around the poles. Then it is possible that X evolves into an intelligent species that causes global warming… okay, this is probably outside the scope of the original question.