Water flowing downhill is an optimisation process.
Do you mind telling me what that optimises? In other words, what is the objective function? Water flows downhill because of gravity. It need not optimise anything.
Of course, certain intrinsic properties may make some non-living things survive better than others (long half-lives, water resistance, etc.). But you don’t need to give them any objective as though they have a mind. When you say ‘optimisation,’ you ascribe an objective to something within a set of constraints, and by doing this you imply that some objectives are more ‘desirable’ than others.
I understand that it is just the human mind that makes judgments about ‘desirability’. Yes, I’m suggesting that your views are rather anthropomorphic.
1) Systems do collapse (political systems collapse due to wars, lack of social capital, etc.; financial systems collapse due to mismanagement and failures of the invisible hand; the earth may collapse due to anthropogenic climate change; stars do explode). And this means optimisation, if any, fails. If you want to argue that systems collapse in order to optimise larger systems, please come up with some system-design explanations. I believe that a good optimisation process in a well-designed system is one-directional, at least in the short run. You don’t destroy a building only to recreate it soon afterwards unless you have bad design or miscalculated the requirements. But nature is sometimes stupid enough to destroy a forest in a flash and recreate something very similar several years later.
2) An optimal solution should be preventive rather than corrective. If the objective function of the whole world is ecological stability, then maybe humans shouldn’t be intelligent enough to think up and invent things that harm the environment. And maybe there shouldn’t be things like bushfires in forests that take a century to regrow, or oil spills that kill plankton. Those events hurt the environment more than they benefit it. What do they optimise? Please let me know.
3) The fluctuation theorem, the Gaia hypothesis, etc. depict self-regulating systems. (Natural selection does not. Some species adapt better than others, and this may be destructive in the long run.) And self-regulating systems are not necessarily self-optimising unless the objective function is definable, defined, and maximised when the equilibrium state is reached. And if there are multiple possible equilibria, a self-regulating system may get stuck at a non-optimal equilibrium. I’m not talking about thermodynamic equilibrium here; I’m talking about systems in general (young democracies seem to be good examples).
4) I don’t see a link between evolution and systematic optimisation. Evolution is, locally, a greedy algorithm. In computer science, greedy algorithms don’t normally give the best results; indeed, they can give the worst possible result. Moreover, organisms adapt for themselves, not for the system. They optimise their survival probability (though the process is rather slow), and this could push the ecology from balance to imbalance. This could eventually harm the adapted species themselves.
5) I’m not sure whether Newcomb’s problem somewhat contradicts natural selection when applied to computer systems. In that environment, an AI that chooses options randomly would fare better than an intelligent AI that understands strategic dominance in game theory.
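Point 4’s claim about greedy algorithms can be sketched concretely (a toy illustration of my own, not from the thread): a hill-climber that always moves to the better neighbour stops at whichever peak is nearest, which may be far below the global one.

```python
# A greedy hill-climber on a tiny fitness landscape. Starting near the
# small peak, it gets stuck there; only a lucky start finds the best peak.
landscape = [1, 3, 5, 4, 2, 6, 9, 12, 10]  # index = state, value = fitness

def greedy_climb(start):
    """Repeatedly move to the better neighbour until no neighbour is better."""
    pos = start
    while True:
        neighbours = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
        best = max(neighbours, key=lambda p: landscape[p])
        if landscape[best] <= landscape[pos]:
            return pos
        pos = best

print(greedy_climb(0))  # stops at index 2 (fitness 5), a local peak
print(greedy_climb(8))  # stops at index 7 (fitness 12), the global peak
```

The mechanism is identical in both runs; only the starting point differs, which is shadow’s point about how badly local search can do.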
Water flowing downhill is an optimisation process.
Do you mind telling me what that optimises? In other words, what is the objective function?
In a word, entropy.
Water flows downhill because of gravity. It need not optimise anything.
Water flowing downhill does optimise a function, though. The laws of physics are microscopically reversible—and so are exactly as compatible with water flowing uphill as down. Water flows downhill because of statistical mechanics.
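Tim’s statistical-mechanics point can be illustrated with a microstate count (my toy model, not his): the “spread-out” macrostates correspond to astronomically more microstates, so perfectly reversible dynamics almost always drift toward them.

```python
from math import comb

# 100 two-state molecules: a macrostate with k excited molecules has
# C(100, k) microstates. The evenly spread macrostate utterly dominates
# the fully ordered one, which is why reversible microscopic laws still
# look as though they are "maximising" entropy at the macro level.
n = 100
balanced = comb(n, n // 2)  # microstates with half the molecules excited
ordered = comb(n, 0)        # microstates with none excited (exactly 1)
print(ordered)              # 1
print(balanced > 10**28)    # True: ~1e29 microstates for the balanced state
```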
When you say ‘optimisation,’ you ascribe an objective to something within a set of constraints, and by doing this you imply that some objectives are more ‘desirable’ than others. I understand that it is just the human mind that makes judgments about ‘desirability’. Yes, I’m suggesting that your views are rather anthropomorphic.
Water flowing downhill is an optimisation process.
Do you mind telling me what that optimises? In other words, what is the objective function?
I’ve never seen an academic article saying that the world is maximising entropy (in the thermodynamic sense). I understand that the second law of thermodynamics hints that entropy in a fairly closed system should increase over time.
When a process rather consistently increases (or decreases) the value of a variable, it doesn’t necessarily optimise it! When you see a nation’s positive GDP growth year after year, you can’t say the nation is optimising its GDP. It is tempting, but consistent growth is not a sufficient condition for calling something an optimisation process.
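The GDP point generalises: monotone growth alone does not establish optimisation. A sketch (my own numbers, not from the thread): a variable that increases at every step yet converges far below what is attainable.

```python
# x rises strictly at every step but converges to 1.0, even though
# values up to 10 are attainable: monotone increase is not optimisation.
x = 0.0
history = []
for _ in range(50):
    x = x + (1.0 - x) / 2  # always a strict increase while x < 1
    history.append(x)

assert all(b > a for a, b in zip(history, history[1:]))  # monotone growth
print(round(x, 6))  # ~1.0, nowhere near the attainable maximum of 10
```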
When you say ‘optimisation,’ you ascribe an objective to something within a set of constraints, and by doing this you imply that some objectives are more ‘desirable’ than others. I understand that it is just the human mind that makes judgments about ‘desirability’. Yes, I’m suggesting that your views are rather anthropomorphic.
You are not using the word ‘optimization’ in its mathematical sense—whereas I am.
In an optimisation problem, there is an objective function and, often, a set of constraints. You are trying to find the best solution from all possible solutions. The objective function itself reveals preferences (‘best’ solution—isn’t that subjective?), and this is sometimes inherent, sometimes explicit.
I use the word ‘optimisation’ in its mathematical sense. And I know the difference between definitions and axioms. Objective functions are definitions, not axioms. You can’t take them as facts! In an optimisation problem, you start with an objective function and a set of constraints, and then you work out an optimal solution. This is the real optimisation process. You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory… although the phenomenon isn’t efficient in giving the optimal outcome.
Suppose one day you observe the global economy. You see the trend that global production, in real terms, is increasing. Can you conclude that the world’s economy is a process that optimises output? No! That is just a candidate story, not a fact.
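For contrast, here is what shadow’s “real optimisation process” looks like in miniature (an illustrative problem of my own): the objective function and constraints are stated first, and the solution is worked out from them.

```python
# Maximise 3x + 4y subject to x + 2y <= 14, 3x - y >= 0 and x - y <= 2,
# over non-negative integers (brute force, for clarity).
best = None
for x in range(0, 15):
    for y in range(0, 8):
        if x + 2 * y <= 14 and 3 * x - y >= 0 and x - y <= 2:
            value = 3 * x + 4 * y
            if best is None or value > best[0]:
                best = (value, x, y)

print(best)  # (34, 6, 4): the solution follows from the stated objective
```

Here nothing is inferred from observation after the fact; the preferences are explicit in the objective before any solving begins.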
You are suggesting that my views on this topic are anthropomorphic?!? Uh, they are the facts of the matter.
Definitely not facts.
The Gaia hypothesis is the way some biologists see how the world works. “Optimising Gaia” is a story, the strongest of the Gaia hypotheses. It is as though the Earth has a mind and tries to adjust herself to be biologically favourable (the objective function here is ecological). Regardless, the point stands: all versions of the Gaia hypothesis are maps, not territories.
You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory… although the phenomenon isn’t efficient in giving the optimal outcome.
The phenomenon isn’t always “efficient” at producing entropy—because of constraints imposed by physical law. Also, in general, optimisation processes are not guaranteed to find the “optimal outcome”—due to local maxima. I am not making the idea of entropy maximisation up—there’s a large literature about it dating back to 1922. Check my references.
In an optimisation problem, there is an objective function and, often, a set of constraints. You are trying to find the best solution from all possible solutions. …
While I generally agree with you in this debate, and disagree with Tim Tyler’s claims that spontaneous dissipation of free energy exemplifies Nature’s optimization of entropy production, I have to agree with ata. There is an important distinction between an optimization problem and an optimization process. And the distinction is definitely not that the process generates the solution to the problem.
You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory… although the phenomenon isn’t efficient in giving the optimal outcome.
Yep, that is what is happening, alright. But this isn’t quite as disreputable as you make it sound. Take, for example, biological evolution under natural selection—the canonical example of an ‘optimization process’ as the phrase is used here. R.A. Fisher proved that (under the admittedly unrealistic assumption of an unchanging environment) the average ‘fitness’ of the organisms in a population subject to natural selection can only increase, so long as the mutation rate is moderate. So what is ‘fitness’? Well, it is an ‘objective function’ which we generate from the phenomenon—the fitness of an individual organism is simply a count of surviving offspring and the fitness of a ‘type’ is the average fitness of the individuals of that type.
So, this ‘fitness’ can only increase. But there is no guarantee that the process generating the increase is efficient, nor that some ‘optimal’ level of ‘fitness’ will ever be reached. Nonetheless, the local usage designates natural selection as an ‘optimization’ process. We are aware that we are flirting with teleological language, here, but it is only a flirtation. We know what we are doing. We are not in danger of being seduced.
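Perplexed’s appeal to Fisher can be checked numerically (a sketch with made-up fitnesses; constant environment and no mutation, as in the stated assumptions): under replicator-style selection, mean fitness never decreases.

```python
# Two-type replicator dynamics in a fixed environment: each type's share
# grows in proportion to its fitness, so mean fitness is non-decreasing.
fitness = {"A": 1.0, "B": 1.5}  # hypothetical constant fitnesses
freq = {"A": 0.9, "B": 0.1}     # initial type frequencies

means = []
for _ in range(20):
    mean = sum(freq[t] * fitness[t] for t in freq)
    means.append(mean)
    freq = {t: freq[t] * fitness[t] / mean for t in freq}

assert all(b >= a for a, b in zip(means, means[1:]))  # monotone increase
print(round(means[0], 2), round(means[-1], 2))  # mean fitness rises toward 1.5
```

Note the ‘fitness’ numbers were written down by me, not measured; in the real case they would be extracted from the phenomenon, which is exactly shadow’s complaint.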
You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory… although the phenomenon isn’t efficient in giving the optimal outcome.
Tim, if you wish to disagree, it might be polite to state the reasons for your disagreement.
I meant my “Yep” to apply to shadow’s denunciation of the practice of extracting the objective function from observation of the phenomenon—particularly as it applies to the two optimization processes of greatest interest to LW: natural selection and human rationality.
In constructing the objective functions that we use to explain rational behavior, we use a concept of “revealed preference”. That is, we observe the behavior—the choices that a rational agent makes—in order to explain the behavior. In truth, from shadow’s viewpoint, we are not explaining behavior at all—we are merely explaining the consistency of behavior over time.
Similarly, when analyzing natural selection, we need to observe the deaths and reproductions of organisms in order to construct our ‘fitness’ function—the very thing that we claim that the process optimizes. We are rescued from the well-known charge of ‘tautology’ only by the fact that we are explaining/predicting the fitness of the current generation of organisms, based on the observation of the fitness of prior generations. Not really a tautology, but also not really an explanation of as much as might be naively thought.
So, in my opinion, shadow’s critique is quite correct when applied to the important optimization processes of natural selection and rational behavior/cognition. But the critique is not crippling.
But now, let us look at the kinds of ‘optimization processes’ that you were describing. Least action, 2nd law, the various MAXENT ideas of Lotka, Kay and Schneider, and Dewar together with the minimum entropy production theorem of Prigogine. As you know, we have been in disagreement (for almost a decade now) about whether these things even exist, and whether they qualify as optimization when they do exist (least action, 2nd law, Prigogine). We don’t need to revive that debate. But you may be correct if you are claiming that shadow’s ‘fitting the theory to the observations’ critique does not apply at all to your examples of ‘optimization processes’. So, I apologize if it appeared that I was tarring them with the same shadow-brush which I applied to NS and rationality.
Least action, 2nd law, the various MAXENT ideas of Lotka, Kay and Schneider, and Dewar together with the minimum entropy production theorem of Prigogine. As you know, we have been in disagreement (for almost a decade now) about whether these things even exist, and whether they qualify as optimization when they do exist (least action, 2nd law, Prigogine). We don’t need to revive that debate.
OK. From this—and some other things on this thread—it does sound as though we still have a disagreement in this area. This probably isn’t the spot to go over that.
However, maybe something can be said now. For example, did you agree with my statement that water flowing downhill was essentially an optimisation process? If not, maybe I should say something now.
did you agree with my statement that water flowing downhill was essentially an optimisation process? If not, maybe I should say something now.
I did not agree, but I don’t think you should say something now. I don’t think it is useful to call the natural progression to a state of minimum free energy ‘an optimization process’.
Admittedly, it does share some features with rational decision making and natural selection—notably the existence of an ‘objective function’ and a promise of monotone progress toward the ‘objective’ without the promise of an optimal final result within a finite time.
But it lacks a property that I will call ‘retargetability’. By adjusting the environment we can redefine fitness—causing NS to send a population in a completely different evolutionary direction. We are still ‘optimizing’ fitness, and doing so using the same mechanisms, but the meaning of fitness has changed.
Similarly, by training a rational agent to have different tastes, we can redefine utility—causing rational decision making to choose a completely different set of actions. We are still ‘optimizing’ utility, and doing so using the same mechanisms, but the meaning of utility has changed.
I find it more difficult to imagine “retargeting” the meaning of ‘downhill’ for flowing water. But if you postulate some artificial environment (iron balls rolling on a table with magnets placed underneath it) in which mechanics plus dissipation leads to some tunable result, then I might agree to call that process an optimization process.
You can do gradient descent (optimisation) on arbitrary 1D / 2D functions with it—and adding more dimensions is not that conceptually challenging.
I am not sure what optimisation problem can’t easily have cold water poured on it ;-)
Also, “retargetability” sounds as though it is your own specification.
I don’t see much about being “retargetable” here. So, it seems as though this is not a standard concern. If you wish to continue to claim that “retargetability” is to do with optimisation, I think you should provide a supporting reference.
FWIW, optimisation implies quite a bit more than just monotonic increase. You get a monotonic increase from 2LoT—which is a different idea, with less to do with the concept of optimisation. The idea of “maximising entropy” constrains expectations a lot more than the second law alone does.
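Both halves of this exchange, Tim’s gradient-descent claim and Perplexed’s ‘retargetability’ criterion, can be sketched together (illustrative code of mine; the terrain function stands in for the magnet-tuned table):

```python
# "Water on a surface" as an optimiser: follow the downhill gradient of
# a terrain function. Retargeting means swapping in a different terrain.
def descend(terrain, x, steps=1000, lr=0.01, eps=1e-6):
    """Numerical gradient descent on a 1D terrain."""
    for _ in range(steps):
        grad = (terrain(x + eps) - terrain(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

bowl_at_3 = lambda x: (x - 3) ** 2  # basin with its minimum at x = 3
bowl_at_7 = lambda x: (x - 7) ** 2  # "retargeted" basin, minimum at x = 7

print(round(descend(bowl_at_3, 0.0), 3))  # 3.0
print(round(descend(bowl_at_7, 0.0), 3))  # 7.0
```

Changing the terrain retargets the process without changing the mechanism, which is exactly Perplexed’s criterion.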
Tim, if you wish to disagree, it might be polite to state the reasons for your disagreement.
My jaw dropped—since I was unable to find a sympathetic reading of your comment. You seemed to be expressing approval of material which I disapproved of. However, I think I have now managed to find a plausible sympathetic reading—and it turns out that we don’t really have a disagreement.
In an optimisation problem, there is an objective function and, often, a set of constraints. You are trying to find the best solution from all possible solutions. The objective function itself reveals preferences (‘best’ solution—isn’t that subjective?), and this is sometimes inherent, sometimes explicit.
I use the word ‘optimisation’ in its mathematical sense. And I know the difference between definitions and axioms. Objective functions are definitions, not axioms. You can’t take them as facts! In an optimisation problem, you start with an objective function and a set of constraints, and then you work out an optimal solution. This is the real optimisation process. You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory… although the phenomenon isn’t efficient in giving the optimal outcome.
I’m pretty sure you’re still not using the word “optimization” in the sense of the phrase “optimization process” as used on Less Wrong. An optimization process doesn’t have to be a process that maximizes an explicitly-defined utility function; the function can be implicit in its structure or behaviour.
It’s not really the same as the sense of “optimization” described in the aforelinked Wikipedia article, which isn’t the subject of this discussion post. The terminology of “optimization processes” is used to analyze dynamics acting within a system.
Right. Well, I already gave some references about that further up the thread—these ones:
Dewar, R. C., 2003, “Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states,” J. Phys. A: Math. Gen. 36: 631-641.
Dewar, R. C., 2005, “Maximum entropy production and the fluctuation theorem,” J. Phys. A: Math. Gen. 38: L371-L381.
However, there are a large number of other such articles. E.g. see:
Kleidon, Axel, Yadvinder Malhi and Peter M. Cox (compilers and editors), 2010, “Maximum entropy production in ecological and environmental systems: applications and implications,” Royal Society Theme Issue, 365 (1545), May 12, 2010 (17 papers).
Paltridge, Garth W., 1975, “Global Dynamics and Climate—A System of Minimum Entropy Exchange,” Q. J. R. Meteorol. Soc. 101: 475-484.
Paltridge, Garth W., 1978, “The Steady State Format of Global Climate,” Q. J. R. Meteorol. Soc. 104: 927-945.
Paltridge, Garth W., 1979, “Climate and thermodynamic systems of maximum dissipation.”
For more introductory material, perhaps see:
Whitfield, John, “Survival of the Likeliest”
Whitfield, John, “Complex systems: Order out of chaos”
…and for more references, perhaps try the ones on: http://originoflife.net/bright_light/
Note that, conventionally, fitnesses can decline—much as a hill climber can be climbing a hill on a mountain that is rapidly sinking into the sea.
Yes, I did notice that. That is why I wrote spelling out the assumptions:
R.A. Fisher proved that (under the admittedly unrealistic assumption of an unchanging environment) the average ‘fitness’ of the organisms in a population subject to natural selection can only increase, so long as the mutation rate is moderate.
Ah! Fisher’s fictional fitnesses! My bad; I missed that context—apologies.
What the..?
That is definitely not what is happening—as I would have expected you to be aware by now.