Least action, the 2nd law, the various MAXENT ideas of Lotka, Kay and Schneider, and Dewar, together with the minimum entropy production theorem of Prigogine. As you know, we have been in disagreement (for almost a decade now) about whether these things even exist, and whether they qualify as optimization when they do exist (least action, 2nd law, Prigogine). We don’t need to revive that debate.
OK. From this, and some other things on this thread, it does sound as though we still have a disagreement in this area. This probably isn’t the spot to go over that.
However, maybe something can be said now. For example, did you agree with my statement that water flowing downhill was essentially an optimisation process? If not, maybe I should say something now.
I did not agree, but I don’t think you should say something now. I don’t think it is useful to call the natural progression to a state of minimum free energy ‘an optimization process’.
Admittedly, it does share some features with rational decision making and natural selection—notably the existence of an ‘objective function’ and a promise of monotone progress toward the ‘objective’ without the promise of an optimal final result within a finite time.
But it lacks a property that I will call ‘retargetability’. By adjusting the environment we can redefine fitness—causing natural selection to send a population in a completely different evolutionary direction. We are still ‘optimizing’ fitness, and doing so using the same mechanisms, but the meaning of fitness has changed.
Similarly, by training a rational agent to have different tastes, we can redefine utility—causing rational decision making to choose a completely different set of actions. We are still ‘optimizing’ utility, and doing so using the same mechanisms, but the meaning of utility has changed.
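A minimal sketch of this ‘retargetability’ idea (the hill-climbing loop, fitness functions and trait values below are invented for illustration, not taken from this exchange): the same optimisation mechanism is run twice, and swapping only the fitness function sends it in a completely different direction.

```python
import random

def hill_climb(fitness, genome, steps=500, sigma=0.1):
    """One generic optimisation mechanism: mutate, keep the mutant if it is fitter."""
    for _ in range(steps):
        mutant = [g + random.gauss(0, sigma) for g in genome]
        if fitness(mutant) > fitness(genome):
            genome = mutant
    return genome

# Two different 'environments' define two different fitness functions.
prefer_large = lambda g: sum(g)                       # rewards large trait values
prefer_near_zero = lambda g: -sum(x * x for x in g)   # rewards traits close to zero

start = [0.5, 0.5, 0.5]
print(hill_climb(prefer_large, start))      # traits drift upward
print(hill_climb(prefer_near_zero, start))  # same mechanism, traits shrink toward zero
```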
I find it more difficult to imagine “retargeting” the meaning of ‘downhill’ for flowing water. And, if you postulate some artificial environment (iron balls rolling on a table with magnets placed underneath the table) in which mechanics plus dissipation leads to some tunable result, … well, then I might agree to call that process an optimization process.
You can do gradient descent (optimisation) on arbitrary 1D or 2D functions with flowing water, and adding more dimensions is not that conceptually challenging (see the sketch below).
I am not sure what optimisation problem can’t easily have cold water poured on it ;-)
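For concreteness, here is a minimal sketch of the ‘flowing downhill’ analogy: gradient descent with a little momentum and dissipation on an arbitrary 2D surface. The surface and all parameter values are made up for illustration; any smooth function could be dropped in.

```python
import numpy as np

def surface(p):
    """An arbitrary 2D 'landscape'; any smooth function would do."""
    x, y = p
    return (x - 1.0) ** 2 + 2.0 * (y + 0.5) ** 2 + 0.3 * np.sin(3.0 * x)

def grad(f, p, eps=1e-6):
    """Numerical gradient, so the same code works for any landscape."""
    g = np.zeros_like(p)
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        g[i] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return g

def roll_downhill(f, p0, step=0.05, friction=0.8, iters=200):
    """Heavy-ball gradient descent: velocity builds up downhill and
    'friction' dissipates it, loosely mimicking water or a rolling ball."""
    p = np.array(p0, dtype=float)
    v = np.zeros_like(p)
    for _ in range(iters):
        v = friction * v - step * grad(f, p)
        p = p + v
    return p

print(roll_downhill(surface, [3.0, 2.0]))  # ends up near a local minimum
```

Swapping in a different surface() ‘retargets’ the descent without touching the mechanism.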
Also, “retargetability” sounds as though it is your own specification.
I don’t see much about being “retargetable” here. So, it seems as though this is not a standard concern. If you wish to continue to claim that “retargetability” is to do with optimisation, I think you should provide a supporting reference.
FWIW, optimisation implies quite a bit more than just monotonic increase. You get a monotonic increase from 2LoT—which is a different idea, with less to do with the concept of optimisation. The idea of “maximising entropy” constrains expectations a lot more than the second law alone does.
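For reference, the usual way to make that contrast precise (standard textbook material, not anything specific to this thread) is that the second law alone only gives a direction of change, while maximising entropy subject to constraints singles out a particular distribution:

```latex
% Second law alone: only a direction of change for an isolated system.
\[
  \frac{dS}{dt} \;\ge\; 0
\]
% Maximum entropy: a definite prediction once the constraints are fixed.
\[
  \max_{p}\; S[p] = -\sum_i p_i \ln p_i
  \quad\text{subject to}\quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle ,
\]
% with the familiar Boltzmann/Gibbs solution
\[
  p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i}.
\]
```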