You, on the other hand, observe a phenomenon, and then explain it by giving it an objective function as a theory… even though the phenomenon isn’t efficient at producing the optimal outcome.
Tim, if you wish to disagree, it might be polite to state the reasons for your disagreement.
I meant my “Yep” to apply to shadow’s denunciation of the practice of extracting the objective function from observation of the phenomenon—particularly as it applies to the two optimization processes of greatest interest to LW: natural selection and human rationality.
In constructing the objective functions that we use to explain rational behavior, we use a concept of “revealed preference”. That is, we observe the behavior—the choices that a rational agent makes—in order to explain the behavior. In truth, from shadow’s viewpoint, we are not explaining behavior at all—we are merely explaining the consistency of behavior over time.
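To make the circularity concrete, here is a toy sketch (purely illustrative: the options and the scoring rule are my own invention, not anything from shadow or from the revealed-preference literature). The ‘utility’ we use to explain the choices is computed from the choices themselves.

```python
# Toy "revealed preference": the utility function that 'explains' the
# observed choices is constructed from those very observations.
from collections import defaultdict

def revealed_utility(choices):
    """choices: list of (chosen_option, rejected_option) pairs.
    Each observed win adds 1 to the chosen option's score; each
    observed loss subtracts 1. The result is the 'explanation'."""
    score = defaultdict(int)
    for chosen, rejected in choices:
        score[chosen] += 1
        score[rejected] -= 1
    return dict(score)

observed = [("tea", "coffee"), ("tea", "water"), ("coffee", "water")]
utility = revealed_utility(observed)
# The inferred ordering (tea > coffee > water) is read off the data
# itself; all it can 'predict' is consistency of future choices.
```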
Similarly, when analyzing natural selection, we need to observe the deaths and reproductions of organisms in order to construct our ‘fitness’ function—the very thing that we claim that the process optimizes. We are rescued from the well-known charge of ‘tautology’ only by the fact that we are explaining/predicting the fitness of the current generation of organisms based on the observed fitness of prior generations. Not really a tautology, but also not an explanation of as much as might naively be thought.
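The same point can be made with a toy calculation (the numbers and type labels are made up for illustration): ‘fitness’ is estimated from observed reproduction in one generation, and then applied to the next—which is exactly what blunts the tautology charge.

```python
# Hedged sketch: fitness is not assumed in advance; it is read off
# observed reproduction, then used to predict the next generation.

def predict_frequencies(counts, offspring):
    """counts: number of parents of each type observed.
    offspring: number of offspring each type produced.
    Estimated fitness = offspring per parent; the predicted
    next-generation frequencies are fitness-weighted."""
    fitness = {t: offspring[t] / counts[t] for t in counts}
    weighted = {t: counts[t] * fitness[t] for t in counts}
    total = sum(weighted.values())
    return {t: weighted[t] / total for t in weighted}

# Observe generation N, predict the composition of generation N+1.
freqs = predict_frequencies({"A": 50, "B": 50}, {"A": 60, "B": 40})
# The prediction is only non-circular because the estimate from prior
# generations is being applied forward, to a generation not yet observed.
```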
So, in my opinion, shadow’s critique is quite correct when applied to the important optimization processes of natural selection and rational behavior/cognition. But the critique is not crippling.
But now, let us look at the kinds of ‘optimization processes’ that you were describing. Least action, 2nd law, the various MAXENT ideas of Lotka, Kay and Schneider, and Dewar together with the minimum entropy production theorem of Prigogine. As you know, we have been in disagreement (for almost a decade now) about whether these things even exist, and whether they qualify as optimization when they do exist (least action, 2nd law, Prigogine). We don’t need to revive that debate. But you may be correct if you are claiming that shadow’s ‘fitting the theory to the observations’ critique does not apply at all to your examples of ‘optimization processes’. So, I apologize if it appeared that I was tarring them with the same shadow-brush which I applied to NS and rationality.
Least action, 2nd law, the various MAXENT ideas of Lotka, Kay and Schneider, and Dewar together with the minimum entropy production theorem of Prigogine. As you know, we have been in disagreement (for almost a decade now) about whether these things even exist, and whether they qualify as optimization when they do exist (least action, 2nd law, Prigogine). We don’t need to revive that debate.
OK. From this—and some other things on this thread—it does sound as though we still have a disagreement in this area. This probably isn’t the spot to go over that.
However, maybe something can be said now. For example, did you agree with my statement that water flowing downhill was essentially an optimisation process? If not, maybe I should say something now.
did you agree with my statement that water flowing downhill was essentially an optimisation process? If not, maybe I should say something now.
I did not agree, but I don’t think you should say something now. I don’t think it is useful to call the natural progression to a state of minimum free energy ‘an optimization process’.
Admittedly, it does share some features with rational decision making and natural selection—notably the existence of an ‘objective function’ and a promise of monotone progress toward the ‘objective’ without the promise of an optimal final result within a finite time.
But it lacks a property that I will call ‘retargetability’. By adjusting the environment we can redefine fitness—causing NS to send a population in a completely different evolutionary direction. We are still ‘optimizing’ fitness, and doing so using the same mechanisms, but the meaning of fitness has changed.
Similarly, by training a rational agent to have different tastes, we can redefine utility—causing rational decision making to choose a completely different set of actions. We are still ‘optimizing’ utility, and doing so using the same mechanisms, but the meaning of utility has changed.
I find it more difficult to imagine “retargeting” the meaning of ‘downhill’ for flowing water. And, if you postulate some artificial environment (iron balls rolling on a table with magnets placed underneath the table) in which mechanics plus dissipation leads to some tunable result, well, then I might agree to call that process an optimization process.
You can do gradient descent (optimisation) on arbitrary 1D / 2D functions with it—and adding more dimensions is not that conceptually challenging.
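As a sketch of what I mean (the functions below are illustrative examples of my own, not anything from the discussion): the same descent mechanism ‘optimises’ whatever landscape it is handed, and swapping the landscape is all that “retargeting” amounts to, just as moving the magnets redirects the rolling balls.

```python
# Minimal gradient descent: the 'water flowing downhill' dynamic.
def descend(grad, x, step=0.1, iters=200):
    """Repeatedly step against the gradient: follow the slope downhill."""
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Landscape 1: minimize (x - 3)^2, whose gradient is 2(x - 3).
min1 = descend(lambda x: 2 * (x - 3), x=0.0)

# "Retarget" by swapping the landscape: minimize (x + 5)^2.
# Same mechanism, different objective, different destination.
min2 = descend(lambda x: 2 * (x + 5), x=0.0)
```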
I am not sure what optimisation problem can’t easily have cold water poured on it ;-)
Also, “retargetability” sounds as though it is your own specification.
I don’t see much about being “retargetable” here. So, it seems as though this is not a standard concern. If you wish to continue to claim that “retargetability” is to do with optimisation, I think you should provide a supporting reference.
FWIW, optimisation implies quite a bit more than just monotonic increase. You get a monotonic increase from 2LoT—which is a different idea, with less to do with the concept of optimisation. The idea of “maximising entropy” constrains expectations a lot more than the second law alone does.
Tim, if you wish to disagree, it might be polite to state the reasons for your disagreement.
My jaw dropped—since I was unable to find a sympathetic reading of your comment. You seemed to be expressing approval of material which I disapproved of.

However, I think I have now managed to find a plausible sympathetic reading—and it turns out that we don’t really have a disagreement.
What the..?
That is definitely not what is happening—as I would have expected you to be aware by now.