You’re right that it probably makes more sense to think of it as a perspective rather than a paradigm. Yet, I imagine that some useful fundamental assumptions may be made in the near future that could change that. If nothing else, a shared language to facilitate the transfer of relevant information across disciplines would be nice. And category theory seems like an interesting candidate in that regard.
I disagree with what you said about optimization, though. Phylogenetic and ontogenetic adaptations both result from a process that can be thought of as optimization. Sudden environmental changes don’t propagate back to the past and render this process void; they serve as novel constraints on adaptation.
If I optimize a model on dataset X, its failure to demonstrate optimal behavior on dataset Y doesn’t mean the model was never optimized. And it doesn’t imply that the model is no longer being optimized.
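To make that concrete, here’s a minimal sketch (hypothetical datasets and numbers, just for illustration) of a model fit by least squares on one dataset and then evaluated on a shifted one. The fit remains a genuine optimum for the training data; the shifted data merely imposes constraints it was never optimized against.

```python
# Sketch: a model optimized on dataset X can fail on dataset Y
# without that failure undoing the optimization that produced it.
import numpy as np

rng = np.random.default_rng(0)

# Dataset X: roughly y = 2x + noise, inputs drawn from [0, 1]
x_train = rng.uniform(0, 1, 200)
y_train = 2 * x_train + rng.normal(0, 0.1, 200)

# Least-squares line fit -- an explicit optimization step.
slope, intercept = np.polyfit(x_train, y_train, 1)

# Dataset Y: the "environment" changes -- different input range,
# different underlying relationship.
x_shift = rng.uniform(5, 6, 200)
y_shift = np.sin(x_shift)

mse_train = np.mean((slope * x_train + intercept - y_train) ** 2)
mse_shift = np.mean((slope * x_shift + intercept - y_shift) ** 2)

# The model is still the least-squares optimum for X;
# it is merely maladapted to Y.
print(mse_train < mse_shift)
```

The point of the sketch: nothing about the large error on the shifted data retroactively makes the `polyfit` call any less an act of optimization.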
To use the example mentioned by Yudkowsky in his post on the topic: ice cream serves as a superstimulus because it resulted from a process that can be considered optimization. On an evolutionary timeline, we are but a blip. As in the case of dataset Y, the observation that preferences resulting from an evolutionary process can prove maladaptive when constraints change doesn’t mean there’s no optimization at work.
My larger argument (which I now see I failed to communicate properly) was that the appearance of optimization ultimately derives from chance and necessity. A photon doesn’t literally perform optimization. But it’s useful to us to pretend as if it does. Likewise, an individual organism doesn’t literally perform optimization either. But it’s useful to pretend as if it does if we want to predict its behavior. Let’s say that I offered you $1,000 for waving your hand at me. By assuming that you are an individual who wants money, I could predict that you would do so. This is, of course, a trivial prediction. But it rests on the assumption that you are an agent with goals. This comes naturally to us. And that’s precisely my point. That’s Dennett’s intentional stance.
I’m not sure I’m getting my point across exactly, but I want to emphasize that I see no apparent conflict between the content in my post above and the idea of treating individual organisms as adaptation executers (except if the implication is that phylogenetic learning is qualitatively different from ontogenetic learning).
I think that we agree that it can be useful to model many processes as optimization.
My point is that it’s dangerous to lose the distinction between “currently useful abstraction” and “it actually is optimization”—much like locally-optimal vs. globally-optimal, it’s a subtle distinction, but losing it can land you in deep confusion and on an unsound basis for intervention. Systems people seem particularly prone to this kind of error, maybe because of the tendency to focus on dynamics rather than details.
That’s a perfectly reasonable concern. Details keep you tethered to reality. If a model disagrees with experiment, it’s wrong.
Personally, I see much promise in this perspective. I believe we’ll see many interesting medical interventions in the coming decades inspired by this view in general.