My first problem with holism, or higher order cybernetics generally, is that while it’s an interesting and sometimes illuminating perspective, it doesn’t give me useful tools. It’s not even really a paradigm, in that it provides no standard methods (or objects) of enquiry, doesn’t help much with interpretation, etc.
The “AI effect” (as soon as it works, no one calls it AI any more) has a similar application in cybernetics: the ideas are almost omnipresent in modern science and engineering, but we call them “control engineering” or “computer science” or “systems theory” etc. Don’t get me wrong: this was basically my whole degree (“interdisciplinary studies”), I’ve spent the last few years at ANU’s School of Cybernetics (e.g.), and I love it, but it’s more of a worldview than a discipline. I frame problems with this lens, and then solve them with causal statistics or systems theory or HCI or …
My second problem is that it leaves people prone to thinking that they have a grand theory of everything, but without the expertise to notice the ways in which they’re wrong—or humility to seek them out. Worse, these details are often actually really important to get right. For example:
I want to reiterate: optimization is a lens through which you can view the behavior of nonlinear systems. There’s no need to take it literally. Well, it’s a matter of personal choice. If thinking about the world in these terms makes you feel better, there’s no harm in it I suppose.
I strongly disagree: viewing most systems as optimizers will mislead you, both epistemically and affectively. We have a whole tag for “Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers”! The difference really matters!
You’re right that it probably makes more sense to think of it as a perspective rather than a paradigm. Yet, I imagine that some useful fundamental assumptions may be made in the near future that could change that. If nothing else, a shared language to facilitate the transfer of relevant information across disciplines would be nice. And category theory seems like an interesting candidate in that regard.
I disagree with what you said about optimization, though. Phylogenetic and ontogenetic adaptations both result from a process that can be thought of as optimization. Sudden environmental changes don’t propagate back to the past and render this process void; they serve as novel constraints on adaptation.
If I optimize a model on dataset X, its failure to demonstrate optimal behavior on dataset Y doesn’t mean the model was never optimized. And it doesn’t imply that the model is no longer being optimized.
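To make the dataset X / dataset Y point concrete, here’s a minimal sketch (Python with NumPy, entirely synthetic data of my own invention): a model fit by gradient descent on one distribution and then evaluated on a shifted one. The loss on Y is far worse, but the model is exactly as optimized as it ever was.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dataset X: y = 2x + noise, with x drawn from [0, 1)
x_train = rng.uniform(0, 1, size=200)
y_train = 2 * x_train + rng.normal(0, 0.1, size=200)

# Fit y ~ w * x by plain gradient descent on squared error (the "optimization")
w = 0.0
for _ in range(2000):
    grad = np.mean(2 * (w * x_train - y_train) * x_train)
    w -= 0.1 * grad

# Dataset Y: the input range and the relationship have both changed
x_test = rng.uniform(2, 3, size=200)
y_test = -1 * x_test + rng.normal(0, 0.1, size=200)

mse_x = np.mean((w * x_train - y_train) ** 2)
mse_y = np.mean((w * x_test - y_test) ** 2)
print(f"w = {w:.2f}, MSE on X: {mse_x:.3f}, MSE on Y: {mse_y:.3f}")
# The model performs badly on Y, but it was (and remains) the product of
# optimization against X; failure on Y doesn't retroactively undo that.
```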
To use the example mentioned by Yudkowsky in his post on the topic: ice cream serves as a superstimulus because it resulted from a process that can be considered optimization. On an evolutionary timeline, we are but a blip. As in the case of dataset Y, the observation that preferences resulting from an evolutionary process can prove maladaptive when constraints change doesn’t mean there’s no optimization at work.
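As a toy illustration of that same point (my own sketch, not anything from Yudkowsky’s post): a simple evolutionary hill-climb toward one environmental optimum, after which the environment suddenly shifts. Fitness craters at the shift, yet the same optimizing process simply carries on against the new constraint.

```python
import random

random.seed(0)

def fitness(trait, env_optimum):
    # Higher is better; peaked at the current environmental optimum
    return -abs(trait - env_optimum)

trait, env = 0.0, 10.0   # start far from the optimum
for step in range(200):
    if step == 100:
        env = -5.0       # sudden environmental change: a novel constraint
    mutant = trait + random.gauss(0, 1.0)
    # Selection: keep the mutation only if it does better in the *current* env
    if fitness(mutant, env) > fitness(trait, env):
        trait = mutant
    if step in (99, 100, 199):
        print(f"step {step:3d}: trait {trait:6.2f}, fitness {fitness(trait, env):6.2f}")
# Just before the shift the trait sits near the old optimum; immediately after,
# the same trait scores poorly, not because optimization never happened but
# because the constraint changed, and the process keeps climbing toward the new one.
```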
My larger argument (that I now see I failed to communicate properly) was that the appearance of optimization ultimately derives from chance and necessity. A photon doesn’t literally perform optimization. But it’s useful to us to pretend as if it does. Likewise, an individual organism doesn’t literally perform optimization either. But it’s useful to pretend as if it does if we want to predict its behavior. Let’s say that I offered you $1,000 for waving your hand at me. By assuming that you are an individual that would want money, I could make the prediction that you would do so. This is, of course, a trivial prediction. But it rests on the assumption that you are an agent with goals. This comes naturally to us. And that’s precisely my point. That’s Dennett’s intentional stance.
I’m not sure I’m getting my point across exactly, but I want to emphasize that I see no apparent conflict between the content in my post above and the idea of treating individual organisms as adaptation executers (except if the implication is that phylogenetic learning is qualitatively different from ontogenetic learning).
I think that we agree that it can be useful to model many processes as optimization.
My point is that it’s dangerous to lose the distinction between “currently useful abstraction” and “it actually is optimization”—much like locally-optimal vs. globally-optimal, it’s a subtle distinction, but blurring it can land you in deep confusion and on an unsound basis for intervention. Systems people seem particularly prone to this kind of error, maybe because of the tendency to focus on dynamics rather than details.
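For the local-vs-global analogy, a minimal sketch (a toy function of my own choosing, nothing from the discussion above): gradient descent from two different starting points on the same one-dimensional function settles into different minima, only one of which is global. Mistaking the local answer for the global one is exactly the kind of quiet error that makes a poor basis for intervention.

```python
# A toy non-convex function: two minima, but only the one near x = -1.3 is global.
def f(x):
    return x**4 - 3*x**2 + x

def grad(x):
    return 4*x**3 - 6*x + 1

def descend(x, lr=0.01, steps=500):
    # Plain gradient descent; which minimum it finds depends on where it starts
    for _ in range(steps):
        x -= lr * grad(x)
    return x

for x0 in (2.0, -2.0):
    x = descend(x0)
    print(f"start {x0:+.1f} -> x = {x:+.2f}, f(x) = {f(x):+.2f}")
# Both runs are genuinely "optimized", but they report different answers
# (roughly x = +1.13 vs x = -1.30); only the second is the global minimum.
```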
That’s a perfectly reasonable concern. Details keep you tethered to reality. If a model disagrees with experiment, it’s wrong.
Personally, I see much promise in this perspective. I believe we’ll see many interesting medical interventions in the coming decades inspired by this view in general.