A framework that can predict anything is not really a predictive framework; it’s just a modeling convention.
In the specific case of PCT, the model treats everything as closed-loop homeostasis occurring within the organism being modeled. However, there are plenty of situations where a significant part of the control loop occurs outside the organism, or where the organism's behavior is only homeostatic if certain EEA (environment of evolutionary adaptedness) assumptions apply, e.g. the body's tendency to hoard certain nutrients and flush others based on historic availability rather than actual availability.
While this doesn’t harm PCT’s use as a conceptual model of organism behavior, it limits its use as a predictive framework with regard to what 1) we will find happening in the hardware, and 2) we will find happening in actual behavior.
The extension of this problem to TOTE loops is straightforward, since a TOTE loop is just a description of one possible implementation strategy for a PCT control loop and linkage, and one that similarly doesn't always map to the hardware or the location where the tests and operations are taking place (i.e., in-organism or outside-organism).
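To make that concrete, here is a minimal sketch (in Python, with all names hypothetical rather than any standard API) of a TOTE loop implementing a PCT-style control loop: Test compares a perception against a reference, Operate acts to reduce the error, and the loop Exits when the test passes.

```python
# Minimal TOTE loop as one possible implementation of a PCT-style control
# loop. All names here are hypothetical illustrations, not a standard API.

def tote_loop(perceive, operate, reference, tolerance=0.1, max_steps=100):
    """Test-Operate-Test-Exit: act until perception matches the reference."""
    for _ in range(max_steps):
        error = reference - perceive()   # Test: compare perception to reference
        if abs(error) <= tolerance:
            return True                  # Exit: test passed, error is negligible
        operate(error)                   # Operate: act to reduce the error
    return False                         # never converged within the step budget
```

Note that nothing in this sketch says where `perceive` and `operate` physically live (in the organism, in the environment, or split between them), which is exactly the mapping ambiguity described above.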
In the specific case of PCT, the model treats everything as closed-loop homeostasis occurring within the organism being modeled.
That is not the case. Indeed, most of the experimental work on PCT involves creatures controlling perceptions of things outside themselves, e.g. cursor-tracking experiments or ball catching, and this is where the important applications are. Homeostatic processes within the organism, such as control of deep body temperature, are well understood to be control processes, and in the case of body temperature, I believe it is known where the temperature sensor is. It is for interactions with the environment that many still think in terms of stimulus-response, plan-then-execute, or sensing and compensating for disturbances, none of which are control processes, and which therefore cannot explain how organisms achieve consistent results in the face of varying environments.
When I say “closed loop within the organism” I mean “having within the organism all the error detection and machinery for reducing the error”, not that the subject of perception is also within the organism.
Note, too, that it's possible for people to display apparently-homeostatic processes where no such process is actually occurring.
For example, outside observation might create the impression that, say, a person is afraid of success and is downregulating their ambitions or skills in order to maintain a lower level of success.
However, upon closer observation, it might instead be the case that the person is responding, in a simple stimulus-response way, to something that is perceived as a threat related to success.
While you could reframe that in terms of homeostasis away from anxiety or threat perception, this framing doesn’t give you anything new in terms of solving the problem—especially if the required solution is to remove the conditioned threat perception. If anything, trying to view that problem as homeostatic in nature is a red herring, despite the fact that homeostasis is the result of the process.
This is a practical example of how using PCT as an explanatory theory—rather than simply a modeling paradigm—can interfere with actually solving problems.
In my early learning of PCT, I was overly excited by its apparent explanatory power, but I later ended up dialing it back significantly as I realized it was mainly a useful tool for communicating certain ideas; the high-level psychological phenomena that actually involve homeostasis loops in the brain appear to be both quite few in number and relatively short-term in nature.
Indeed, to some extent, looking at things through the PCT lens was a step backwards, as it encouraged me to view things in terms of such higher-order homeostasis loops when those loops were merely emergent properties, rather than reified primitives. (And this especially applies when we’re talking about unwanted behavior.)
To put it another way, some people may indeed regulate their perception of “success” in some abstract high-level fashion. But most of the things that one might try to model in such a way, for most people, most of the time, actually involve much tinier, half-open controls like “reduce my anxiety in response to thinking about this problem, in whatever way possible as soon as possible”, and not some hypothetical long-term perception of success or status or whatnot.
If I model something as a TOTE, that's modeling. The model, however, implies predictions. If I use the TOTE model, I can predict that a thermostat that is broken in a way that prevents it from detecting heat will likely overheat the room.
If I set my room to heat to 22°C and find that my room is heated to 26°C, the TOTE model of the thermostat helps me reason that there's likely a problem with the temperature sensor.
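As a toy illustration of that inference (a hypothetical simulation with made-up numbers, not real thermostat firmware), a sensor that reads low makes an otherwise-correct TOTE loop overshoot the setpoint:

```python
# Toy thermostat simulation (hypothetical numbers, not real firmware) showing
# how a miscalibrated sensor makes a correct TOTE loop overheat the room.

def run_thermostat(setpoint, sensor, steps=50):
    room_temp = 18.0                      # actual room temperature in degrees C
    for _ in range(steps):
        if sensor(room_temp) < setpoint:  # Test: compare sensor reading to setpoint
            room_temp += 0.5              # Operate: heater on, room warms a bit
        else:
            break                         # Exit: sensor says setpoint is reached
    return room_temp

print(run_thermostat(22.0, sensor=lambda t: t))        # 22.0: healthy sensor
print(run_thermostat(22.0, sensor=lambda t: t - 4.0))  # 26.0: sensor reads 4C low
```

The loop logic is identical in both runs; only the sensor differs, which is why the 26°C reading points at the sensor rather than at the heater.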
PCT does that too. Except that sometimes, body and brain processes are open-ended, with an important part of the loop existing in the outside world.
The problem with a model that can explain anything is that you can't notice when you're being confused by a fake explanation.
A useful explanatory model needs to be able to rule things out, as well as “in”.
I think we are talking about different meanings of “modeling” here. There are plenty of uses for which PCT and TOTEs are apt. But if you’re trying to discover something about the physical nature of things involved, being able to explain anything equally well is not actually a benefit. That is, it doesn’t provide us with any information we don’t already know, absent the model.
So, e.g., in your thermostat example, the TOTE model doesn't provide you with any predictions you didn't have without it: a person who lacks understanding of how thermostats work internally can trivially make the prediction that something is wrong with it, since it's supposed to produce the requested temperature.
Conversely, if you know the thermostat contains a sensor, then the idea that “it might be broken if the room temperature is wrong” is trivially derivable from that mere fact, without a detailed control systems model.
IOW, the TOTE model adds nothing to your existing predictions; it doesn’t constitute evidence of anything you didn’t already know.
This doesn't take away from the many valuable uses of paradigms like PCT or TOTE: it's just that they're one of those things that seem super-valuable because they offer a more efficient mental data compressor than whatever you had before. But being a good compressor for whatever data you have is not the same as having any new data!
So paradigmatic models are more about being able to more efficiently think or reason about something, or focus your attention in useful ways, without necessarily changing much about how much one actually knows, from an evidentiary perspective.
But if you're trying to discover something about the physical nature of things involved, being able to explain anything equally well is not actually a benefit.
I do grant that's the case. In the context of NLP, modeling doesn't have the intention of discovering things about the physical nature of the things involved, and if you go to NLP with that intention it's easy to get disappointed.