PCT does that too. Except that sometimes, body and brain processes are open-ended, with an important part of the loop existing in the outside world.
The problem with a model that can explain anything is that you can’t notice when you’re being confused by a fake explanation.
A useful explanatory model needs to be able to rule things out, as well as “in”.
I think we are talking about different meanings of “modeling” here. There are plenty of uses for which PCT and TOTEs are apt. But if you’re trying to discover something about the physical nature of things involved, being able to explain anything equally well is not actually a benefit. That is, it doesn’t provide us with any information we don’t already know, absent the model.
So e.g. in your thermostat example, the TOTE model doesn’t provide you with any predictions you didn’t have without it: a person who lacks understanding of how thermostats work internally can trivially make the prediction that something is wrong with it, since it’s supposed to produce the requested temperature.
Conversely, if you know the thermostat contains a sensor, then the idea that “it might be broken if the room temperature is wrong” is trivially derivable from that mere fact, without a detailed control systems model.
IOW, the TOTE model adds nothing to your existing predictions; it doesn’t constitute evidence of anything you didn’t already know.
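To make that concrete, here’s a minimal sketch of what a TOTE (Test-Operate-Test-Exit) loop for a thermostat might look like. Everything here is illustrative: read_temp, heater_on, and heater_off are hypothetical callbacks I’ve made up for the example, not a real thermostat API.

```python
# Minimal sketch of a TOTE (Test-Operate-Test-Exit) loop for a thermostat.
# All names are hypothetical, purely for illustration.

def tote_thermostat(read_temp, heater_on, heater_off,
                    setpoint=21.0, tolerance=0.5, max_cycles=1000):
    """Test room temperature against the setpoint; Operate the heater
    until the Test passes; then Exit."""
    for _ in range(max_cycles):
        temp = read_temp()                     # Test
        if abs(temp - setpoint) <= tolerance:  # Exit: test passed
            heater_off()
            return temp
        if temp < setpoint:                    # Operate
            heater_on()
        else:
            heater_off()
    raise RuntimeError("setpoint never reached; sensor or heater broken?")

# Toy usage with a simulated room that warms while the heater runs.
state = {"temp": 18.0, "heating": False}

def read_temp():
    state["temp"] += 0.3 if state["heating"] else -0.1
    return state["temp"]

print(tote_thermostat(read_temp,
                      heater_on=lambda: state.update(heating=True),
                      heater_off=lambda: state.update(heating=False)))
```

Note that the only checkable prediction the loop encodes is the one already implied by “there’s a sensor and a setpoint”: if read_temp keeps disagreeing with the setpoint, something is broken. The loop structure itself adds no new observable consequences.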
This doesn’t take away from the many valuable uses of paradigms like PCT or TOTE: it’s just that they’re the kind of thing that seems super-valuable because it’s a more efficient mental data compressor than whatever you had before. But being a good compressor for whatever data you have is not the same as having any new data!
So paradigmatic models are more about letting you think or reason about something efficiently, or focus your attention in useful ways, without necessarily changing how much you actually know, from an evidentiary perspective.
But if you’re trying to discover something about the physical nature of things involved, being able to explain anything equally well is not actually a benefit.
I do grant that’s the case. In the context of NLP, modeling isn’t intended to discover things about the physical nature of the things involved, and if you go to NLP with that intention, it’s easy to get disappointed.