The fact that you’re still ignoring any of the substantive and responsive portions of my comments bodes ill for this being a useful exchange.
It is quite possible I’ve misunderstood your queries and/or answered them inadequately. However, I’d like to think that the appropriate response in that case would be to clarify what you want, rather than simply taking it to mean no one can give you what you want.
So the “human as controller” model doesn’t simplify the problem, it just says “here, go solve the problem, somehow, and when you do, without the help of this model, you’ll see that one of the six trillion neat things you can do is specify them in controls format”.
I’ve mentioned a number of things that PCT does beyond that. For example, it shows that the first things to look for in modeling are continuous analog variables integrated over a time period, with shorter time periods generally being represented lower in the control hierarchy than longer time periods. AFAICT, this is far from an obvious or trivial modeling distinction.
The fact that you chose not to comment on that, but instead dug in on justifying your initial position, suggests to me that you aren’t actually interested in the merits (or lack thereof) of PCT as a modeling tool, so much as in defending your position.
The fact that you’re still ignoring any of the substantive and responsive portions of my comments bodes ill for this being a useful exchange.
Yeah, I like that strategy. “In this extremely long, involved exchange, any part of my post that you didn’t directly respond to was an ultra-critical omission, and completely demonstrates your failure to act in good faith or adequately respond to my points.”
Whatever. I didn’t respond to it because I haven’t gotten around to responding to your points in the other thread, or because it didn’t address my request. In this case, it’s the latter.
For example, it shows that the first things to look for in modeling are continuous analog variables integrated over a time period, with shorter time periods generally being represented lower in the control hierarchy than longer time periods. AFAICT, this is far from an obvious or trivial modeling distinction.
Okay, and what epistemic profit does this approach gain for you, especially given that deliberate actions in pursuit of a goal are highly discontinuous? Oh, right, add another epicycle. Hey, the Hawkins HTM model, that’ll work!
ETA: Do not interpret this post to mean I’m in full anti-PCT mode. I am still exploring the software on the site pjeby linked and working through the downloadable PDFs. I’m making every effort to give PCT a fair shake.
especially given that deliberate actions in pursuit of a goal are highly discontinuous?
I’m not certain I understand your terms. If I interpret your words “classically”, then of course I “know what you mean”. However, if I’m viewing them through the PCT lens, those words make no sense at all, or are blatantly false.
When you drive a car and step on the brake, is that a “deliberate action” that’s “discontinuous”? Classically, it seems obvious. PCT-wise, you’re begging the question.
From the PCT perspective, the so-called “action” of braking is a chain of controls looking something like:
Speed controller detects too-high speed, sets speed-change controller to “rapid decrease”
Speed-change controller detects discrepancy between current acceleration and desired deceleration, sets braking controller to “braking hard”
Braking controller notes we aren’t braking, sets foot position to “on brake”
Foot position controller detects foot is out of position, requests new leg position
Leg position controller detects out of position, requests new leg speed/direction
Leg speed controller detects not moving, requests increased muscle force
...etc., until
Foot position controller detects approaching correct position, and lowers requested movement speed, until desired position is reached
Speed controller observes drop of speed below its reference level, sets speed-change controller to “slow accelerate”
Speed-change controller notices that current deceleration is below “slow accelerate” reference, sets “gas” controller to “slight acceleration”
...and so on, until speed stabilizes… and the foot goes up and down slightly on the gas… all very continuously.
So, there is nothing at all “discontinuous” about this. (Modulo the part where nerves effectively use pulse-width modulation to communicate “analog” values).
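To make the “chain of controllers” idea concrete, here is a minimal sketch, in Python, of a cascade in which each controller’s output becomes the reference signal of the controller below it. The class, the gains, and the toy car physics are assumptions invented for this sketch; they are not taken from Powers’ model or from any PCT software.

```python
# Minimal sketch of a PCT-style controller cascade (illustrative only).
# Each controller compares a perceived value to its reference and emits an
# output; that output becomes the reference of the controller below it.
# The gains and the toy car physics are made-up assumptions.

class Controller:
    def __init__(self, gain):
        self.gain = gain
        self.reference = 0.0

    def step(self, perception):
        error = self.reference - perception
        return self.gain * error   # output, handed down as the next level's reference


speed = 30.0                      # m/s, what the top controller perceives
pedal_force = 0.0                 # what the bottom controller actually adjusts
dt = 0.05                         # seconds per simulation tick

speed_ctrl = Controller(gain=0.8)   # controls perceived speed
accel_ctrl = Controller(gain=2.0)   # controls perceived acceleration
pedal_ctrl = Controller(gain=1.5)   # controls perceived pedal force

speed_ctrl.reference = 15.0         # set from above, e.g. "slower than the car ahead"

for _ in range(400):
    accel = pedal_force                               # toy physics: pedal force maps to acceleration
    accel_ctrl.reference = speed_ctrl.step(speed)     # speed error -> desired acceleration
    pedal_ctrl.reference = accel_ctrl.step(accel)     # acceleration error -> desired pedal force
    pedal_force += pedal_ctrl.step(pedal_force) * dt  # pedal force error -> muscle adjustment
    speed += accel * dt                               # the world integrates acceleration into speed

print(round(speed, 1))   # settles near the 15.0 reference
```

The point of the sketch is only that nothing in the loop ever looks like a discrete “step on the brake” action; the pedal force, acceleration, and speed all vary continuously as each level keeps its own error small.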
And it’s precisely this stable continuity of design that makes PCT so elegant; it requires very little coordination (except hierarchically), and it scales beautifully, in the sense that mostly-identical control units can be used. Got a more complex animal? Need more sophisticated behavior? Just add controllers, or new layers of controllers.
Need a new skill? Learning grows in or assigns some new controllers that measure derived perceptual quantities like “speed of the car”, “braking”, and “putting on the gas”. (Which explains why procedural knowledge is more persistent than propositional knowledge—the controllers represent a hardware investment in knowledge.)
And within this model, actions are merely side-effects of disturbances to the regulated levels of perceptual variables, such as speed. I stopped tracing the hierarchy upward at the speed controller noticing a speed discrepancy, but the reason for that discrepancy could be you noticing you’re late, or it could be that your “distance to next car” controller has issued a request to set the new “desired speed” to “less than the car in front of us”. In either case, the “action” is the same, regardless of what “goal”—or more likely, disturbance—caused it to occur.
That being said, PCT does include “sequence” and “program” controller layers that can handle doing things in a particular sequence, or branching. However, even these are modeled in terms of a perceptual control hierarchy, a la TOTE loops. That is, you can build TOTE loops by wiring controllers together in relatively simple ways.
Reification of programs and “actions” through controller hierarchies is also a good strategy for building a fast machine out of slow components. Rather than share a few ultra-fast, complex components, PCT hierarchies depend on chains of similar, simultaneously-responding, cheap/dumb components, such that the fastest responses are required from the components that are generally nearest (network-wise) to the place where the signals need to be received or delivered to exert control.
These are just some of the obvious properties that make PCT-style design a good set of tradeoffs for designing living creatures under constraints similar to evolution’s. (Such as the need to be able to start with primitive versions of the model, and gradually scale up from there.)
Okay, and what epistemic profit does this approach gain for you
As I said, it gives me a better idea of what to look for. After grasping PCT, I was able to identify certain “bugs” in my brain that had previously been more elusive. The time and hierarchy distinctions made it possible for me to identify what I was controlling for, rather than just looking at discrete action triggers, as I did in the past.
In this area, PCT provides a more compact model of what psychologists call “secondary gain”, hypnosis people call “symptom conversion”, and NLP people call “ecology”.
The idea is that when you take away one path for someone to get something (e.g. giving up smoking), they may end up doing something else to satisfy a need that was previously supported by the old behavior (e.g. chewing gum).
What psychologists, NLPers, and hypnosis people never had a good explanation for (AFAIK) is why it takes time for this substitution or reversion to occur! Similarly, why does it take time for people to stop persisting at trying to do something new?
This is an example of a complex behavioral property of humans that falls directly out of the PCT model without any specific attempt to generate it. Since high-level goals are integrated over a longer time period, it takes time for the error signal to rise, and then further time for the controller network reorganization process (part of the PCT model of learning) to find an alternative or extinguish the changed behavior.
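A toy way to see how the delay falls out of error integration: if the high-level error has to accumulate past a threshold before reorganization kicks in, the substitution necessarily takes time, and loops with shorter integration windows (modeled here as a lower threshold) respond sooner. The numbers below are invented purely for illustration, not taken from the PCT literature.

```python
# Toy sketch: a high-level "need" whose error accumulates once the old
# behavior stops satisfying it. Reorganization (finding a substitute
# behavior) only triggers after the accumulated error crosses a threshold.
# The leak rate, daily error, and thresholds are made-up numbers.

def days_until_reorganization(leak=0.9, error_per_day=1.0, threshold=5.0):
    accumulated = 0.0
    day = 0
    while accumulated < threshold:
        day += 1
        # Leaky integration: yesterday's error decays a bit, today's is added.
        accumulated = accumulated * leak + error_per_day
    return day

print(days_until_reorganization())             # the high-level loop takes several "days" to trip
print(days_until_reorganization(threshold=2))  # a lower-level, faster loop trips much sooner
```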
I find PCT parsimonious because there are so many little quirks of human nature I know about that would naturally be expected to occur if behavior were control-system driven in precisely the ways PCT predicts it is… but which are just weird and/or unexplained under any other model that I know of.
From the PCT perspective, the so-called “action” of braking is a chain of controls looking something like: [...]
Okay, thank you, that was exactly the kind of answer I was looking for, in terms of breaking down (what is framed by us non-PCTers as) a discrete list of actions into hierarchical feedback loops and what they’re using for comparison. Much appreciated.
But just the same, I think your explanation illuminates my complaint about the usefulness of the model. As far as I can tell, you just took a list of discrete steps and rephrased them as continuous values. So far, so good, but all I see is added complexity. Let me explain.
I would describe my steps in baking a cake (and of course this abstracts away from lower level detail) as:
1) Open preheated oven.
2) Place pan containing batter onto middle of middle oven rack.
3) Close oven.
4) Set timer.
Your claimed improvement over this framing of these events is:
1) Define variable for oven openness. Recognize it’s zero and push it toward 1.
2) Define variable for pan distance from middle of middle oven rack. Recognize it’s too high and push it toward zero.
3) Recognize oven openness is 1 and should be zero, push it in that direction.
4) Define variable for oven-timer-value-appropriateness. Recognize it’s too low and move it higher.
Yes, superficially, you’ve made it all continuous, but only by positing new features of the model, like some neural mechanism isomorphic to “detection of oven-timer-value-appropriateness”, which requires you to expand that out into another complex mechanism.
I agree, as I’ve said before, that this is one way to rephrase what is going on. But it doesn’t simplify the problem; it forces you to identify the physical correlate of “making sure the oven’s set to the right time” in a form that I’m not convinced is appropriate for the problem. Why isn’t it appropriate?
Among other things, you’re forced to solve the object recognition problem and identify a format for comparison. But if I’ve solved the (biological) object recognition problem, my model can simply invoke the actual neural mechanism being used, without the added complexity of reformatting the causal flow into feedback loops.
You defend this model by its elegance, but you only get the elegance after you solve the problem some other way. That is, I only have an elegant hierarchical feedback loop if I can, somehow, solve the object recognition problem that allows me to actually specify a reference and feedback signal. A model isn’t any good if it presupposes the solution of the problem it’s being used to solve.
Hope that clarifies where I’m coming from.
I would describe my steps in baking a cake (and of course this abstracts away from lower level detail)
You could describe them that way, yes, and that would nominally describe the behaviors you emit. However, it’s trivial to prove that this does not describe the implementation in your head that emits those behaviors!
For example, you might forget to preheat the oven, in which case the order of your steps is going to change. There are any number of disruptions that can occur in your sequence of “steps” that will cause you to change your actions to work around them, with varying degrees of automaticity, depending on how high in your control hierarchy the disruption reaches.
A simple disruption, like a spill on the floor you need to walk around, will be handled with barely any conscious notice, while a complex disruption, like the power being off (and the oven therefore not working), will induce more complex behavior requiring conscious attention.
If a sequence of steps could actually describe human behavior, we could feed your list of steps to a computer and get it done. The list actually omits some of the most important information: the goals of the steps, and how to tell whether they’ve been achieved.
And that’s information our brains have to know and use in order to actually carry out behavior. We tend to assume we do this by “thinking”, but we usually only “think” in order to handle high-level disturbances that require rearrangement of goals, rather than just using existing control systems to work around the disturbance.
Your claimed improvement over this framing of these events is: [list I wouldn’t use to describe it]
When you get to higher levels of modeling, you can certainly deal with sequences of subgoals defined as matches against perceptual patterns like “cake is in the oven”. Did you read the TOTE loops reference I gave? TOTE loops act on something until it reaches a certain state, then activate another TOTE loop. PCT incorporates the previously-proposed cog psych notion of TOTE loops, and proposes some ways to build TOTE loops out of simpler controllers.
Part of the modeling elegance of using controllers and TOTE loops to implement overall behavioral programs is that they allow you to notice, for example, that your assistant chef has already set the timer for you as you placed the cake in the oven… and thereby skip the need to perform that step.
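A rough sketch of the difference between a bare step list and a sequence of TOTE loops: each step below carries a test of the perception it is supposed to produce, so a step whose condition is already satisfied (the assistant chef set the timer) is skipped with no special-case logic, and a disturbed step would simply be retried. The world dictionary and step names are invented for illustration; this shows the behavioral shape of the idea, not how PCT proposes the wiring is actually done.

```python
# Illustrative TOTE-style (Test-Operate-Test-Exit) sequence.
# Each step pairs a goal test (the perception to be matched) with an operation
# that pushes the world toward it. A step whose test already passes is skipped.
# The "world" dict and the step names are made up for illustration.

world = {"oven_open": False, "pan_in_oven": False, "timer_set": True}  # assistant already set the timer

steps = [
    ("open oven",  lambda w: w["oven_open"],     lambda w: w.update(oven_open=True)),
    ("insert pan", lambda w: w["pan_in_oven"],   lambda w: w.update(pan_in_oven=True)),
    ("close oven", lambda w: not w["oven_open"], lambda w: w.update(oven_open=False)),
    ("set timer",  lambda w: w["timer_set"],     lambda w: w.update(timer_set=True)),
]

for name, test, operate in steps:
    while not test(world):       # Test: is the goal perception present?
        print("doing:", name)    # Operate: act to reduce the discrepancy, then Test again
        operate(world)
    # Exit: goal satisfied, hand off to the next loop in the sequence.
    # "set timer" never prints "doing:", because its test already passes.
```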
Among other things, you’re forced to solve the object recognition problem and identify a format for comparison. But if I’ve solved the (biological) object recognition problem, my model can simply invoke the actual neural mechanism being used, without the added complexity of reformatting the causal flow into feedback loops.
This I don’t get. We already know that (visual) object recognition can be implemented by time-varying hierarchical feature sequences tied to the so-called “grandmother neurons”. Both HTM and PCT include this concept, except that PCT proposes you get an analog “grandmotherness” signal, whereas IIRC the HTM model assumes it’s a digital “grandmother present” signal. But HTM at least has pattern recognition demos that handle automatic learning and pattern extraction from noisy inputs, and it uses exactly the same sort of recognition hierarchy that the full PCT model calls for.
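For what it’s worth, the analog-versus-digital distinction being drawn here is easy to state as a toy sketch: a recognizer can report a continuous degree of match, which a downstream controller can treat as a controllable perception, or that score can be thresholded into a present/absent bit. The features and weights below are invented; neither PCT nor HTM specifies anything like this particular scoring.

```python
# Toy sketch of an analog vs. digital recognition signal.
# A recognizer scores how well observed features match a stored template.
# PCT-style, a downstream controller would receive the continuous score;
# a binary "present/absent" signal just thresholds it.
# Feature names and weights are invented for illustration.

template = {"grey_hair": 1.0, "glasses": 0.7, "familiar_voice": 1.5}

def grandmotherness(observed):
    """Continuous degree-of-match in [0, 1]."""
    total = sum(template.values())
    score = sum(w for f, w in template.items() if observed.get(f))
    return score / total

observed = {"grey_hair": True, "glasses": False, "familiar_voice": True}

analog_signal = grandmotherness(observed)   # e.g. 0.78, usable as a perception to control
digital_signal = analog_signal > 0.5        # thresholded "grandmother present" bit
print(analog_signal, digital_signal)
```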
That’s why I keep saying that if you want to see the entire PCT model, you need to read the book. Most of the primers either talk about low-level stuff or high-level stuff. Object recognition, action sequences, and that sort of thing are all in the middle layers that make up the bulk of the book.
You defend this model by its elegance, but you only get the elegance after you solve the problem some other way. That is, I only have an elegant hierarchical feedback loop if I can, somehow, solve the object recognition problem that allows me to actually specify a reference and feedback signal. A model isn’t any good if it presupposes the solution of the problem it’s being used to solve.
Note, by the way, that this is backwards when applied to listing steps and calling it a model. The steps cannot be used to actually predict behavior, because they list only the nominal case, where everything goes according to plan. The extra information that PCT forces you to include results in a more accurate model—one that does not simply elide or handwave away the parts of behavior that we intuitively ignore.
That is, the parts we don’t usually bother communicating to other human beings, because we assume they’ll fill in the gaps.
PCT is useful because it shows where those gaps are and what is needed to fill them in, in much the same way that the initial description of evolution identified gaps in our knowledge about biology, and what information was needed to fill them in, in place of the assumptive handwaving that “God did it”. In the same way, we currently handwave away most of what we don’t understand about behavior as “X did it”, where X is some blurry entity or other, such as learning, environment, intelligence, habit, genetics, or conditioning.
(And no, PCT doesn’t merely replace X with “control systems”, because it shows HOW control systems can “do it”, whereas other values of X simply stop the explanation at that point.)