All I can say is, why haven’t you posted MORE about this? Your post about control systems seemed to mainly be an argument against brains having models, but PCT doesn’t demand a lack of models, and in any case it’s obvious that brains do model, and they model predictively as well as reflecting current states. And you didn’t mention any of the things that make PCT actually interesting as a behavioral description in human beings. PCT pretty much explains everything that I would’ve wanted to cover in my post sequence on what akrasia really is and how it works, only from a different angle and with a better conceptual connection between the pieces.
My post on models was really a collective reply to comments on one aspect of my original post on PCT. I have been meaning to post more, but haven’t found the time or the inspiration to formulate something substantial yet.
Wait, what? Your two top-level posts weren’t about anything specific to human Perceptual Control Theory, just background control theory, and the whole time I didn’t really see the point. I was thinking,
“Sure, you can model humans as controllers that receive some reference and track it, just as you can model a human as a set of “if-then” loops, but so what? How would that model do any good compressing our description of how human minds work? By the time you’ve actually described what reference someone is tracking (or even a sub-reference like “sexiness”) and how observations are converted into a format capable of being compared, you’ve already solved the problem.”
I wish I had made the point earlier, but I was waiting for a more explicit application to a problem involving a human, which I assumed you had.
By the time you’ve actually described what reference someone is tracking (or even a sub-reference like “sexiness”) and how observations are converted into a format capable of being compared, you’ve already solved the problem
Yes, and that’s precisely what’s useful. That is, it identifies that to solve anyone’s problems, you need only identify the reference values, and find a way to reorganize the control system to either set new reference values or have another behavior that changes the outside world to cause the new reference to be reached. (This is essentially the same idea as Robert Fritz’s structural consulting, except that Fritz’s model is labeled as being about “decisions” rather than “reference values”.)
The main difference between PCT and other Things That Work is that PCT is a testable scientific hypothesis that includes many specific predictions of functional operations in the brain and nervous system that would reductionistically explain how the various Things That Work, do so.
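To make “reference values” concrete, here’s a minimal toy loop (all the numbers are invented for illustration): a system that acts so that its perception matches a reference, even while the environment pushes back.

```python
# Toy negative-feedback control loop: the system acts so that its
# *perception* tracks a reference value, despite an outside disturbance.
reference = 10.0   # what the system "wants" to perceive
output = 0.0       # its action on the world
gain = 0.5         # how strongly error drives action

for step in range(100):
    disturbance = 4.0                    # constant push from the environment
    perception = output + disturbance    # what the system actually senses
    error = reference - perception       # comparator
    output += gain * error               # action reduces the error

print(round(perception, 3))  # ≈ 10.0: perception matches the reference
print(round(output, 3))      # ≈ 6.0: output exactly opposes the disturbance
```

Note that the “behavior” (the output) ends up determined by the disturbance, not by the reference alone; from the outside you’d see varying actions, while the controlled perception barely moves.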
Sure, you can model humans as controllers that receive some reference and track it, just as you can model a human as a set of “if-then” loops, but so what?
The difference is that humans are not like control systems, they are control systems, and are not and cannot be modelled as sets of “if-then” loops, whatever those are supposed to be.
How would that model do any good compressing our description of how human minds work?
That presumes we already have such a description. PCT provides one. It provides the possibility to obtain actual understanding of the matter. Nothing else has yet done that.
And if people are control systems—that is, they vary their actions to obtain their intended perceptions—then that implies that the traditional methods of experimental psychology are invalid. Correlating experimental stimuli and subjects’ responses tells you nothing. Here’s a psychologist writing on this, the late Philip Runkel.
How would that model do any good compressing our description of how human minds work?
That presumes we already have such a description.
We do: it’s the set of all observations of human behavior. The goal of science (or rationality) is to find ever-simpler ways of explaining (describing) the data. The worst-case scenario is to explain the data by simply restating it. A theory allows you to describe past data without simply restating it because it gives you a generative model.
(There’s probably some LW wiki entry or Eliezer_Yudkowsky post I should reference to give more background about what I’m talking about, but I think you get the idea and it’s pretty uncontroversial.)
That was the standard I was holding your post to: does this description of human behavior as “tweaking outputs to track a reference” help at all to provide a concise description of human behavior? Once again, I find myself trying to side-step a definitional dispute with you (over whether humans “are control systems”) by identifying the more fundamental claim you’re making.
Here, your claim is that there’s some epistemic profit from describing human behavior as a control system. I completely agree that it can be done, just like you can describe human behavior with a long enough computer program. But does this approach actually simplify the problem, or just rename it? I am skeptical that it simplifies because, like I said before, any reference-being-tracked in your model must itself have all the answers that you’re trying to use the model for.
What Vladimir_Nesov said. And,
We do: it’s the set of all observations of human behavior. The goal of science (or rationality) is to find ever-simpler ways of explaining (describing) the data. The worst-case scenario is to explain the data by simply restating it. A theory allows you to describe past data without simply restating it because it gives you a generative model.
(There’s probably some LW wiki entry or Eliezer_Yudkowsky post I should reference to give more background about what I’m talking about, but I think you get the idea and it’s pretty uncontroversial.)
I get the idea and am familiar with it, but I dispute it. There’s a whole lot of background assumptions there to take issue with. Specifically, I believe that:
Besides being consistent with past data, a theory must be consistent with future data as well, data that did not go into making the theory.
Besides merely fitting observations, past and future, a theory must provide a mechanism: a description not merely of what will be observed, but of how the world produces those observations. It must, in short, be a response to clicking the Explain button.
Description length is not a useful criterion for either discovering or judging a theory. Sure, piling up epicycles is bad, but jumping from there to Kolmogorov complexity as the driver is putting the cart before the horse. (Need I add that I do not believe the Hutter Prize will drive any advance in strong AI?)
But having stated my own background assumptions, I shall address your criticisms in terms of yours, to avoid digressing to the meta-level. (I don’t mind having that discussion, but I’d prefer it to be separate from the current thread. I am sure there are LW wiki entries or EY postings bearing on the matter, but I don’t have an index to them etched on the inside of my forehead either.)
Here, your claim is that there’s some epistemic profit from describing human behavior as a control system. I completely agree that it can be done, just like you can describe human behavior with a long enough computer program. But does this approach actually simplify the problem, or just rename it?
This approach actually simplifies the problem. (It also satisfies my requirements for a theory.)
Here, for example (applet on a web page), is a demo of a control task. I actually wanted to cite another control demo, but I can’t find it online. (I am asking around.) This other program fits a control model to the human performance in that task, with only a few parameters. Running the model on the same data presented to the human operator generates a performance correlating very highly with the human performance. It can also tell the difference between different people doing the same task, and the parameters it finds change very little for the same person attempting the task across many years. Just three numbers (or however many it is, it’s something like that) closely fits an individual’s performance on the task, for as long as they perform it. Is that the sort of thing you are asking for?
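To give a flavor of what fitting such a model looks like, here’s a toy reconstruction. The “human” data below is synthetic (generated by a control law plus motor noise), so this only illustrates the fitting procedure, not the actual demo I’m describing:

```python
import random

random.seed(0)

# Synthetic "operator": a cursor chasing a square-wave target with some
# internal gain plus motor noise. In the real demo this would be a
# recording of a person doing the tracking task.
def square_target(steps):
    return [5.0 + 5.0 * ((t // 50) % 2) for t in range(steps)]

def run_operator(gain, steps=400):
    target = square_target(steps)
    pos, trace = 0.0, []
    for t in range(steps):
        pos += gain * (target[t] - pos) + random.gauss(0.0, 0.05)
        trace.append(pos)
    return target, trace

target, human = run_operator(gain=0.3)

# The "model" is the same control law with no noise; fit its single
# parameter by grid search over mean squared error against the human run.
def model_trace(gain):
    pos, trace = 0.0, []
    for t in range(len(target)):
        pos += gain * (target[t] - pos)
        trace.append(pos)
    return trace

def mse(gain):
    m = model_trace(gain)
    return sum((a - b) ** 2 for a, b in zip(m, human)) / len(human)

best = min((g / 100.0 for g in range(5, 95)), key=mse)
print(best)  # lands near the 0.3 that generated the data
```

One fitted number reproduces the whole run; the real demo does the same with a handful of parameters against a real person’s performance.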
Thanks for the detailed reply; I’d like to have the metadiscussion with you, but where exactly would you consider a better place to have it? I have a reply to you on “why mutual information = model” that is not yet completed, so I guess I could start another top-level post that addresses these issues.
Anyway:
This other program fits a control model to the human performance in that task, with only a few parameters. … Just three numbers (or however many it is, it’s something like that) closely fits an individual’s performance on the task, for as long as they perform it. Is that the sort of thing you are asking for?
Unfortunately, no. It’s not enough to show that humans play some game using a simple control algorithm that happens to work for it. You claimed that human behavior can be usefully described as tweaking output to control some observed variable. What you would need to show, then, is this model applied to behavior for which there are alternate, existing explanations.
For example, how does the controller model fit in with mate selection? When I seek a mate, what is the reference that I’m tracking? How does my sensory data get converted into a format that compares with the reference? What is the output?
I choose this example because it’s an immensely difficult task just to program object recognition. To say that my behavior is explained by trying to track some reference we don’t even know how to define, and by applying an operation to sense data we don’t understand yet, does not look like a simplification!
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
Remember, to say you “want” something or that you “recognize” something hides an immense amount of complexity, which is why I don’t see how it helps to restate these problems as control problems.
Unfortunately, no. It’s not enough to show that humans play some game using a simple control algorithm that happens to work for it.
It doesn’t “just happen” to work. It works for the same reason that, say, a chemist’s description of a chemical reaction works: because the description describes what is actually happening.
Besides, according to the philosophy you expressed, all that matters is compressing the data. A few numbers to compress with high fidelity an arbitrarily large amount of data is pretty good, I would have thought. ETA: Compare how just one number, the local gravitational strength, suffices to predict the path of a thrown rock, given the right theory.
Experiments based on PCT ideas routinely see correlations above 0.99. This is absolutely unheard of in psychology. Editors think results like that can’t possibly be true. But that is the sort of result you get when you are measuring real things. When you are doing real measurements, you don’t even bother to measure correlations, unless you have to talk in the language of people whose methods are so bad that they are always dealing with statistical fog.
You claimed that human behavior can be usefully described as tweaking output to control some observed variable. What you would need to show, then, is this model applied to behavior for which there are alternate, existing explanations.
The alternate, existing explanations are worth no more than alchemical theories of four elements. It’s possible to go back and look at the alchemists’ accounts of their experiments, but there’s really not much point except historical interest. They were asking the wrong questions and making the wrong observations, using wrong theories. Even if you can work out what someone was doing, it isn’t going to cast light on chemistry, only on history.
For example, how does the controller model fit in with mate selection? When I seek a mate, what is the reference that I’m tracking? How does my sensory data get converted into a format that compares with the reference? What is the output?
You’re demanding that the new point of view instantly explain everything. But FWIW, when you seek a mate, the reference is, of course, having a mate. You perceive that you do not have one, and take such steps as you think appropriate to find one. If you want a detailed account right down to the level of nerve impulses of how that all happens—well, anyone who could do that would know how to build a strong AI. Nobody knows that, yet.
A theory isn’t a machine that will give you answers for free. ETA: Newtonian mechanics won’t hand you the answer to the N-body problem on a plate.
See pjeby’s reply. He gets it.
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
You’re demanding that the new point of view instantly explain everything.
I’m demanding that it explain exactly what you claimed it could explain: behavior!
FWIW, when you seek a mate, the reference is, of course, having a mate. You perceive that you do not have one, and take such steps as you think appropriate to find one. If you want a detailed account right down to the level of nerve impulses of how that all happens—well, anyone who could do that would know how to build a strong AI. Nobody knows that, yet.
Yep, that confirms exactly what I was expecting: you’ve just relabeled the problem; you haven’t simplified it. Your model tells me nothing except “this is what you could do, once you did all the real work in understanding this phenomenon, which you got some other way”.
A theory isn’t a machine that will give you answers for free. ETA: Newtonian mechanics won’t hand you the answer to the N-body problem on a plate.
Poor comparison. Newtonian mechanics doesn’t give me an answer to the general n-body problem, but it gives me more than enough to generate a numerical solution to any specific n-body problem.
Your model isn’t even in the same league. It just says the equivalent of, “Um, the bodies move in a, you know, gravitational-like manner, they figure out where gravity wants them to go, and they bring that all, into effect.”
It feels like an explanation, but it isn’t. The scientific answer would look more like, “The net acceleration any body experiences is equal to the vector sum of the forces on the body obtained from the law of gravitation, divided by its mass. To plot the paths, start with the initial positions and velocities, find the accelerations, and then update the positions and start over.”
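That recipe really is just a few lines of code. Here it is for two toy bodies on a line (made-up masses and units, and a crude Euler step):

```python
# Two bodies on a line under Newtonian gravity, integrated exactly as
# described: find accelerations from current positions, update
# velocities and positions, repeat.
G = 1.0          # gravitational constant in toy units
x = [0.0, 1.0]   # positions (unit masses)
v = [0.0, 0.0]   # velocities
dt = 0.001

for _ in range(100):
    r = x[1] - x[0]
    a = G / r**2     # magnitude of each body's acceleration (m = 1)
    v[0] += a * dt   # body 0 is pulled toward body 1
    v[1] -= a * dt   # body 1 is pulled toward body 0
    x[0] += v[0] * dt
    x[1] += v[1] * dt

print(x)  # the bodies have crept toward each other
```

That is the difference between a generative theory and a relabeling: the update rule actually produces the trajectories.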
Just as an aside, I don’t think PCT is the ultimate solution to modeling humans in their entirety. I think Hawkins’ HTM model is actually a better description of object and pattern recognition in general, but there aren’t any significant conflicts between HTM and PCT, in that both propose very similar hierarchies of control units. The primary difference is that HTM emphasizes memory-based prediction rather than reference-matching, but I don’t see any reason why the same hierarchical units couldn’t do both. (PCT’s model includes localized per-controller memory much like HTM does, and suggests that memory is used to set reference values, in much the same way that HTM describes memory being used to “predict” that an action should be taken.)
The main modeling limitation that I see in PCT is that it doesn’t address certain classes of motivated behavior as well as Ainslie’s model of conditioned appetites. But if you glue Ainslie, HTM, and PCT together, you get pretty decent overall coverage. And HTM/PCT are a very strong engineering model for “how it’s probably implemented”, i.e. HTM/PCT models are more or less the simplest things I can imagine building, that do things the way humans appear to do them. Both models look way too simple to be “intelligence”, but that’s more a reflection of our inbuilt mind-projection tendencies than a flaw in the models!
On a more specific note, though:
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
“You” don’t track; your brain’s control units do. And they do so in parallel—which is why you can be conflicted. Your reference for going to work every day might be part of the concept “being responsible” or “being on time”, or “not getting yelled at for lateness”, or whatever… possibly more than one.
In order to find out which reference(s) are relevant, you have to do what Powers refers to as “The Test”—that is, selecting hypothesized controlled variables and then disturbing them to see whether they end up being stabilized by your behavior.
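A toy version of the Test is easy to simulate (everything here is invented for illustration): push on a variable and see whether the agent’s actions cancel the push.

```python
# "The Test": disturb a hypothesized controlled variable. If it is under
# control, the agent's actions oppose the disturbance and the variable
# barely moves; if not, the disturbance shows up in full.
def disturb(controlled):
    value, action = 5.0, 0.0
    for step in range(200):
        disturbance = 3.0 if step >= 100 else 0.0
        value = 5.0 + disturbance + (action if controlled else 0.0)
        if controlled:
            action += 0.5 * (5.0 - value)   # agent acts against the error
    return value

print(disturb(controlled=True))    # stays near the reference of 5.0
print(disturb(controlled=False))   # drifts to 8.0: not defended
```

The controlled variable is the one your behavior defends; the uncontrolled one just gets pushed around.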
In practice, it’s easier with a human, since you can simply imagine NOT going to work, and notice what seems “bad” to you about that, at the somatic response (nonverbal or simple-verbal, System 1) level. Not at all coincidentally, a big chunk of my work has been teaching people to contradict their default responses and then ask “what’s bad about that?” in sequence to identify higher-level control variables for the behavior. (At the time I started doing that, I just didn’t have the terminology to explain what I was doing or why it was helpful.)
Of course, for some people, asking what’s bad about not going to work will produce confusion, because they’re controlling for something good about going to work… like having an exciting job, wanting to get stuff done, etc… but those folks are probably not seeing me about a motivational problem with going to work. ;-)
But does this approach actually simplify the problem, or just rename it?
The best answer to this particular question is the book, Behavior: The Control of Perception. In a way, it’s like a miniature Origin of Species, showing how you can build up from trivial neural control systems to complex behavior… and how these levels more or less match various stages of development in a child’s first year of life. It’s a compelling physical description and hypothesis, not merely an abstract idea like “hey, let’s model humans as control systems.”
The part that I found most interesting is that it provides a plausible explanation for certain functions being widely distributed in the brain, and thereby clarified (for me anyway) some things that were a bit fuzzy or hand-wavy in my own models of how memory, monoidealism, and inner conflict actually work. (My model, being software-oriented, tended to portray the brain as a mostly-unified machine executing a program, whereas PCT shows why this is just an illusion.)
The difference is that humans are not like control systems, they are control systems, and are not and cannot be modelled as sets of “if-then” loops, whatever those are supposed to be.
Humans consist of atoms. The statement was obviously about humans being modelable as physical/[digital computation] processes, forgetting all the stuff about intelligence and control.
The difference is that humans are not like control systems, they are control systems, and are not and cannot be modelled as sets of “if-then” loops, whatever those are supposed to be.
Er, isn’t that the “program” level of Powers’s model? IOW, his model shows how you can build up from more fundamental control structures to get complex “programs”. See chapters 13-18 of Behavior: The Control Of Perception. (At least in the 2nd edition, which is all I’ve read.)
The freebie at livingcontrolsystems.com, followed by “Behavior: The Control of Perception” and “Freedom From Stress”. I figured I’d balance the deep theory version and the layman’s practical version to get the best span of info in the shortest time.
What is “the freebie at livingcontrolsystems.com”? Are you collectively referring to the introductory papers? Or something more specific? ETA: I plan to read, from that page, the 10 minute intro, neglected phenomenon, and underpinnings to start with.
Are you collectively referring to the introductory papers?
No, the “book of readings”, although there appears to be a huge overlap between the book and the papers you linked to.
One of the incredibly unfortunate things about that website is that it spends way too much time talking about what you’ll learn once you understand PCT, compared to how much time it spends actually explaining PCT. OTOH, my guess is that most of the people whose writing is quoted there have already read the key book (Behavior: The Control Of Perception), and find it hard to explain the concepts in a much shorter form. However, there are a few good chapters and a lot of small insights in the other chapters of the “book of readings”, so that by the end I was at least convinced enough to plunk down 35 bucks for the 2nd edition of B:CP (as it’s usually abbreviated).
For my purposes, it really didn’t hurt that my mind was already on ideas rather similar to reference levels, based on some recent change experiences (not to mention some discussions here), so I was quite motivated to dig through the testimonials to find some actual meat.
One of the other really useful bits on the site is Powers’ 1979 robotics articles for Byte magazine, which I didn’t find before I bought the book, and which might be an adequate substitute for some portion of the book, if you read all four of them.
One insightful tidbit from the second article:
We have now established the fact that using natural logic and following causes and effects around the closed loop as a sequence of events will lead to a wrong prediction of control system behavior. This immediately eliminates three-quarters of what biologists, psychologists, neurologists, and even cyberneticians have published about control theory and behavior. We are just beginning to see that one must view all the variables in a control system as changing together, not one at a time. This is what I mean by retraining the intuition. Cartesian concepts of cause and effect, and Newtonian physics, have trained us to think along directed lines. What we need to do to understand control systems is to learn how to think in circles.
The above came just after he painstakingly goes through the discrete math (using BASIC code) to show why the intuitive math for a certain control system is wrong, in that it leads to an unstable system… whereas the simpler, PCT-based approach results in more robust behavior. Another tidbit:
You will notice that doubling the error sensitivity, which doubles the amount of output generated by a given error, does not double the amount of output that actually occurs. Far from it. When, for any reason, the loop gain goes up, the steady state error simply gets smaller, assuming that the system remains stable. This fact does violence to the popular idea that the brain commands muscles to produce behavior. If that were the case, doubling the sensitivity of a muscle to the nerve signals reaching it ought to produce twice as much muscle tension. Nothing of the sort happens, unless you’ve lopped off the rest of the nervous system, particularly the feedback paths.
Basically (no pun intended), the articles describe a series of models and simulations (written in BASIC) that demonstrate the basic principles and models of behavior being generated by hierarchies of negative-feedback control, where the output of a “higher” control layer determines the reference values for the “lower” layers, and why this is a far more parsimonious and robust model of what living creatures appear to be doing, than more traditional models.
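The loop-gain point from that second excerpt is easy to reproduce numerically. Below is a toy proportional loop (all constants invented); the 0.1 factor is a sluggish output stage so the simulation settles instead of oscillating:

```python
# Proportional control loop in toy units: output = gain * error, run to
# steady state against a constant disturbance. Doubling the "error
# sensitivity" (gain) should shrink the error, not double the output.
def steady_state(gain, reference=10.0, disturbance=2.0, steps=1000):
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance
        error = reference - perception
        output += 0.1 * (gain * error - output)  # lagged proportional output
    return error, output

e1, o1 = steady_state(gain=5.0)
e2, o2 = steady_state(gain=10.0)  # "double the error sensitivity"
print(o2 / o1)  # ≈ 1.09, nowhere near 2: the output barely changes
print(e2 / e1)  # ≈ 0.55: the steady-state error is what shrinks
```

For this kind of static loop the steady-state error works out to (reference − disturbance)/(1 + gain), which is exactly the “doubling sensitivity doesn’t double output” effect Powers describes.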
Glad you like it. :-)
My post on models was really a collective reply to comments on one aspect of my original post on PCT. I have been meaning to post more, but haven’t found the time or the inspiration to formulate something substantial yet.
Which of the PCT materials have you been reading?
What Vladimir_Nesov said. And,
We do: it’s the set of all observations of human behavior. The goal of science (or rationality) is to find ever-simpler ways of explaining (describing) the data. The worst case scenario is to explain the data by simply restating it. A theory allows you describe past data without simply restating it because it gives you a generative model.
(There’s probably some LW wiki entry or Eliezer_Yudkowsky post I should reference to give more background about what I’m talking about, but I think you get the idea and it’s pretty uncontroversial.)
That was the standard I was holding your post to: does this description of human behavior as “tweaking outputs to track a reference” help at all to provide a concise description of human behavior? Once again, I find myself trying to side-step a definitional dispute with you (over whether humans “are control systems”) by identifying the more fundamental claim you’re making.
Here, your claim is that there’s some epistemic profit from describing human behavior as a control system. I completely agree that it can be done, just like you can describe human behavior with a long enough computer program. But does this approach actually simplify the problem, or just rename it? I am skeptical that it simplifies because, like I said before, any reference-being-tracked in your model must itself have all the answers that you’re trying to use the model for.
I get the idea and am familiar with it, but I dispute it. There’s a whole lot of background assumptions there to take issue with. Specifically, I believe that:
Besides being consistent with past data, a theory must be consistent with future data as well, data that did not go into making the theory.
Besides merely fitting observastions, past and future, a theory must provide a mechanism, a description not merely of what will be observed, but of how the world produces those observations. It must, in short, be a response to clicking the Explain button.
Description length is not a useful criterion for either discovering or judging a theory. Sure, piling up epicycles is bad, but jumping from there to Kolmogorov complexity as the driver is putting the cart before the horse. (Need I add that I do not believe the Hutter Prize will drive any advance in strong AI?)
But having stated my own background assumptions, I shall address your criticisms in terms of yours, to avoid digressing to the meta-level. (I don’t mind having that discussion, but I’d prefer it to be separate from the current thread. I am sure there are LW wiki entries or EY postings bearing on the matter, but I don’t have an index to them etched on the inside of my forehead either.)
This approach actually simplifies the problem. (It also satisfies my requirements for a theory.)
Here, for example (applet on a web page), is a demo of a control task. I actually wanted to cite another control demo, but I can’t find it online. (I am asking around.) This other program fits a control model to the human performance in that task, with only a few parameters. Running the model on the same data presented to the human operator generates a performance correlating very highly with the human performance. It can also tell the difference between different people doing the same task, and the parameters it finds change very little for the same person attempting the task across many years. Just three numbers (or however many it is, it’s something like that) closely fits an individual’s performance on the task, for as long as they perform it. Is that the sort of thing you are asking for?
Thanks for the detailed reply; I’d like to have the metadiscussion with you, but what exactly would you consider a better place to have it? I’ve had a reply to you on “why mutual information = model” not yet completed, so I guess I could start another top-level post that addresses these issues.
Anyway:
Unfortunately, no. It’s not enough to show that humans play some game using a simple control algorithm that happens to work for it. You claimed that human behavior can be usefully described as tweaking output to control some observed variable. What you would need to show, then, is this model applied to behavior for which there are alternate, existing explanations.
For example, how does the controller model fit in with mate selection? When I seek a mate, what is the reference that I’m tracking? How does my sensory data get converted into a format that compares with the reference? What is the output?
I choose this example because it’s an immensely difficult task just to program object recognition. To say that my behavior is explained by trying to track some reference we don’t even know how to define, and by applying an operation to sense data we don’t understand yet, does not look like a simplification!
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
Remember, to say you “want” something or that “recognize” something hides an immense amount of complexity, which is why I don’t see how it helps to restate these problems as control problems.
It doesn’t “just happen” to work. It works for the same reason that, say, a chemist’s description of a chemical reaction works: because the description describes what is actually happening.
Besides, according to the philosophy you expressed, all that matters in compressing the data. A few numbers to compress with high fidelity an arbitrarily large amount of data is pretty good, I would have thought. ETA: Compare how just one number: local gravitational strength, suffices to predict the path of a thrown rock, given the right theory.
Experiments based on PCT ideas routinely see correlations above 0.99. This is absolutely unheard of in psychology. Editors think results like that can’t possibly be true. But that is the sort of result you get when you are measuring real things. When you are doing real measurements, you don’t even bother to measure correlations, unless you have to talk in the language of people whose methods are so bad that they are always dealing with statistical fog.
The alternate, existing explanations are worth no more than alchemical theories of four elements. It’s possible to go back and look at the alchemists’ accounts of their experiments, but there’s really not much point except historical interest. They were asking the wrong questions and making the wrong observations, using wrong theories. Even if you can work out what someone was doing, it isn’t going to cast light on chemistry, only on history.
You’re demanding that the new point of view instantly explain everything. But FWIW, when you seek a mate, the reference is, of course, having a mate. You perceive that you do not have one, and take such steps as you think appropriate to find one. If you want a detailed account right down to the level of nerve impulses of how that all happens—well, anyone who could do that would know how to build a strong AI. Nobody knows that, yet.
A theory isn’t a machine that will give you answers for free. ETA: Newtonian mechanics won’t hand you the answer to the N-body problem on a plate.
See pjeby’s reply. He gets it.
I’m demanding that it explain exactly what you claimed it could explain: behavior!
Yep, that confirms exactly what I was expecting: you’ve just relabeled the problem; you haven’t simplified it. Your model tells me nothing except “this is what you could do, once you did all the real work in understanding this phenomenon, which you got some other way”.
Poor comparison. Newtonian mechanics doesn’t give me an answer to the general n-body problem, but it gives me more than enough to generate a numerical solution to any specific n-body problem.
Your model isn’t even in the same league. It just says the equivalent of, “Um, the bodies move in a, you know, gravitational-like manner, they figure out where gravity wants them to go, and they bring that all, into effect.”
It feels like an explanation, but it isn’t. The scientific answer would look more like, “The net acceleration any body experiences is equal to the vector sum of the forces on the body obtained from the law of gravitation, divided by its mass. To plot the paths, start with the initial positions and velocities, find the accelerations, and then update the positions and start over.”
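That procedure is short enough to sketch directly. Here it is for two bodies in one dimension, with the constant, masses, and initial conditions invented for illustration:

```python
import math

G = 1.0                # gravitational constant (arbitrary units, assumed)
m = [1.0, 1.0]         # masses
x = [0.0, 1.0]         # initial positions
v = [0.0, 0.0]         # initial velocities
dt = 0.001             # time step

def accelerations(x):
    """Net acceleration on each body: sum of gravitational pulls, over mass."""
    a = [0.0, 0.0]
    for i in range(2):
        for j in range(2):
            if i != j:
                r = x[j] - x[i]
                # force points toward the other body
                a[i] += G * m[j] * math.copysign(1.0, r) / r**2
    return a

# "find the accelerations, and then update the positions and start over"
for _ in range(100):
    a = accelerations(x)
    for i in range(2):
        v[i] += a[i] * dt
        x[i] += v[i] * dt
```

After a hundred steps the bodies have drifted toward each other, exactly as the verbal recipe says they should.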
Just as an aside, I don’t think PCT is the ultimate solution to modeling humans in their entirety. I think Hawkins’ HTM model is actually a better description of object and pattern recognition in general, but there aren’t any significant conflicts between HTM and PCT, in that both propose very similar hierarchies of control units. The primary difference is that HTM emphasizes memory-based prediction rather than reference-matching, but I don’t see any reason why the same hierarchical units couldn’t do both. (PCT’s model includes localized per-controller memory much like HTM does, and suggests that memory is used to set reference values, in much the same way that HTM describes memory being used to “predict” that an action should be taken.)
The main modeling limitation that I see in PCT is that it doesn’t address certain classes of motivated behavior as well as Ainslie’s model of conditioned appetites. But if you glue Ainslie, HTM, and PCT together, you get pretty decent overall coverage. And HTM/PCT are a very strong engineering model for “how it’s probably implemented”, i.e. HTM/PCT models are more or less the simplest things I can imagine building, that do things the way humans appear to do them. Both models look way too simple to be “intelligence”, but that’s more a reflection of our inbuilt mind-projection tendencies than a flaw in the models!
On a more specific note, though:
“You” don’t track; your brain’s control units do. And they do so in parallel—which is why you can be conflicted. Your reference for going to work every day might be part of the concept “being responsible” or “being on time”, or “not getting yelled at for lateness”, or whatever… possibly more than one.
In order to find out what reference(s) are relevant, you have to do what Powers refers to as “The Test”—that is, selecting hypothesized controlled variables and then disturbing them to see whether they end up being stabilized by your behavior.
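The logic of The Test can be shown in a toy loop. All the numbers here (reference, gain, disturbance size) are invented for illustration; the point is the qualitative signature: if the variable is controlled, the system’s output shifts to cancel your push, and the variable stays near its reference.

```python
reference = 10.0   # hypothesized reference value for the controlled variable
output = 0.0       # the system's action on the variable
gain = 5.0         # loop gain (assumed)
disturbance = 3.0  # our deliberate push on the variable

for _ in range(200):
    perceived = output + disturbance     # variable = action + disturbance
    error = reference - perceived
    output += 0.1 * gain * error         # action adjusts to reduce error

# If the variable is controlled, it returns near the reference despite
# the disturbance -- the output has shifted to "absorb" our push.
print(round(output + disturbance, 2))    # prints 10.0
```

If instead the variable drifted by the full amount of the disturbance, The Test would say it isn’t a controlled variable, and you’d hypothesize a different one.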
In practice, it’s easier with a human, since you can simply imagine NOT going to work, and notice what seems “bad” to you about that, at the somatic response (nonverbal or simple-verbal, System 1) level. Not at all coincidentally, a big chunk of my work has been teaching people to contradict their default responses and then ask “what’s bad about that?” in sequence to identify higher-level control variables for the behavior. (At the time I started doing that, I just didn’t have the terminology to explain what I was doing or why it was helpful.)
Of course, for some people, asking what’s bad about not going to work will produce confusion, because they’re controlling for something good about going to work… like having an exciting job, wanting to get stuff done, etc… but those folks are probably not seeing me about a motivational problem with going to work. ;-)
The best answer to this particular question is the book, Behavior: The Control of Perception. In a way, it’s like a miniature Origin of Species, showing how you can build up from trivial neural control systems to complex behavior… and how these levels more or less match various stages of development in a child’s first year of life. It’s a compelling physical description and hypothesis, not merely an abstract idea like “hey, let’s model humans as control systems.”
The part that I found most interesting is that it provides a plausible explanation for certain functions being widely distributed in the brain, and thereby clarified (for me anyway) some things that were a bit fuzzy or hand-wavy in my own models of how memory, monoidealism, and inner conflict actually work. (My model, being software-oriented, tended to portray the brain as a mostly-unified machine executing a program, whereas PCT shows why this is just an illusion.)
Humans consist of atoms. The statement was obviously about humans being modelable as physical/[digital computation] processes, forgetting all the stuff about intelligence and control.
Er, isn’t that the “program” level of Powers’s model? IOW, his model shows how you can build up from more fundamental control structures to get complex “programs”. See chapters 13-18 of Behavior: The Control Of Perception. (At least in the 2nd edition, which is all I’ve read.)
The freebie at livingcontrolsystems.com, followed by “Behavior: The Control of Perception” and “Freedom From Stress”. I figured I’d balance the deep theory version and the layman’s practical version to get the best span of info in the shortest time.
What is “the freebie at livingcontrolsystems.com”? Are you collectively referring to the introductory papers? Or something more specific? ETA: I plan to read, from that page, the 10 minute intro, neglected phenomenon, and underpinnings to start with.
Btw, thanks for answering my other questions.
No, the “book of readings”, although there appears to be a huge overlap between the book and the papers you linked to.
One of the incredibly unfortunate things about that website is that it spends way too much time talking about what you’ll learn once you understand PCT, compared to how much time it spends actually explaining PCT. OTOH, my guess is that most of the people whose writing is quoted there have already read the key book (Behavior: The Control Of Perception), and find it hard to explain the concepts in a much shorter form. However, there are a few good chapters and a lot of small insights in the other chapters of the “book of readings”, so that by the end I was at least convinced enough to plunk down 35 bucks for the 2nd edition of B:CP (as it’s usually abbreviated).
For my purposes, it really didn’t hurt that my mind was already on ideas rather similar to reference levels, based on some recent change experiences (not to mention some discussions here), so I was quite motivated to dig through the testimonials to find some actual meat.
One of the other really useful bits on the site is Powers’ series of 1979 robotics articles for Byte magazine, which I didn’t find before I bought the book, and which might be an adequate substitute for some portion of the book, if you read all four of them.
One insightful tidbit from the second article:
The above comes just after he painstakingly goes through the discrete math (using BASIC code) to show why the intuitive math for a certain control system is wrong, in that it leads to an unstable system… whereas the simpler, PCT-based approach results in more robust behavior. Another tidbit:
Basically (no pun intended), the articles describe a series of models and simulations (written in BASIC) that demonstrate the basic principles and models of behavior being generated by hierarchies of negative-feedback control, where the output of a “higher” control layer determines the reference values for the “lower” layers, and why this is a far more parsimonious and robust model of what living creatures appear to be doing, than more traditional models.
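The core structural idea—the output of a higher loop serving as the reference for a lower one—fits in a few lines. This is a bare-bones sketch in the spirit of those articles, not a transcription of Powers’ BASIC; the gains and reference value are invented for illustration:

```python
high_ref = 100.0     # what the higher level wants to perceive
high_output = 0.0    # higher loop's output, which IS the lower loop's reference
low_output = 0.0     # lower loop's action on the "world"

for _ in range(500):
    world = low_output                 # the variable the lower loop acts on
    # Lower loop: track the reference handed down from above.
    low_error = high_output - world
    low_output += 0.1 * low_error
    # Higher loop: compare its perception to its own reference, and
    # adjust the reference it sends down to the lower loop.
    high_error = high_ref - world
    high_output += 0.05 * high_error
```

Neither loop “knows” about the other’s internals; the higher one just varies the lower one’s goal, and the stack settles with the world near the top-level reference—the parsimony the articles are arguing for.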