If you’ve chosen the right sort of inactions to reflect on, you’ll realize that you don’t know why you don’t do them. It’s not just that you want to do these things, but don’t; it’s that you don’t know why you don’t.
I can’t come up with anything where I don’t know the reasons why I am not doing the things I have reasons to do. Now, resolving such conflicts, that is another matter. There are techniques, but I’m not cut out to play the personal development guru, and I don’t want to tout any, since what is wanted here is deeper generalizations that will hold everywhere.
ETA: I’ll amplify that a little, as I believe the following is a deep generalisation that does hold everywhere, and there are some references to cite.
Any time you are “somehow” not doing what you want to do, it is because you also want to not do it, or want to do something that conflicts with it. The mysterious feeling of somehowness arises because you are unaware of the conflicting motives. But they are always there, and there are ways of uncovering them, and then resolving the conflicts.
For the theory behind this, see perceptual control theory (of which I have written here before). For the psychotherapeutic practice developed from that, see the Method Of Levels.
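For readers new to PCT, here is a minimal sketch of the kind of loop the theory is built from (a toy in Python, with parameters I made up, not anything from the PCT literature). The point to notice is that the system never senses the disturbance; it simply varies its output to keep a perceived quantity near a reference value.

```python
# A minimal perceptual control loop: the system never senses the
# disturbance directly; it only acts to keep its perception near
# the reference.
reference = 20.0   # desired perception (think: room temperature)
output = 0.0       # the system's action (think: heater setting)
gain = 5.0         # how strongly error drives the output
slowing = 0.1      # integration factor per time step

for t in range(100):
    disturbance = -10.0 if t >= 50 else 0.0  # environment shifts at t = 50
    perception = output + disturbance        # the world adds them together
    error = reference - perception
    output += slowing * gain * error         # leaky-integrator output stage

print(round(perception, 2))  # ~20.0: the perception stays controlled
print(round(output, 2))      # ~30.0: the output mirrors the disturbance
```

Note that the output ends up mirroring the disturbance even though nothing in the loop ever represents the disturbance; only the gap between perception and reference drives behavior.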
After taking a few days to read up on PCT and MOL, here’s my summation:
PCT is the Deep Theory behind mindhacking, hypnosis, and all other forms of self-help or therapy that actually work. It explains monoidealism and ideomotor responses, it explains backsliding, it provides a better conceptual basis for Ainslie’s model of “interests”, and it does an amazing job of explaining and connecting dozens of previously-isolated principles and techniques I’ve taught, and that I learned by hard experience, rather than deriving from a model. It explains the conflict-resolution model I’ve been posting about in the Applied Picoeconomics thread. And just grasping it almost instantly boosted my ability to self-apply many of my own techniques.
Most of the techniques and methods I’ve taught in the past have been effectively on the level of cutting the “wires” between different control systems, treating the actual control systems as fixed invariants. Now, I also see how to connect wires, change the “settings”, and even assemble new control systems.
PCT explains the Work of Byron Katie, the Law of Attraction, a sizable chunk of Tony Robbins, T. Harv Eker, and Michael Hall’s work, and even Robert Fritz’s “structural consulting” model.
I have never seen anything that connects so much, using so little. And every time I think of another previously-isolated model that I teach, like say, how self-conscious awareness is an error correction mechanism, I find how PCT ties that into the overall model, too.
Hell, PCT even explains many phenomena Richard Bandler describes as part of NLP, such as non-linear and paradoxical responses to submodality change, and his saying that “brains go in directions” (seek to establish ongoing constant levels of a value or experience, rather than achieving an external goal and then stopping).
All I can say is, why haven’t you posted MORE about this? Your post about control systems seemed to mainly be an argument against brains having models, but PCT doesn’t demand a lack of models, and in any case it’s obvious that brains do model, and they model predictively as well as reflecting current states. And you didn’t mention any of the things that make PCT actually interesting as a behavioral description in human beings. PCT pretty much explains everything that I would’ve wanted to cover in my post sequence on what akrasia really is and how it works, only from a different angle and a better conceptual connection between the pieces.
Whew.
(Oh, and I almost forgot to mention: by contrast to PCT, MOL barely seems worth the electrons it’s printed with. Many others have described essentially the same thing, with better practical information about how to do it, in more precise, more repeatable ways. The only thing novel is its direct link to PCT, but given that, one can make the same theory link to the other modalities and techniques.)
Wow, you seem pretty satisfied with it. Now, I haven’t done nearly enough reading on any of those topics to dispute anything you’ve said, but, as a poster on LW I’m obligated to check that you haven’t entered an “affective death spiral” by asking the following:
Are there any non-phenomena that PCT can “explain”? That is, could you use PCT to “prove” why certain conceivable things happen, which don’t really happen? Could I e.g. use PCT to prove why thinking hard about whatever I’m procrastinating about will make me motivated to do it, when you already know that doesn’t work?
I’m obligated to check that you haven’t entered an “affective death spiral”
I have to admit, the first bit of PCT literature I read (a sampler of papers and chapters from various PCT books) was a bit off-putting, since most of the first papers seemed a little too self-congratulatory, as if the intended audience were already cult members. Later papers were more informative, enough to convince me to order a couple of the actual books.
Are there any non-phenomena that PCT can “explain”?
I can’t presently imagine how you could do it without distorting the theory. It’d be like trying to equate atheism and amorality. In a sense, PCT is just stimulus-response atheism.
Could I e.g. use PCT to prove why thinking hard about whatever I’m procrastinating about will make me motivated to do it, when you already know that doesn’t work?
It would depend on a far more specific definition of “thinking hard”, and an adequate specification of the other control systems involved in your individual brain. For certain such definitions and specifications, it would work.
To be precise, if “thinking hard” means that you are actually envisioning a specific outcome or actions, linked to a desired reference value, and you do not have any systems that are trying to set common perceptions to match conflicting reference values, then “thinking hard” would work.
This is not the usual definition of “thinking hard”, however, and PCT makes some very specific predictions about inner conflict that essentially say you are 100% screwed unless you fix the conflicts, because we are control systems (i.e. thermostats) “all the way down”.
If it sounds like I’m saying it depends on the individual and some people are screwed, that’s only sort of the case. Everyone can identify when they’re conflicted, and resolve the conflicts in some fashion. Plenty of people have already noticed this and taught it; PCT simply gives a plausible, testable, physical, 100% reductionistic explanation of how our hardware might produce the results we see.
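To make that prediction concrete, here is a toy sketch (my own construction, not from the PCT books) of two individually healthy control loops that share one perception but hold incompatible references:

```python
# Two control loops acting on the same perception with incompatible
# references: PCT's picture of inner conflict.
ref_a, ref_b = 10.0, -10.0       # "do it" vs. "don't do it"
out_a = out_b = 0.0
slowing, gain, limit = 0.1, 5.0, 50.0   # outputs saturate at +/- limit

for _ in range(1000):
    perception = out_a + out_b   # both systems act on the same variable
    out_a += slowing * gain * (ref_a - perception)
    out_b += slowing * gain * (ref_b - perception)
    out_a = max(-limit, min(limit, out_a))   # effort is physically bounded
    out_b = max(-limit, min(limit, out_b))

print(perception)     # ~0.0: neither reference is satisfied
print(out_a, out_b)   # +50.0 and -50.0: both systems at maximum effort
```

Either loop alone would control perfectly; in conflict, both end up at full effort with the perception stuck between the two references, and raising either gain (trying harder) only escalates faster to the same stalemate. That is the sense in which you’re stuck until the conflict itself is resolved from a higher level.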
All I can say is, why haven’t you posted MORE about this? Your post about control systems seemed to mainly be an argument against brains having models, but PCT doesn’t demand a lack of models, and in any case it’s obvious that brains do model, and they model predictively as well as reflecting current states. And you didn’t mention any of the things that make PCT actually interesting as a behavioral description in human beings. PCT pretty much explains everything that I would’ve wanted to cover in my post sequence on what akrasia really is and how it works, only from a different angle and a better conceptual connection between the pieces.
Glad you like it. :-)

My post on models was really a collective reply to comments on one aspect of my original post on PCT. I have been meaning to post more, but haven’t found the time or the inspiration to formulate something substantial yet.

Which of the PCT materials have you been reading?
Wait, what? Your two top-level posts weren’t about anything specific to human Perceptual Control Theory, just background control theory, and the whole time I didn’t really see the point. I was thinking,
“Sure, you can model humans as controllers that receive some reference and track it, just as you can model a human as a set of “if-then” loops, but so what? How would that model do any good compressing our description of how human minds work? By the time you’ve actually described what reference someone is tracking (or even a sub-reference like “sexiness”) and how observations are converted into a format capable of being compared, you’ve already solved the problem.”
I wish I had made the point earlier, but I was waiting for a more explicit application to a problem involving a human, which I assumed you had.
By the time you’ve actually described what reference someone is tracking (or even a sub-reference like “sexiness”) and how observations are converted into a format capable of being compared, you’ve already solved the problem
Yes, and that’s precisely what’s useful. That is, it identifies that to solve anyone’s problems, you need only identify the reference values, and find a way to reorganize the control system to either set new reference values or have another behavior that changes the outside world to cause the new reference to be reached. (This is essentially the same idea as Robert Fritz’s structural consulting, except that Fritz’s model is labeled as being about “decisions” rather than “reference values”.)
The main difference between PCT and other Things That Work is that PCT is a testable scientific hypothesis that includes many specific predictions of functional operations in the brain and nervous system that would reductionistically explain how the various Things That Work do so.
Sure, you can model humans as controllers that receive some reference and track it, just as you can model a human as a set of “if-then” loops, but so what?
The difference is that humans are not like control systems, they are control systems, and are not and cannot be modelled as sets of “if-then” loops, whatever those are supposed to be.
How would that model do any good compressing our description of how human minds work?
That presumes we already have such a description. PCT provides one. It provides the possibility to obtain actual understanding of the matter. Nothing else has yet done that.
And if people are control systems—that is, they vary their actions to obtain their intended perceptions—then that implies that the traditional methods of experimental psychology are invalid. Correlating experimental stimuli and subjects’ responses tells you nothing. Here’s a psychologist writing on this, the late Philip Runkel.
What Vladimir_Nesov said. And,

How would that model do any good compressing our description of how human minds work?
That presumes we already have such a description.
We do: it’s the set of all observations of human behavior. The goal of science (or rationality) is to find ever-simpler ways of explaining (describing) the data. The worst case scenario is to explain the data by simply restating it. A theory allows you to describe past data without simply restating it because it gives you a generative model.
(There’s probably some LW wiki entry or Eliezer_Yudkowsky post I should reference to give more background about what I’m talking about, but I think you get the idea and it’s pretty uncontroversial.)
That was the standard I was holding your post to: does this description of human behavior as “tweaking outputs to track a reference” help at all to provide a concise description of human behavior? Once again, I find myself trying to side-step a definitional dispute with you (over whether humans “are control systems”) by identifying the more fundamental claim you’re making.
Here, your claim is that there’s some epistemic profit from describing human behavior as a control system. I completely agree that it can be done, just like you can describe human behavior with a long enough computer program. But does this approach actually simplify the problem, or just rename it? I am skeptical that it simplifies because, like I said before, any reference-being-tracked in your model must itself have all the answers that you’re trying to use the model for.
We do: it’s the set of all observations of human behavior. The goal of science (or rationality) is to find ever-simpler ways of explaining (describing) the data. The worst case scenario is to explain the data by simply restating it. A theory allows you to describe past data without simply restating it because it gives you a generative model.
(There’s probably some LW wiki entry or Eliezer_Yudkowsky post I should reference to give more background about what I’m talking about, but I think you get the idea and it’s pretty uncontroversial.)
I get the idea and am familiar with it, but I dispute it. There’s a whole lot of background assumptions there to take issue with. Specifically, I believe that:
Besides being consistent with past data, a theory must be consistent with future data as well, data that did not go into making the theory.
Besides merely fitting observations, past and future, a theory must provide a mechanism, a description not merely of what will be observed, but of how the world produces those observations. It must, in short, be a response to clicking the Explain button.
Description length is not a useful criterion for either discovering or judging a theory. Sure, piling up epicycles is bad, but jumping from there to Kolmogorov complexity as the driver is putting the cart before the horse. (Need I add that I do not believe the Hutter Prize will drive any advance in strong AI?)
But having stated my own background assumptions, I shall address your criticisms in terms of yours, to avoid digressing to the meta-level. (I don’t mind having that discussion, but I’d prefer it to be separate from the current thread. I am sure there are LW wiki entries or EY postings bearing on the matter, but I don’t have an index to them etched on the inside of my forehead either.)
Here, your claim is that there’s some epistemic profit from describing human behavior as a control system. I completely agree that it can be done, just like you can describe human behavior with a long enough computer program. But does this approach actually simplify the problem, or just rename it?
This approach actually simplifies the problem. (It also satisfies my requirements for a theory.)
Here, for example (applet on a web page), is a demo of a control task. I actually wanted to cite another control demo, but I can’t find it online. (I am asking around.) This other program fits a control model to the human performance in that task, with only a few parameters. Running the model on the same data presented to the human operator generates a performance correlating very highly with the human performance. It can also tell the difference between different people doing the same task, and the parameters it finds change very little for the same person attempting the task across many years. Just three numbers (or however many it is, it’s something like that) closely fit an individual’s performance on the task, for as long as they perform it. Is that the sort of thing you are asking for?
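I can’t reconstruct that program here, but the following toy sketch (made-up parameters, and a synthetic “subject” standing in for a human) shows the shape of the procedure: generate a tracking run from a one-parameter control model plus noise, recover the parameter from the data, and then see how well the refitted model reproduces the whole run.

```python
import random
random.seed(0)

def run_model(k, targets):
    """One-parameter tracking model: the output chases the target at rate k."""
    out, trace = 0.0, []
    for target in targets:
        out += k * (target - out)   # leaky pursuit of the target
        trace.append(out)
    return trace

# A square-wave target, and a noisy "subject" generated with k = 0.4.
targets = [10.0 if (t // 100) % 2 else -10.0 for t in range(600)]
subject = [x + random.gauss(0, 0.5) for x in run_model(0.4, targets)]

# Recover k by brute-force least squares over a grid.
def sse(k):
    return sum((m - s) ** 2 for m, s in zip(run_model(k, targets), subject))

best_k = min((k / 100 for k in range(1, 100)), key=sse)
model = run_model(best_k, targets)

# Correlation between the one-number model and the full noisy run.
n = len(subject)
ms, mm = sum(subject) / n, sum(model) / n
cov = sum((s - ms) * (m - mm) for s, m in zip(subject, model))
sd_s = sum((s - ms) ** 2 for s in subject) ** 0.5
sd_m = sum((m - mm) ** 2 for m in model) ** 0.5
print(best_k, cov / (sd_s * sd_m))   # k near 0.4, correlation near 0.99
```

Even in the toy, one fitted number reproduces a six-hundred-sample run at a correlation around 0.99, which is the kind of fit being claimed for the real demo.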
Thanks for the detailed reply; I’d like to have the metadiscussion with you, but what exactly would you consider a better place to have it? I’ve had a reply to you on “why mutual information = model” not yet completed, so I guess I could start another top-level post that addresses these issues.
Anyway:
This other program fits a control model to the human performance in that task, with only a few parameters. … Just three numbers (or however many it is, it’s something like that) closely fit an individual’s performance on the task, for as long as they perform it. Is that the sort of thing you are asking for?
Unfortunately, no. It’s not enough to show that humans play some game using a simple control algorithm that happens to work for it. You claimed that human behavior can be usefully described as tweaking output to control some observed variable. What you would need to show, then, is this model applied to behavior for which there are alternate, existing explanations.
For example, how does the controller model fit in with mate selection? When I seek a mate, what is the reference that I’m tracking? How does my sensory data get converted into a format that compares with the reference? What is the output?
I choose this example because it’s an immensely difficult task just to program object recognition. To say that my behavior is explained by trying to track some reference we don’t even know how to define, and by applying an operation to sense data we don’t understand yet, does not look like a simplification!
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
Remember, to say you “want” something or that you “recognize” something hides an immense amount of complexity, which is why I don’t see how it helps to restate these problems as control problems.
Unfortunately, no. It’s not enough to show that humans play some game using a simple control algorithm that happens to work for it.
It doesn’t “just happen” to work. It works for the same reason that, say, a chemist’s description of a chemical reaction works: because the description describes what is actually happening.
Besides, according to the philosophy you expressed, all that matters is compressing the data. A few numbers to compress with high fidelity an arbitrarily large amount of data is pretty good, I would have thought. ETA: Compare how just one number, the local gravitational strength, suffices to predict the path of a thrown rock, given the right theory.
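As a worked version of that comparison, under the textbook idealisation (no air resistance, level ground), one measured number plus the theory yields the whole trajectory:

```python
import math

g = 9.81   # the one number: local gravitational strength, m/s^2
speed, angle = 15.0, math.radians(40.0)   # how the rock is thrown
vx, vy = speed * math.cos(angle), speed * math.sin(angle)

flight_time = 2 * vy / g            # when the rock lands
landing_x = vx * flight_time        # where it lands
peak_height = vy ** 2 / (2 * g)     # how high it gets

print(round(flight_time, 2), round(landing_x, 2), round(peak_height, 2))
# -> 1.97 22.59 4.74
```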
Experiments based on PCT ideas routinely see correlations above 0.99. This is absolutely unheard of in psychology. Editors think results like that can’t possibly be true. But that is the sort of result you get when you are measuring real things. When you are doing real measurements, you don’t even bother to measure correlations, unless you have to talk in the language of people whose methods are so bad that they are always dealing with statistical fog.
You claimed that human behavior can be usefully described as tweaking output to control some observed variable. What you would need to show, then, is this model applied to behavior for which there are alternate, existing explanations.
The alternate, existing explanations are worth no more than alchemical theories of four elements. It’s possible to go back and look at the alchemists’ accounts of their experiments, but there’s really not much point except historical interest. They were asking the wrong questions and making the wrong observations, using wrong theories. Even if you can work out what someone was doing, it isn’t going to cast light on chemistry, only on history.
For example, how does the controller model fit in with mate selection? When I seek a mate, what is the reference that I’m tracking? How does my sensory data get converted into a format that compares with the reference? What is the output?
You’re demanding that the new point of view instantly explain everything. But FWIW, when you seek a mate, the reference is, of course, having a mate. You perceive that you do not have one, and take such steps as you think appropriate to find one. If you want a detailed account right down to the level of nerve impulses of how that all happens—well, anyone who could do that would know how to build a strong AI. Nobody knows that, yet.
A theory isn’t a machine that will give you answers for free. ETA: Newtonian mechanics won’t hand you the answer to the N-body problem on a plate.
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?

See pjeby’s reply. He gets it.
You’re demanding that the new point of view instantly explain everything.
I’m demanding that it explain exactly what you claimed it could explain: behavior!
FWIW, when you seek a mate, the reference is, of course, having a mate. You perceive that you do not have one, and take such steps as you think appropriate to find one. If you want a detailed account right down to the level of nerve impulses of how that all happens—well, anyone who could do that would know how to build a strong AI. Nobody knows that, yet.
Yep, that confirms exactly what I was expecting: you’ve just relabeled the problem; you haven’t simplified it. Your model tells me nothing except “this is what you could do, once you did all the real work in understanding this phenomenon, which you got some other way”.
A theory isn’t a machine that will give you answers for free. ETA: Newtonian mechanics won’t hand you the answer to the N-body problem on a plate.
Poor comparison. Newtonian mechanics doesn’t give me an answer to the general n-body problem, but it gives me more than enough to generate a numerical solution to any specific n-body problem.
Your model isn’t even in the same league. It just says the equivalent of, “Um, the bodies move in a, you know, gravitational-like manner, they figure out where gravity wants them to go, and they bring that all, into effect.”
It feels like an explanation, but it isn’t. The scientific answer would look more like, “The net acceleration any body experiences is equal to the vector sum of the forces on the body obtained from the law of gravitation, divided by its mass. To plot the paths, start with the initial positions and velocities, find the accelerations, and then update the positions and start over.”
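That description is directly implementable. Here is a minimal sketch of the update loop just described, using simple Euler integration and toy values of my own (G = 1 units, a heavy body and a light satellite):

```python
# Minimal n-body integrator: accelerations from the law of gravitation,
# then update velocities and positions, and start over.
G, dt = 1.0, 0.001
bodies = [  # (mass, position [x, y], velocity [vx, vy])
    (1000.0, [0.0, 0.0], [0.0, 0.0]),
    (1.0, [10.0, 0.0], [0.0, 10.0]),   # circular-orbit speed for r = 10
]

for _ in range(10000):
    accelerations = []
    for i, (mi, pi, vi) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, pj, vj) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = pj[0] - pi[0], pj[1] - pi[1]
            r = (dx * dx + dy * dy) ** 0.5
            ax += G * mj * dx / r ** 3   # vector sum of gravitational pulls
            ay += G * mj * dy / r ** 3
        accelerations.append((ax, ay))
    for (m, p, v), (ax, ay) in zip(bodies, accelerations):
        v[0] += ax * dt
        v[1] += ay * dt
        p[0] += v[0] * dt
        p[1] += v[1] * dt

print(bodies[1][1])   # satellite position after 10 time units (~1.6 orbits)
```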
Just as an aside, I don’t think PCT is the ultimate solution to modeling humans in their entirety. I think Hawkins’ HTM model is actually a better description of object and pattern recognition in general, but there aren’t any significant conflicts between HTM and PCT, in that both propose very similar hierarchies of control units. The primary difference is that HTM emphasizes memory-based prediction rather than reference-matching, but I don’t see any reason why the same hierarchical units couldn’t do both. (PCT’s model includes localized per-controller memory much like HTM does, and suggests that memory is used to set reference values, in much the same way that HTM describes memory being used to “predict” that an action should be taken.)
The main modeling limitation that I see in PCT is that it doesn’t address certain classes of motivated behavior as well as Ainslie’s model of conditioned appetites. But if you glue Ainslie, HTM, and PCT together, you get pretty decent overall coverage. And HTM/PCT are a very strong engineering model for “how it’s probably implemented”, i.e. HTM/PCT models are more or less the simplest things I can imagine building, that do things the way humans appear to do them. Both models look way too simple to be “intelligence”, but that’s more a reflection of our inbuilt mind-projection tendencies than a flaw in the models!
On a more specific note, though:
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
“You” don’t track; your brain’s control units do. And they do so in parallel—which is why you can be conflicted. Your reference for going to work every day might be part of the concept “being responsible” or “being on time”, or “not getting yelled at for lateness”, or whatever… possibly more than one.
In order to find out what reference(s) are relevant, you have to do what Powers refers to as “The Test”—that is, selecting hypothesized control variables and then disturbing them to see whether they end up being stabilized by your behavior.
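Here is a toy illustration of The Test, under assumptions of my own: apply a steady push to the variable you suspect is controlled, and see whether behavior cancels it. A controlled variable barely moves; an uncontrolled one simply takes the push.

```python
def probe(step, n=200, push=5.0):
    """Apply a constant disturbance halfway through a run and report
    how much of it remains in the variable at the end."""
    state = action = 0.0
    for t in range(n):
        d = push if t >= n // 2 else 0.0
        state, action = step(state, action, d)
    return state

def controlled(state, action, d):
    # Hypothesis 1: an integrating control loop holds the variable at 0.
    action += 0.3 * (0.0 - state)
    return action + d, action

def passive(state, action, d):
    # Hypothesis 2: nothing controls the variable; the push goes straight in.
    return d, action

print(round(probe(controlled), 3))  # ~0.0: behavior opposes the disturbance
print(round(probe(passive), 3))     # 5.0: no control, so no resistance
```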
In practice, it’s easier with a human, since you can simply imagine NOT going to work, and notice what seems “bad” to you about that, at the somatic response (nonverbal or simple-verbal, System 1) level. Not at all coincidentally, a big chunk of my work has been teaching people to contradict their default responses and then ask “what’s bad about that?” in sequence to identify higher-level control variables for the behavior. (At the time I started doing that, I just didn’t have the terminology to explain what I was doing or why it was helpful.)
Of course, for some people, asking what’s bad about not going to work will produce confusion, because they’re controlling for something good about going to work… like having an exciting job, wanting to get stuff done, etc… but those folks are probably not seeing me about a motivational problem with going to work. ;-)
But does this approach actually simplify the problem, or just rename it?
The best answer to this particular question is the book, Behavior: The Control of Perception. In a way, it’s like a miniature Origin of Species, showing how you can build up from trivial neural control systems to complex behavior… and how these levels more or less match various stages of development in a child’s first year of life. It’s a compelling physical description and hypothesis, not merely an abstract idea like “hey, let’s model humans as control systems.”
The part that I found most interesting is that it provides a plausible explanation for certain functions being widely distributed in the brain, and thereby clarified (for me anyway) some things that were a bit fuzzy or hand-wavy in my own models of how memory, monoidealism, and inner conflict actually work. (My model, being software-oriented, tended to portray the brain as a mostly-unified machine executing a program, whereas PCT shows why this is just an illusion.)
The difference is that humans are not like control systems, they are control systems, and are not and cannot be modelled as sets of “if-then” loops, whatever those are supposed to be.
Humans consist of atoms. The statement was obviously about humans being modelable as physical/[digital computation] processes, forgetting all the stuff about intelligence and control.
The difference is that humans are not like control systems, they are control systems, and are not and cannot be modelled as sets of “if-then” loops, whatever those are supposed to be.
Er, isn’t that the “program” level of Powers’s model? IOW, his model shows how you can build up from more fundamental control structures to get complex “programs”. See chapters 13-18 of Behavior: The Control Of Perception. (At least in the 2nd edition, which is all I’ve read.)
The freebie at livingcontrolsystems.com, followed by “Behavior: The Control of Perception” and “Freedom From Stress”. I figured I’d balance the deep theory version and the layman’s practical version to get the best span of info in the shortest time.
What is “the freebie at livingcontrolsystems.com”? Are you collectively referring to the introductory papers? Or something more specific? ETA: I plan to read, from that page, the 10 minute intro, neglected phenomenon, and underpinnings to start with.

Btw, thanks for answering my other questions.
Are you collectively referring to the introductory papers?
No, the “book of readings”, although there appears to be a huge overlap between the book and the papers you linked to.
One of the incredibly unfortunate things about that website is that it spends way too much time talking about what you’ll learn once you understand PCT, compared to how much time it spends actually explaining PCT. OTOH, my guess is that most of the people whose writing is quoted there have already read the key book (Behavior: The Control Of Perception), and find it hard to explain the concepts in a much shorter form. However, there are a few good chapters and a lot of small insights in the other chapters of the “book of readings”, so that by the end I was at least convinced enough to plunk down 35 bucks for the 2nd edition of B:CP (as it’s usually abbreviated).
For my purposes, it really didn’t hurt that my mind was already on ideas rather similar to reference levels, based on some recent change experiences (not to mention some discussions here), so I was quite motivated to dig through the testimonials to find some actual meat.
One of the other really useful bits on the site is Powers’ 1979 robotics articles for Byte magazine, which I didn’t find before I bought the book, and which might be an adequate substitute for some portion of the book, if you read all four of them.
One insightful tidbit from the second article:
We have now established the fact that using natural logic and following causes and effects around the closed loop as a sequence of events will lead to a wrong prediction of control system behavior. This immediately eliminates three-quarters of what biologists, psychologists, neurologists, and even cyberneticians have published about control theory and behavior. We are just beginning to see that one must view all the variables in a control system as changing together, not one at a time. This is what I mean by retraining the intuition. Cartesian concepts of cause and effect, and Newtonian physics, have trained us to think along directed lines. What we need to do to understand control systems is to learn how to think in circles.
The above came just after he painstakingly goes through the discrete math (using BASIC code) to show why the intuitive math for a certain control system is wrong, in that it leads to an unstable system… whereas the simpler, PCT-based approach results in more robust behavior. Another tidbit:
You will notice that doubling the error sensitivity, which doubles the amount of output generated by a given error, does not double the amount of output that actually occurs. Far from it. When, for any reason, the loop gain goes up, the steady state error simply gets smaller, assuming that the system remains stable. This fact does violence to the popular idea that the brain commands muscles to produce behavior. If that were the case, doubling the sensitivity of a muscle to the nerve signals reaching it ought to produce twice as much muscle tension. Nothing of the sort happens, unless you’ve lopped off the rest of the nervous system, particularly the feedback paths.
Basically (no pun intended), the articles describe a series of models and simulations (written in BASIC) that demonstrate the basic principles and models of behavior being generated by hierarchies of negative-feedback control, where the output of a “higher” control layer determines the reference values for the “lower” layers, and why this is a far more parsimonious and robust model of what living creatures appear to be doing, than more traditional models.
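The second quoted tidbit is easy to check numerically. In the toy proportional loop below (my own parameters, nothing from Powers’ BASIC listings), doubling the error sensitivity shrinks the steady-state error instead of doubling the output, because the output feeds back into the perceived quantity:

```python
def settle(gain, reference=10.0, disturbance=-3.0, steps=2000):
    """Run a proportional control loop to equilibrium and return the
    steady-state output and error."""
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance
        error = reference - perception
        output += 0.05 * (gain * error - output)  # relax toward gain * error
    return output, error

for g in (5.0, 10.0, 20.0):
    out, err = settle(g)
    print(g, round(out, 2), round(err, 2))
# 5.0  -> output 10.83, error 2.17
# 10.0 -> output 11.82, error 1.18  (gain doubled, output up only ~9%)
# 20.0 -> output 12.38, error 0.62  (error roughly halves per doubling)
```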
For the psychotherapeutic practice developed from that, see the Method Of Levels.
Fascinating stuff, there, thanks for the post. It sounds very much like something I’ve been observing recently with myself and certain clients, where a persistent behavioral pattern is being driven by a barely-noticed criterion being checked at a higher level.
That is, when trying to make a decision, there was sort of a “final check” being done in the mind, to check for some obscure criterion like what other people would think of it, or whether it made me look clever enough, or whether I would be “good”. Consciously, there’s only the sensations of hesitation (before the check) and either satisfaction or dissatisfaction afterwards.
Now, I have tools that quickly get rid of things like this once they can be captured in awareness, but I haven’t had a method to reliably detect the presence of one and bring it into debugging scope. If MOL can do that, I will be all over that in a heartbeat.
It’s interesting that the third link you gave describes a process very similar to certain pieces of what I already do, as far as mental observation training, just a little less directly. It also seems that in MOL, there’s an expectation that simple awareness is therapeutic. In this respect it seems somewhat similar to Michael Hall’s meta-states model, in which one is explicitly invited to check the criterion at one level against more global criteria, but in MOL this appears to be implicit.
Hm, oh well, enough rambling. It sounds like the key operator in MOL is to extract the implicit criteria from framing statements—something I don’t do systematically with clients, or at all on myself.
Perceptual control theory sounds interesting. If a person spends half an hour a day either meditating (TM) or brainstorming on higher control levels, which do you think would be more useful (e.g., resulting in higher productivity over a three week period) compared to doing nothing for that half hour?
If a person spends half an hour a day either meditating (TM) or brainstorming on higher control levels
If I understand the links RichardKennaway gave, a “control level” has to become active in order for you to become aware of it—which also matches my experience with other mindhacking techniques. It’s unlikely that brainstorming will do this unless the thing that prompted your brainstorming was, say, a story of someone who does something that reminds you of you.
Meditation might be useful, if it’s awareness-based. That is, if it’s directed towards observing the thoughts that occur as you focus on the specific meditation task.
However, it would probably be most useful for you to meditate, not on a mantra or koan, but on a single, specific situation of interest (not a general category of situations, but one real or imagined incident, happening in present-tense terms), because then you will have the greatest specific activation of relevant control systems.
(In effect, in the analytic stages of mind-hacking, I’m directing clients to repeatedly “meditate” for brief intervals and observe their response to imagined stimuli, and the MOL documents RichardKennaway linked describe an almost-identical process, including a focus on observing thoughts as they occur, and any feelings happening in the body. These are certainly key distinctions in mind hacking, and appear to be in MOL as well.)
If I understand the links RichardKennaway gave, a “control level” has to become active in order for you to become aware of it
It’s active whether you’re aware of it or not. The purpose of MOL is to become aware of things within yourself relevant to the problem but currently outside your awareness. Once you become aware of the conflicting goals (and the higher-level goals for which they are subgoals, and so on, as far as it’s necessary to take it), then you are free to make different choices that eliminate the conflict. According to MOL practitioners, that reorganisation is generally the easiest part of the process. Once the real problem has been uncovered, the client is able to solve it on their own.
We’re using two different meanings of “active”, then. I’m just saying that to become aware of it, you need something that triggers the checking to occur.
According to MOL practitioners, that reorganisation is generally the easiest part of the process. Once the real problem has been uncovered, the client is able to solve it on their own.
I’d imagine so, since that’s basically the same process that occurs in the Work of Byron Katie. Specifically, the parts that ask for “how do you react when you have that thought” and “who would you be without it” seem to be calling for evaluation of one level of control (the “should”) from a higher level of control (the consequences of having that setting). And generally, once you do that, the “problem” just disappears.
I really appreciate the pointer to PCT/MOL, btw. Over the last 24 hours, I’ve been devouring all the material I’ve been able to find, as it’s giving me a unifying view of how certain things fit together, like for example the connection between the Work and Hall’s “executive states” model—a connection I hadn’t seen before. For that matter, some of Tony Robbins’ ideas of “standards” and “values”, and T. Harv Eker’s “wealth thermostat” concepts fit right in.
Even monoidealism and ideodynamics, my own “jedi mind trick” and “pull motivation”, certain aspects of the law of attraction… PCT seems to describe them all, although it seems much easier to get to PCT from those existing things than to develop new things from PCT.
Nonetheless, I intend to do some experimenting to find out how much my methods can be streamlined by focusing on acquiring signal perception and setting high-level reference values directly, rather than operating on lower-level control systems.
In particular, Hall’s executive states model, which previously struck me as vague and superstitious, seems to offer some useful application distinctions for PCT. Or, more precisely, PCT seems like a better explanation for the phenomena he appears to utilize, and his techniques appear to offer ways of rapidly setting up some types of control relationships.
I can’t come up with anything where I don’t know the reasons why I am not doing the things I have reasons to do. Now, resolving such conflicts, that is another matter. There are techniques, but I’m not cut out to play the personal development guru, and I don’t want to tout any, since what is wanted here is
ETA: I’ll amplify that a little, as I believe the following is a deep generalisation that does hold everywhere, and there are some references to cite.
Any time you are “somehow” not doing what you want to do, it is because you also want to not do it, or want to do something that conflicts with it. The mysterious feeling of somehowness arises because you are unaware of the conflicting motives. But they are always there, and there are ways of uncovering them, and then resolving the conflicts.
For the theory behind this, see perceptual control theory (of which I have written here before). For the psychotherapeutic practice developed from that, see the Method Of Levels.
After taking a few days to read up on PCT and MOL, here’s my summation:
PCT is the Deep Theory behind mindhacking, hypnosis, and all other forms of self-help or therapy that actually work. It explains monoidealism and ideomotor responses, it explains backsliding, it provides a better conceptual basis for Ainslie’s model of “interests”, and it does an amazing job of explaining and connecting dozens of previously-isolated principles and techniques I’ve taught, and that I learned by hard experience, rather than deriving from a model. It explains the conflict-resolution model I’ve been posting about in the Applied Picoeconomics thread. And just grasping it almost instantly boosted my ability to self-apply many of my own techniques.
Most of the techniques and methods I’ve taught in the past have been effectively on the level of cutting the “wires” between different control systems, treating the actual control systems as fixed invariants. Now, I also see how to also connect wires, change the “settings”, and even assemble new control systems.
PCT explains the Work of Byron Katie, the Law of Attraction, a sizable chunk of Tony Robbins, T. Harv Eker, and Michael Hall’s work, and even Robert Fritz’s “structural consulting” model.
I have never seen anything that connects so much, using so little. And every time I think of another previously-isolated model that I teach, like say, how self-conscious awareness is an error correction mechanism, I find how PCT ties that into the overall model, too.
Hell, PCT even explains many phenomena Richard Bandler describes as part of NLP, such as non-linear and paradoxical responses to submodality change, and his saying that “brains go in directions” (seek to establish ongoing constant levels of a value or experience, rather than achieving an external goal and then stopping).
All I can say is, why haven’t you posted MORE about this? Your post about control systems seemed to mainly be an argument against brains having models, but PCT doesn’t demand a lack of models, and in any case it’s obvious that brains do model, and they model predictively as well as reflecting current states. And you didn’t mention any of the things that make PCT actually interesting as a behavioral description in human beings. PCT pretty much explains everything that I would’ve wanted to cover in my post sequence on what akrasia really is and how it works, only from a different angle and a better conceptual connection beween the pieces.
Whew.
(Oh, and I almost forgot to mention: by contrast to PCT, MOL barely seems worth the electrons it’s printed with. Many others have described essentially the same thing, with better practical information about how to do it, in more precise, more repeatable ways. The only thing novel is its direct link to PCT, but given that, one can make the same theory link to the other modalities and techniques.)
Wow, you seem pretty satisfied with it. Now, I haven’t done nearly enough reading on any of those topics to dispute anything you’ve said, but, as a poster on LW I’m obligated to check that you haven’t entered an “affective death spiral” by asking the following:
Are there any non-phenomena that PCT can “explain”? That is, could you use PCT to “prove” why certain conceivable things happen, which don’t really happen? Could I e.g. use PCT to prove why thinking hard about whatever I’m procrastinating about will make me motivated to do it, when you already know that doesn’t work?
I have to admit, the first bit of PCT literature I read (a sampler of papers and chapters from various PCT books) was a bit off-putting, since most of the first papers seemed a little too self-congratulatory, as if the intended audience were already cult memebrs. Later papers were more informative, enough to convince me to order a couple of the actual books.
I can’t presently imagine how you could do it without distorting the theory. It’d be like trying to equate atheism and amorality. In a sense, PCT is just stimulus-response atheism.
It would depend on a far more specific definition of “thinking hard”, and an adequate specification of the other control systems involved in your individual brain. For certain such definitions and specifications, it would work.
To be precise, if “thinking hard” means that you are actually envisioning a specific outcome or actions, linked to a desired reference value, and you do not have any systems that are trying to set common perceptions to match conflicting reference values, then “thinking hard” would work.
This is not the usual definition of “thinking hard”, however, and PCT makes some very specific predictions about inner conflict that essentially say you are 100% screwed unless you fix the conflicts, because we are control systems (i.e. thermostats) “all the way down”.
If it sounds like I’m saying it depends on the individual and some people are screwed, that’s only sort of the case. Everyone can identify when they’re conflicted, and resolve the conflicts in some fashion. Plenty of people have already noticed this and taught it, PCT simply gives a plausible, testable, physical, 100% reductionistic explanation of how our hardware might produce the results we see.
Glad you like it. :-)
My post on models was really a collective reply to comments on one aspect of my original post on PCT. I have been mean ing to post more, but haven’t found the time or the inspiration to formulate something substantial yet.
Which of the PCT materials have you been reading?
Wait, what? Your two top-level posts weren’t about anything specific to human Perceptual Control Theory, just background control theory, and the whole time I didn’t really see the point. I was thinking,
“Sure, you can model humans as controllers that receive some reference and track it, just as you can model a human as a set of “if-then” loops, but so what? How would that model do any good compressing our description of how human minds work? By the time you’ve actually described what reference someone is tracking (or even a sub-reference like “sexiness”) and how observations are converted into a format capable of being compared, you’ve already solved the problem.”
I wish I had made the point earlier, but I was waiting for a more explicit application to a problem involving a human, which I assumed you had.
Yes, and that’s precisely what’s useful. That is, it identifies that to solve anyone’s problems, you need only identify the reference values, and find a way to reorganize the control system to either set new reference values or have another behavior that changes the outside world to cause the new reference to be reached. (This is essentially the same idea as Robert Fritz’s structural consulting, except that Fritz’s model is labeled as being about “decisions” rather than “reference values”.)
The main difference between PCT and other Things That Work is that PCT is a testable scientific hypothesis that includes many specific predictions of functional operations in the brain and nervous system that would reductionistically explain how the various Things That Work, do so.
The difference is that humans are not like control systems, they are control systems, and are not and cannot be modelled as sets of “if-then” loops, whatever those are supposed to be.
That presumes we already have such a description. PCT provides one. It provides the possibility to obtain actual understanding of the matter. Nothing else has yet done that.
And if people are control systems—that is, they vary their actions to obtain their intended perceptions—then that implies that the traditional methods of experimental psychology are invalid. Correlating experimental stimuli and subjects’ responses tells you nothing. Here’s a psychologist writing on this, the late Philip Runkel.
What Vladimir_Nesov said. And,
We do: it’s the set of all observations of human behavior. The goal of science (or rationality) is to find ever-simpler ways of explaining (describing) the data. The worst case scenario is to explain the data by simply restating it. A theory allows you describe past data without simply restating it because it gives you a generative model.
(There’s probably some LW wiki entry or Eliezer_Yudkowsky post I should reference to give more background about what I’m talking about, but I think you get the idea and it’s pretty uncontroversial.)
That was the standard I was holding your post to: does this description of human behavior as “tweaking outputs to track a reference” help at all to provide a concise description of human behavior? Once again, I find myself trying to side-step a definitional dispute with you (over whether humans “are control systems”) by identifying the more fundamental claim you’re making.
Here, your claim is that there’s some epistemic profit from describing human behavior as a control system. I completely agree that it can be done, just like you can describe human behavior with a long enough computer program. But does this approach actually simplify the problem, or just rename it? I am skeptical that it simplifies because, like I said before, any reference-being-tracked in your model must itself have all the answers that you’re trying to use the model for.
I get the idea and am familiar with it, but I dispute it. There’s a whole lot of background assumptions there to take issue with. Specifically, I believe that:
Besides being consistent with past data, a theory must be consistent with future data as well, data that did not go into making the theory.
Besides merely fitting observastions, past and future, a theory must provide a mechanism, a description not merely of what will be observed, but of how the world produces those observations. It must, in short, be a response to clicking the Explain button.
Description length is not a useful criterion for either discovering or judging a theory. Sure, piling up epicycles is bad, but jumping from there to Kolmogorov complexity as the driver is putting the cart before the horse. (Need I add that I do not believe the Hutter Prize will drive any advance in strong AI?)
But having stated my own background assumptions, I shall address your criticisms in terms of yours, to avoid digressing to the meta-level. (I don’t mind having that discussion, but I’d prefer it to be separate from the current thread. I am sure there are LW wiki entries or EY postings bearing on the matter, but I don’t have an index to them etched on the inside of my forehead either.)
This approach actually simplifies the problem. (It also satisfies my requirements for a theory.)
Here, for example (applet on a web page), is a demo of a control task. I actually wanted to cite another control demo, but I can’t find it online. (I am asking around.) This other program fits a control model to the human performance in that task, with only a few parameters. Running the model on the same data presented to the human operator generates a performance correlating very highly with the human performance. It can also tell the difference between different people doing the same task, and the parameters it finds change very little for the same person attempting the task across many years. Just three numbers (or however many it is, it’s something like that) closely fits an individual’s performance on the task, for as long as they perform it. Is that the sort of thing you are asking for?
Thanks for the detailed reply; I’d like to have the metadiscussion with you, but what exactly would you consider a better place to have it? I’ve had a reply to you on “why mutual information = model” not yet completed, so I guess I could start another top-level post that addresses these issues.
Anyway:
Unfortunately, no. It’s not enough to show that humans play some game using a simple control algorithm that happens to work for it. You claimed that human behavior can be usefully described as tweaking output to control some observed variable. What you would need to show, then, is this model applied to behavior for which there are alternate, existing explanations.
For example, how does the controller model fit in with mate selection? When I seek a mate, what is the reference that I’m tracking? How does my sensory data get converted into a format that compares with the reference? What is the output?
I choose this example because it’s an immensely difficult task just to program object recognition. To say that my behavior is explained by trying to track some reference we don’t even know how to define, and by applying an operation to sense data we don’t understand yet, does not look like a simplification!
Or in the more general case: what is the default reference that I’m tracking? What am I tracking when I decide to go to work every day, and how do I know I’ve gotten to work?
Remember, to say you “want” something or that “recognize” something hides an immense amount of complexity, which is why I don’t see how it helps to restate these problems as control problems.
It doesn’t “just happen” to work. It works for the same reason that, say, a chemist’s description of a chemical reaction works: because the description describes what is actually happening.
Besides, according to the philosophy you expressed, all that matters in compressing the data. A few numbers to compress with high fidelity an arbitrarily large amount of data is pretty good, I would have thought. ETA: Compare how just one number: local gravitational strength, suffices to predict the path of a thrown rock, given the right theory.
Experiments based on PCT ideas routinely see correlations above 0.99. This is absolutely unheard of in psychology. Editors think results like that can’t possibly be true. But that is the sort of result you get when you are measuring real things. When you are doing real measurements, you don’t even bother to measure correlations, unless you have to talk in the language of people whose methods are so bad that they are always dealing with statistical fog.
The alternate, existing explanations are worth no more than alchemical theories of four elements. It’s possible to go back and look at the alchemists’ accounts of their experiments, but there’s really not much point except historical interest. They were asking the wrong questions and making the wrong observations, using wrong theories. Even if you can work out what someone was doing, it isn’t going to cast light on chemistry, only on history.
You’re demanding that the new point of view instantly explain everything. But FWIW, when you seek a mate, the reference is, of course, having a mate. You perceive that you do not have one, and take such steps as you think appropriate to find one. If you want a detailed acount right down to the level of nerve impulses of how that all happens—well, anyone who could do that would know how to build a strong AI. Nobody knows that, yet.
A theory isn’t a machine that will give you answers for free. ETA: Newtonian mechanics won’t hand you the answer to the N-body problem on a plate.
See pjeby’s reply. He gets it.
I’m demanding that it explain exactly what you claimed it could explain: behavior!
Yep, that confirms exactly what I was expecting: you’ve just relabeled the problem; you haven’t simplified it. Your model tells me nothing except “this is what you could do, once you did all the real work in understanding this phenomenon, which you got some other way”.
Poor comparison. Newtonian mechanics doesn’t give me a an answer to the general n-body problem, but it gives me more than enough to generate a numerical solution to any specific n-body problem.
Your model isn’t even in the same league. It just says the equivalent of, “Um, the bodies move in a, you know, gravitational-like manner, they figure out where gravity wants them to go, and they bring that all, into effect.”
It feels like an explanation, but it isn’t. The scientific answer would look more like, “The net acceleration any body experiences is equal to the vector sum of the forces on the body obtained from the law of gravitation, divided by its mass. To plot the paths, start with the initial positions and velocities, find the accelerations, and then up date the positions and start over.”
Just as an aside, I don’t think PCT is the ultimate solution to modeling humans in their entirety. I think Hawkins’ HTM model is actually a better description of object and pattern recognition in general, but there aren’t any signficant conflicts between HTM and PCT, in that both propose very similar hierarchies of control units. The primary difference is that HTM emphasizes memory-based prediction rather than reference-matching, but I don’t see any reason why the same hierarchical units couldn’t do both. (PCT’s model includes localized per-controller memory much like HTM does, and suggests that memory is used to set reference values, in much the same way that HTM describes memory being used to “predict” that an action should be taken.)
The main modeling limitation that I see in PCT is that it doesn’t address certain classes of motivated behavior as well as Ainslie’s model of conditioned appetites. But if you glue Ainslie, HTM, and PCT together, you get pretty decent overall coverage. And HTM/PCT are a very strong engineering model for “how it’s probably implemented”, i.e. HTM/PCT models are more or less the simplest things I can imagine building, that do things the way humans appear to do them. Both models look way too simple to be “intelligence”, but that’s more a reflection of our inbuilt mind-projection tendencies than a flaw in the models!
On a more specific note, though:
“You” don’t track; your brain’s control units do. And they do so in parallel—which is why you can be conflicted. Your reference for going to work every day might be part of the concept “being responsible” or “being on time”, or “not getting yelled at for lateness”, or whatever… possibly more than one.
In order to find out what reference(s) are relevant, you have to do what Powers refers to as “The Test”—that is, selecting hypothesized control variables and then disturb them to see whether they end up being stabilized by your behavior.
In practice, it’s easier with a human, since you can simply imagine NOT going to work, and notice what seems “bad” to you about that, at the somatic response (nonverbal or simple-verbal, System 1) level. Not at all coincidentally, a big chunk of my work has been teaching people to contradict their default responses and then ask “what’s bad about that?” in sequence to identify higher-level control variables for the behavior. (At the time I started doing that, I just didn’t have the terminology to explain what I was doing or why it was helpful.)
Of course, for some people, asking what’s bad about not going to work will produce confusion, because they’re controlling for something good about going to work… like having an exciting job, wanting to get stuff done, etc… but those folks are probably not seeing me about a motivational problem with going to work. ;-)
The best answer to this particular question is the book, Behavior: The Control of Perception. In a way, it’s like a miniature Origin of Species, showing how you can build up from trivial neural control systems to complex behavior… and how these levels more or less match various stages of development in a child’s first year of life. It’s a compelling physical description and hypothesis, not merely an abstract idea like “hey, let’s model humans as control systems.”
The part that I found most interesting is that it provides a plausible explanation for certain functions being widely distributed in the brain, and thereby clarified (for me anyway) some things that were a bit fuzzy or hand-wavy in my own models of how memory, monoidealism, and inner conflict actually work. (My model, being software-oriented, tended to portray the brain as a mostly-unified machine executing a program, whereas PCT shows why this is just an illusion.)
Humans consist of atoms. The statement was obviously about humans being modelable as physical (or digital-computational) processes, setting aside all the stuff about intelligence and control.
Er, isn’t that the “program” level of Powers’s model? IOW, his model shows how you can build up from more fundamental control structures to get complex “programs”. See chapters 13-18 of Behavior: The Control of Perception. (At least in the 2nd edition, which is all I’ve read.)
The freebie at livingcontrolsystems.com, followed by “Behavior: The Control of Perception” and “Freedom From Stress”. I figured I’d balance the deep theory version and the layman’s practical version to get the best span of info in the shortest time.
What is “the freebie at livingcontrolsystems.com”? Are you collectively referring to the introductory papers? Or something more specific? ETA: I plan to read, from that page, the 10 minute intro, neglected phenomenon, and underpinnings to start with.
Btw, thanks for answering my other questions.
No, the “book of readings”, although there appears to be a huge overlap between the book and the papers you linked to.
One of the incredibly unfortunate things about that website is that it spends way too much time talking about what you’ll learn once you understand PCT, compared to how much time it spends actually explaining PCT. OTOH, my guess is that most of the people whose writing is quoted there have already read the key book (Behavior: The Control of Perception), and find it hard to explain the concepts in a much shorter form. However, there are a few good chapters and a lot of small insights scattered through the rest of the “book of readings”, so that by the end I was at least convinced enough to plunk down 35 bucks for the 2nd edition of B:CP (as it’s usually abbreviated).
For my purposes, it really didn’t hurt that my mind was already on ideas rather similar to reference levels, based on some recent change experiences (not to mention some discussions here), so I was quite motivated to dig through the testimonials to find some actual meat.
One of the other really useful bits on the site is Powers’ set of 1979 robotics articles for Byte magazine, which I didn’t find before I bought the book, and which might be an adequate substitute for some portion of it, if you read all four of them.
One insightful tidbit from the second article:
The above comes just after he painstakingly works through the discrete math (using BASIC code) to show why the intuitive math for a certain control system is wrong, in that it leads to an unstable system… whereas the simpler, PCT-based approach results in more robust behavior. Another tidbit:
Basically (no pun intended), the articles describe a series of models and simulations (written in BASIC) that demonstrate the basic principle of behavior being generated by hierarchies of negative-feedback control, where the output of a “higher” control layer sets the reference values for the “lower” layers, and show why this is a far more parsimonious and robust model of what living creatures appear to be doing than more traditional models.
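To make his stability point concrete, here is my rough Python reconstruction of that kind of demonstration (not his actual BASIC listing; the gains, delay, and disturbance are all illustrative). Computing the output directly as gain-times-error oscillates and explodes once there is even one step of feedback delay, while slowly integrating the error with the same gain settles quietly:

```python
def naive(gain=3.0, steps=12, disturbance=10.0):
    """Output jumps straight to gain * error each step."""
    output = 0.0
    history = []
    for _ in range(steps):
        perception = output + disturbance   # environment, one step of lag
        error = 0.0 - perception            # the reference value is zero
        output = gain * error               # the "intuitive" calculation
        history.append(round(perception, 1))
    return history

def slowed(gain=3.0, slowing=0.1, steps=60, disturbance=10.0):
    """Output accumulates only a small fraction of the error each step."""
    output = 0.0
    history = []
    for _ in range(steps):
        perception = output + disturbance
        error = 0.0 - perception
        output += slowing * gain * error    # slowed (leaky) integration
        history.append(round(perception, 1))
    return history

print(naive())        # 10, -20, 70, -200, ... oscillates and explodes
print(slowed()[-5:])  # tail is pinned near 0: same gain, stable system
```

And here is a similarly hedged sketch of the hierarchical wiring itself (again my own construction): the higher unit never acts on the world; its output simply becomes the reference value handed to the lower unit.

```python
def hierarchy(steps=300, dt=0.05):
    position = 0.0           # what the lower loop senses and moves
    lower_out = 0.0
    higher_out = 0.0
    obstacle = 2.0
    target_distance = 5.0    # higher-level reference: "stay 5 units away"
    for _ in range(steps):
        # The higher loop controls perceived distance from the obstacle...
        distance = position - obstacle
        higher_out += 1.0 * (target_distance - distance) * dt
        # ...not by acting, but by setting the lower loop's reference:
        lower_ref = higher_out
        # The lower loop controls position to match whatever it is handed.
        lower_out += 5.0 * (lower_ref - position) * dt
        position = lower_out
    return position

print(round(hierarchy(), 3))  # ~7.0, i.e. distance ~5 from the obstacle
```

Neither loop “knows” the other’s job; the coordination falls out of the wiring, which is exactly what makes the arrangement feel so parsimonious.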
Fascinating stuff there; thanks for the post. It sounds very much like something I’ve been observing recently with myself and certain clients, where a persistent behavioral pattern is being driven by a barely-noticed criterion being checked at a higher level.
That is, when trying to make a decision, there was sort of a “final check” being done in the mind for some obscure criterion, like what other people would think of it, whether it made me look clever enough, or whether I would be “good”. Consciously, there’s only the sensations of hesitation (before the check) and either satisfaction or dissatisfaction afterwards.
Now, I have tools that quickly get rid of things like this once they can be captured in awareness, but I haven’t had a method to reliably detect the presence of one and bring it into debugging scope. If MOL can do that, I will be all over that in a heartbeat.
It’s interesting that the third link you gave describes a process very similar to certain pieces of what I already do, as far as mental observation training, just a little less directly. It also seems that in MOL, there’s an expectation that simple awareness is therapeutic. In this respect it seems somewhat similar to Michael Hall’s meta-states model, in which one is explicitly invited to check the criterion at one level against more global criteria, but in MOL this appears to be implicit.
Hm, oh well, enough rambling. It sounds like the key operator in MOL is to extract the implicit criteria from framing statements—something I don’t do systematically with clients, or at all on myself.
Perceptual control theory sounds interesting. If a person spends half an hour a day either meditating (TM) or brainstorming on higher control levels, which do you think would be more useful (e.g., resulting in higher productivity over a three week period) compared to doing nothing for that half hour?
If I understand the links RichardKennaway gave, a “control level” has to become active in order for you to become aware of it—which also matches my experience with other mindhacking techniques. It’s unlikely that brainstorming will do this unless the thing that prompted your brainstorming was, say, a story of someone who does something that reminds you of you.
Meditation might be useful, if it’s awareness-based. That is, if it’s directed towards observing the thoughts that occur as you focus on the specific meditation task.
However, it would probably be most useful for you to meditate, not on a mantra or koan, but on a single, specific situation of interest (not a general category of situations, but one real or imagined incident, happening in present-tense terms), because then you will have the greatest specific activation of relevant control systems.
(In effect, in the analytic stages of mind-hacking, I’m directing clients to repeatedly “meditate” for brief intervals and observe their response to imagined stimuli, and the MOL documents RichardKennaway linked describe an almost-identical process, including a focus on observing thoughts as they occur, and any feelings happening in the body. These are certainly key distinctions in mind hacking, and appear to be in MOL as well.)
It’s active whether you’re aware of it or not. The purpose of MOL is to become aware of things within yourself relevant to the problem but currently outside your awareness. Once you become aware of the conflicting goals (and the higher-level goals for which they are subgoals, and so on, as far as it’s necessary to take it), then you are free to make different choices that eliminate the conflict. According to MOL practitioners, that reorganisation is generally the easiest part of the process. Once the real problem has been uncovered, the client is able to solve it on their own.
We’re using two different meanings of “active”, then. I’m just saying that to become aware of it, you need something that triggers the checking to occur.
I’d imagine so, since that’s basically the same process that occurs in the Work of Byron Katie. Specifically, the parts that ask for “how do you react when you have that thought” and “who would you be without it” seem to be calling for evaluation of one level of control (the “should”) from a higher level of control (the consequences of having that setting). And generally, once you do that, the “problem” just disappears.
I really appreciate the pointer to PCT/MOL, btw. Over the last 24 hours, I’ve been devouring all the material I’ve been able to find, as it’s giving me a unifying view of how certain things fit together, like for example the connection between the Work and Hall’s “executive states” model—a connection I hadn’t seen before. For that matter, some of Tony Robbins’ ideas of “standards” and “values”, and T. Harv Eker’s “wealth thermostat” concepts fit right in.
Even monoidealism and ideodynamics, my own “jedi mind trick” and “pull motivation”, certain aspects of the law of attraction… PCT seems to describe them all, although it seems much easier to get to PCT from those existing things than to develop new things from PCT.
Nonetheless, I intend to do some experimenting to find out how much my methods can be streamlined by focusing on acquiring signal perception and setting high-level reference values directly, rather than operating on lower-level control systems.
In particular, Hall’s executive states model, which previously struck me as vague and superstitious, seems to offer some useful application distinctions for PCT. Or, more precisely, PCT seems like a better explanation for the phenomena he appears to utilize, and his techniques appear to offer ways of rapidly setting up some types of control relationships.