Empiricism in NLP: Test-Operate-Test-Exit (TOTE)

Epistemic status: Presentation of an established technique and history. I learned most of my NLP knowledge from Chris Mulzer who’s one of Bandler’s top students. The Origins of Neuro-Linguistic Programming by John Grinder and Frank Pucelik is my main source for the history.
What’s NLP? In 1971 Frank Pucelik and Richard Bandler started teaching Fritz Perls’s Gestalt therapy in a group at the University of California, Santa Cruz, where the two were in a bachelor’s program in psychology. They were joined by John Grinder, an assistant professor of linguistics who had just finished his PhD thesis on the topic of deletions. As a linguist he had worked on projects like modeling the language of the Tanzanian Wagogo tribe in order to communicate with them. He had the idea that if he created a model of how Fritz Perls used language to get the results he got in his Gestalt therapy work, he should be able to achieve the same results.
Just like modeling the customs of the Wagogo tribe, the goal was to copy the linguistic patterns that were present in Perls’s work to be able to achieve the same results. As a side job Bandler was transcribing lectures of the late Fritz Perls, so they had plenty of video material to study. In addition to the videos Grinder could also study Bandler and Pucelik as they were doing their Gestalt work.
Modeling in NLP
Later they described the process they had followed as a five-step description of NLP modeling:
1. Identification of and obtaining access to a model in the context where he or she is performing as a genius.
2. Unconscious uptake of model’s patterns without any attempt to understand them consciously.
3. Practice in a parallel context to replicate the pattern. The intention is to achieve a performance of the model’s patterns which is equal to the model him/herself.
4. Once the modeler can consistently reproduce the pattern in an applied fashion with equal results, the modeler begins the coding process.
5. Testing to determine if the pattern as coded can be transferred successfully to others who will in turn be able to get equally effective results from the coded results … and then, ultimately to teach those processes to others
Every Monday Bandler and Pucelik held their Gestalt group, and on the Thursday afterwards Grinder tried to do the same thing with another group of students. They also recruited additional students to form further study groups. The three led the groups and, by their own account, spent around 30 hours per week engaging in modeling and experimenting. In addition to Fritz Perls they modeled other famous therapists like Milton Erickson and Virginia Satir to learn their ways of interacting with clients. They also modeled people who they believed had changed themselves, like people who had overcome their own phobias.
Ideologically, the NLP developers didn’t like trusting authorities. They were also skeptical of developing elaborate theoretical models that were intended to fully reflect reality. At the time the cybernetics community provided a skeptical and constructivist framework from which the NLP developers took ideas about how to deal with knowledge. They added the P in Neuro-Linguistic Programming to refer to programming in the sense it was thought of at the Biological Computer Laboratory, which was led by Heinz von Foerster, who saw cybernetics as an alternative to science as a way of gathering knowledge.
Test-Operate-Test-Exit (TOTE)
From cybernetics work (via George Miller) they borrowed the concept of Test-Operate-Test-Exit (TOTE). In the classic cybernetics example of the thermostat, the thermostat first measures the temperature and checks whether it’s under the desired level (test). Then it pumps warm water (operate). The thermostat measures the temperature again (test). If the temperature is at the desired level it shuts off (exit); otherwise it goes back to the previous step.
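The thermostat’s control loop can be sketched as a generic TOTE loop (a minimal illustration; the function and variable names here are my own, not NLP terminology):

```python
def tote(test, operate, max_iterations=100):
    """Run a Test-Operate-Test-Exit loop: operate until the test passes."""
    for _ in range(max_iterations):
        if test():          # test: is the goal state reached?
            return True     # exit
        operate()           # operate: act to move toward the goal
    return False            # gave up without reaching the goal


# Thermostat example: heat until the desired temperature is reached.
state = {"temperature": 15.0}
DESIRED = 20.0

def warm_enough():
    return state["temperature"] >= DESIRED

def pump_warm_water():
    state["temperature"] += 1.0  # each operation warms the room a bit

tote(warm_enough, pump_warm_water)
```

The same loop structure works for any goal-directed process: only the test and the operation change.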
In NLP the TOTE model gets used to see whether a technique such as the Fast Phobia Cure works on a patient. Before doing the Fast Phobia Cure the NLP practitioner is supposed to calibrate a test. If a client has a spider phobia and is told to imagine a spider, their body language will react to show fear.
Once the NLP practitioner has a test that confirms that the phobia is there, they do the Fast Phobia Cure. Afterwards they test again with the test they calibrated earlier. If the test still shows the phobia, they know that the Fast Phobia Cure didn’t yet do the job for the client, and they can try again with a slightly different approach. If the test no longer shows a fear response, the Fast Phobia Cure was a success.
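The practitioner’s cycle follows the same TOTE structure, with the calibrated fear response as the test and the technique as the operation. A hypothetical sketch (the simulated client and all names are invented for illustration, not part of any NLP source):

```python
class SimulatedClient:
    """Toy stand-in for a client whose phobia weakens with each treatment."""

    def __init__(self):
        self.fear_level = 3  # how strongly the phobia still triggers

    def shows_fear_response(self):
        # Stand-in for the calibrated test: ask the client to imagine
        # the spider and read their body language.
        return self.fear_level > 0


def run_technique(client):
    # Stand-in for one round of the technique (or a variation of it).
    client.fear_level -= 1


def treat(client, max_attempts=5):
    """Test-Operate-Test-Exit: repeat the technique until the test clears."""
    for _ in range(max_attempts):
        if not client.shows_fear_response():  # test
            return True                       # exit: phobia no longer triggers
        run_technique(client)                 # operate
    return not client.shows_fear_response()  # final test after last attempt
```

The retry branch is the point: a failed test doesn’t end the process, it sends the practitioner back to the operate step with a variation.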
Testing against the perceived body language of a patient has the disadvantage that it depends on the practitioner’s ability to read body language. It tests against a subjective measure instead of an objective one. The advantage is that the feedback cycles are very fast. Fast feedback cycles allow a practitioner to develop practical knowledge faster than the feedback cycles of traditional psychological research.
It’s best practice to add further tests and tell a person who was just treated with the Fast Phobia Cure to face the fear in their own life and report back. It’s not an either/or between testing whether the phobia shows up in situations of daily life and testing whether it can be triggered verbally. Combining fast-feedback tests with more reliable tests that have longer feedback cycles allows more learning to happen.
As rationalists, when we invent a rationality technique, the TOTE paradigm is useful. If we are clear about the desired outcomes, both those that are directly observable and those that only become available over longer time-frames, we can learn more effectively whether our new rationality technique works and how to perform it best.