“How To Tell If You’re Making Shit Up” seems useful. Do you see why this would seem useful to me while “NLP Submodalities” doesn’t?
For the same reason that your and Robin’s writing on biases is more useful than the source material, I imagine. That is, it’s been predigested. It probably also doesn’t hurt that I have to teach “how to tell if you’re making shit up” to every single client of mine, so I have some practice at doing so! (Albeit mostly in real-time interaction.)
FYI, NLP Volume I represents the more detailed “brain software” model from which that summary was derived, which I recommended to you because you said you couldn’t follow my writing.
You can also see why I was excited when Robin started posting about near/far stuff on OB—it fit very nicely into the work I was already doing, and into the NLP presupposition that “conscious verbal responses are to be treated as unsubstantiated rumor unless confirmed by unconscious nonverbal response”—i.e., don’t trust what somebody says about their behavior, because that’s not the system that runs the behavior.
The near/far distinction mainly added an evolutionary explanation that was not part of NLP, and gave a better “why” for not trusting the verbal explanation. Near/far in a literal sense, as in “people respond differently based on distance in space/time/abstraction level of visualization”, has been part of the NLP models for over 20 years now. But once again, the mainstream experiments are just now being done, presumably by people who’ve never heard of NLP, or who assume it’s crackpottery.