The incoming information gets processed through an existing filter that deletes any information that doesn’t fit the paradigm, or mangles the information until it does fit.
By the way, I believe this to be related to the kind of thing that Valentine was trying to point at in the Kensho post:
Imagine you’re in a world where people have literally forgotten how to look up from their cell phones. They use maps and camera functions to navigate, and they use chat programs to communicate with one another. They’re so focused on their phones that they don’t notice most stimuli coming in by other means.
Somehow, by a miracle we’ll just inject mysteriously into this thought experiment, you look up, and suddenly you remember that you can actually just see the world directly. You realize you had forgotten you were holding a cell phone.
In your excitement, you try texting your friend Alex:
YOU: Hey! Look up!
ALEX: Hi! Look up what?
YOU: No, I mean, you’re holding a cell phone. Look up from it!
ALEX: Yeah, I know I have a cell phone.
ALEX: <alex_cell_phone.jpg>
ALEX: If I look up from my phone, I just see our conversation.
YOU: No, that’s a picture of your cell phone. You’re still looking at the phone.
YOU: Seriously, try looking up!
ALEX: Okay…
ALEX: looks up
YOU: No, you just typed the characters “looks up”. Use your eyes!
ALEX: Um… I AM using my eyes. How else could I read this?
YOU: Exactly! Look above the text!
ALEX: Above the text is just the menu for the chat program.
YOU: Look above that!
ALEX: There isn’t anything above that. That’s the top.
ALEX: Are you okay?
You now realize you have a perplexing challenge made of two apparent facts.
First, Alex doesn’t have a place in their mind where the idea of “look up” can land in the way you intend. They are going to keep misunderstanding you.
Second, your only familiar way of interacting with Alex is through text, which seems to require somehow explaining what you mean.
But it’s so obvious! How can it be this hard to convey? And clearly some part of Alex already knows it and they just forgot like you had; otherwise they wouldn’t be able to walk around and use their phone. Maybe you can find some way of describing it to Alex that will help them notice that they already know…?
Or… maybe if you rendezvous with them, you can somehow figure out how to reach forward and just pull their head up? But you’re not sure you can do that; you’ve never used your hands that way before. And you might hurt them. And it seems kind of violating to try.
… my personal impression had been that it’s actually quite easy to see what Looking is and how one might translate it into reductionist third-person perspectives. But my personal experience had been that whenever I tried to share that translation, I’d bounce off of weird walls of misunderstanding. After a while I noticed that the nature of the bounce had a structure to it, and that that structure has self-reference. (Once again, analogies along the lines of “get out of the car” and “look up from your phone” come to mind.) After watching this over several months and running some informal tests, and comparing it to things CFAR has been doing (successfully, strugglingly, or failing to do) over the six years I’ve been there, it became obvious to me that there are some mental structures people run by default that actively block the process of Looking. And for many people, those structures have a pretty strong hold on what they say and consciously think. I’ve learned to expect that explaining Looking to those structures simply will never work. [...]
If I try to argue with a paperclip maximizer about how maximizing paperclips isn’t all there is to life, it will care to listen only to the extent that listening will help it maximize paperclips. I claim that by default, human mind design generates something analogous to a bunch of paperclip maximizers. If I’m stuck talking to one of someone’s paperclip maximizers, then even if I see that there are other parts of their mind that would like to engage with what I’m saying, I’m stuck talking to a chunk of their mind that will never understand what I’m saying.
And later in this comment:
… with the particular difference that he was pointing to a case where the meta-issue involves one’s basic ontology, and where (if I interpret him correctly) he thinks that most people have that meta-issue by default.
Yep, that’s it exactly. I metaphorically yank people’s heads up from their phones for pay. ;-)
Actually, it’s more like I keep sticking my hand in front of the phone until they learn to notice the difference and look up themselves. Fortunately, this doesn’t take too long for most people, but how fast it goes depends on how good they are at fooling me into thinking they’re actually looking up when they’re really still looking at the phone.
The easiest way to detect “looking at the phone” is to ask someone a yes-or-no question and see how long an answer you get. If somebody starts talking about the past or future, they’re not actually paying attention to their inner experience, because inner experience is always present tense. For example, “I see my mom yelling at me” is an experience, while “my mom used to yell at me” is a commentary on experience. Causal chains (x happened because y) are also commentary, as are generalizations.
(Incidentally, this is another reason I dislike IFS’ model: it encourages adding commentary like “a part of me X” instead of just saying “X”, which makes it more difficult to know if what you’re hearing is describing experience, or commentary/abstraction on experience.)
I imagine it would be possible to create a training program for people to recognize these verbal patterns and to then verify their own spoken or written statements using them, but it would be harder to get people to go through such a program vs. paying me to help fix a problem of theirs, and learning it by doing it along the way.
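As a rough illustration of what the "recognize these verbal patterns" step of such a program might look like, here is a minimal sketch of a pattern-matcher for the commentary markers described above (past/future tense, causal chains, generalizations). The marker lists are my own illustrative guesses, not anything proposed in the comment, and real speech would need far more than keyword matching:

```python
import re

# Hypothetical marker lists for the "commentary" patterns described above.
# These are illustrative examples only, not an exhaustive linguistic model.
COMMENTARY_PATTERNS = {
    "past tense": r"\b(used to|was|were|had|did)\b",
    "future tense": r"\b(will|going to|would)\b",
    "causal chain": r"\b(because|since|so that|due to)\b",
    "generalization": r"\b(always|never|usually|every time|tend to)\b",
}

def commentary_markers(statement: str) -> list[str]:
    """Return the names of any commentary patterns found in a statement.

    An empty result is weak evidence that the statement reports
    present-tense experience rather than commentary about experience.
    """
    lowered = statement.lower()
    return [
        name
        for name, pattern in COMMENTARY_PATTERNS.items()
        if re.search(pattern, lowered)
    ]

print(commentary_markers("I see my mom yelling at me"))   # []
print(commentary_markers("My mom used to yell at me"))    # ['past tense']
print(commentary_markers("I flinch because she always yelled"))
# ['causal chain', 'generalization']
```

A tool like this could only flag candidate statements for a human to review; the hard part of the proposed training program is getting people to actually apply the check to their own speech.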
The easiest way to detect “looking at the phone” is to ask someone a yes-or-no question and see how long an answer you get. If somebody starts talking about the past or future, they’re not actually paying attention to their inner experience, because inner experience is always present tense. For example, “I see my mom yelling at me” is an experience, while “my mom used to yell at me” is a commentary on experience. Causal chains (x happened because y) are also commentary, as are generalizations.
This is a really, really good paragraph. You can also watch eyes more closely for the defocus moment. Hypnotists use this one.
a training program for people to recognize these verbal patterns and to then verify their own spoken or written statements using them
Korzybski’s failed ambition. Also similar to the early (good, prior to cult) work of the NLP people on their ‘meta-model’ of map-territory confusions.