I tried it on the same example you proposed: desk-clearing. My desk is a mess; I would quite like it to be less of a mess; clearing it is never a high enough priority to make it happen. But I don’t react to the thought of a clear desk with the “Mmmmmm...” response that you say is necessary for the technique to work.
As for your discussion with Jim: you did not at any point tell him that he didn’t do what you’d told him to, or say anything that implied that; you did say that you think his statements contradict one another (implication: at least one of them is false; implication: you do not believe him). And then when he claimed that what stopped him was apathy and down-prioritizing by “the attention-allocating part of my brain” you told him that that wasn’t really an answer, and your justification for that was that his brain doesn’t really work in the way he said (implication: what he said was false; aliter, you didn’t believe him).
So although you didn’t use the words “I don’t believe him”, you did tell him that what he said couldn’t be correct.
Incidentally, I find your usage of the word “incompatible” as described here so bizarre that it’s hard not to see it as a rationalization aimed at avoiding admitting that you told jimrandomh he’d contradicted himself when in fact all he’d done was to say two things that couldn’t both be true if your model of his mind is correct. However, I’ll take your word for it that you really meant what you say you meant, and suggest that when you’re using a word in so nonstandard a way you might do well to say so at the time.
But I don’t react to the thought of a clear desk with the “Mmmmmm...” response that you say is necessary for the technique to work.
Did you ask yourself what it is that you would enjoy about it if it were already clean? (Again, this is strictly for my information.) Note that the procedure described in the video asks you to wonder about what sorts of qualities would be good if you already had a clean desk, in order to find something you like about the idea enough to generate the feeling of pleasure or relief.
As for your discussion with Jim: you did not at any point tell him that he didn’t do what you’d told him to, or say anything that implied that;
Au contraire, I said:
See this comment for an explanation of why that isn’t actually an answer
That is, I directed him to the “How To Know If You’re Making Shit Up” comment—the comment in which I gave him the directions, and which explained why his utterance was not well-formed.
you did say that you think his statements contradict one another (implication: at least one of them is false; implication: you do not believe him).
This is an awful lot of projection on your part. The contradiction I was pointing to was that he was talking about two different things—the statements were incompatible with a description of the same thing.
That is not anything like the same as “I don’t believe you”; from what Jim said, I don’t even have enough information to believe or not-believe something! Hence, “as far as I can tell” (“AFAICT”), and the request for more information… not unlike my requests for more information from you about what you tried.
“It didn’t work” is not an answer which provides me any information suitable for updating a model, any more than it is for a programmer trying to find a bug. The programmer needs to know at a minimum what you did, and what you got instead of the desired result. (Well, in the software case you also want to know what the desired result was; in this kind of context it can sometimes be assumed.)
And then when he claimed that what stopped him was apathy and down-prioritizing by “the attention-allocating part of my brain” you told him that that wasn’t really an answer,
Because it isn’t one: it’s a made-up explanation, not a description of an experience. See the comment I referred him to.
and your justification for that was that his brain doesn’t really work in the way he said (implication: what he said was false; aliter, you didn’t believe him).
If someone states something that is not a testable hypothesis, how can I “believe” or “disbelieve” it? They are simply speaking nonsense. Unless Jim has a blueprint of his brain with something marked “attention-allocating part”, and an EEG or brain scan showing this activity, how can I possibly assign any truth value to that claim?
In contrast, if Jim presents me with a sensory-specific description of his experience, I have the option of taking him at his word. His experience may be subjective, but it at least is something I can model internally and have a reasonable certainty that I know what he’s talking about.
For example, when a client tells me they have a “feeling”, my minimum criterion is that they can describe it in sensory terms, including its rough location in the body. If they say, “it’s just a feeling”, then I have no information I can actually use. The same goes for a vague description like “I just can’t do it”, or in Jim’s case, “I’m completely unable to begin”.
If you want to make any sort of progress in an art of thinking and behavior, it is necessary to be excruciatingly precise when you talk about the thinking and behavior. Abstract language is dreadfully imprecise, as you can see from the present exchange. However, people routinely use such abstract language while thinking they’re being precise, which is why the first order of business with my clients is breaking through their fuzzy ways of speaking and thinking about their thinking.
Incidentally, I find your usage of the word “incompatible” as described here so bizarre that it’s hard not to see it as a rationalization aimed at avoiding admitting that you told jimrandomh he’d contradicted himself when in fact all he’d done was to say two things that couldn’t both be true if your model of his mind is correct.
That was not “all” he’d done: he also said things that couldn’t both be true if they were talking about the same thing, and that is what I was referring to. I then proceeded on the assumption that there were thus two different things, occurring in succession, one of which I had virtually no information about, only assumptions.
You seem to want me to speak as if I don’t believe my model is true. However, I have enough experience applying that model to enough different people to know that the probability of someone using imprecise language, or not doing precisely what I asked them to do, is significantly higher (by which I mean at least one, maybe two orders of magnitude) than the probability that they are offering me any information that can update my model, let alone falsify it.
That means I need more bits of data about a hypothetically-disconfirming event, than I do about a confirming event… which is why I asked Jim for more information, and why I’ve done the same with you.
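The “more bits” point above can be made concrete with a back-of-the-envelope Bayes calculation. The numbers below are illustrative assumptions of mine, not the author’s (the comment only says “one, maybe two orders of magnitude”):

```python
import math

# Sketch: if the prior odds are 100:1 that a report reflects imprecise
# language rather than genuinely model-disconfirming data, Bayes' rule
# says the report must carry roughly log2(100) bits of evidence just to
# bring the disconfirming hypothesis up to even odds.
prior_odds_against = 100.0
bits_to_even_odds = math.log2(prior_odds_against)
print(f"{bits_to_even_odds:.1f} bits needed to reach even odds")
# With 10:1 prior odds ("one order of magnitude"), it's about 3.3 bits.
```

So under these assumed priors, a single vague report (“it didn’t work”) carries far too little information to move the model, which is consistent with asking for detailed follow-up instead.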
That you are selectively ignoring everything I’m doing to get good information, while simultaneously accusing me of post-hoc rationalization, suggests that it’s your own epistemology that needs a bit more work.
Perhaps you should state in advance what criteria it is that you would like me to meet, so that I don’t have to keep up with a moving target. That is, what evidence would convince you to update?
This discussion is getting waaay too long and distinctly off-topic; but, as briefly as I can manage:
Did you ask yourself [...]
Yes.
[...] accusing me of post-hoc rationalization [...]
No, I did not do that. I said that what you’re doing looks a lot like post-hoc rationalization, but that I’d take your word that it wasn’t. I meant what I said.
what evidence would convince you to update?
I am updating all the time. Lots of things that you’ve said have led to adjustments (both ways) in my estimates for Pr(Philip knows exactly what he’s talking about) and Pr(Philip is an outright charlatan) and the various intermediate possibilities. Perhaps you mean: what evidence would lead to a large upward change for the “better” possibilities? I’m not sure that any single smallish piece of evidence would do that. But how about: some reasonably precise statements explaining key bits of your model, together with some non-anecdotal and publicly available evidence for their correctness.
I think that perhaps the problem here is that we are trying to treat you as a colleague whereas you prefer to treat us as clients. We say “your theories sound interesting; please tell us more about them, and provide some evidence”; you say “well, I want you to do such-and-such, and you have to do exactly what I tell you to”. This is unhelpful because (1) it doesn’t actually answer the question and (2) it is liable to feel patronizing, and people seldom react well to being patronized.
(By “we” it is possible that I really mean “I”, but it looks to me as if there are others who feel the same way.)
But how about: some reasonably precise statements explaining key bits of your model,
There are two modes of thinking. One directly makes you do things, the other one can only do so indirectly. One is based on non-verbal concrete sensory information, the other on verbal and mathematical abstractions.
Verbal abstractions can comment on themselves or on sensory experience, or they can induce sensory experience through the process of self-suggestion—e.g. priming and reading stories are both examples of translating verbal information to the sensory system, to produce emotional responses and/or actions.
More specifically, we make decisions and take action by reference to “feelings” (in the technical definition of physical awareness of the body/mind changes produced by an emotional response).
Feelings (or more precisely, the emotions that generate the feelings) occur in response to predictions made by our brain, using past sensory experience. But because the sensory system does not “understand”, only predict, many of these predictions are based on limited observation, confirmation bias, etc.
When our behavior is not as we expect—when we experience being “blocked”—it is because our conscious verbal/abstract assessment or prediction does not match our sensory-level prediction. We “know” there is no ghost, but run away anyway.
Surfacing the actual sensory prediction allows it to be modified, by comparing it to contradicting sensory evidence, whether real or imagined.
This is the bulk of the portion of my model that relates to treating chronic procrastination, though most of it has further applications.
together with some non-anecdotal and publicly available evidence for their correctness.
You’ll need to define “evidence”. But the parts of what I said above that aren’t part of the experimentally-backed near/far model and the “somatic marker hypothesis” can be investigated in personal experience. And here’s a paper supporting the memory-prediction-emotion-action cycle of my model.
We say “your theories sound interesting; please tell us more about them, and provide some evidence”; you say “well, I want you to do such-and-such, and you have to do exactly what I tell you to”. This is unhelpful because (1) it doesn’t actually answer the question
Actually, it does. I’m trying to tell you how to experience the particular types of experience that demonstrate practical applications of the model given above. Not following instructions won’t produce that result, because you’ll still be using the verbal thinking mode and commenting on your own comments instead of noticing your sensory experience.
My goal is not to define a “true” model of the brain; my goals are about doing useful things with the brain. The model I have exists to serve the results, not the other way around. I already had the model before I heard of “near/far”, “somatic marker hypothesis”, or the “feeling/emotion” model in that paper, so they are merely supporting/confirming results, not what I used to generate the model to start with. I was interested in them because they added interesting or useful details to the model.
(2) it is liable to feel patronizing, and people seldom react well to being patronized.
Actually, I’m handling folks with kid gloves, compared to my students. If Jim were an actual client, there are things he said that I would have cut him off in the middle of, and said, “okay, that’s great, but how about: [repeat question here] Just ask the question, and wait for an answer.”
I usually give people more leeway towards the beginning of a session, and let them finish their ramblings before going on, but I cut it off more and more quickly as the session proceeds… especially if there’s an audience, and they’re thus wasting everyone’s time, not just mine, their own, and the money they’re spending.
I also wouldn’t have bothered to refer Jim to my well-formedness guidelines until after I first got the desired result: i.e., a change to his automatic thought process. Once I had a verified success, only then would it be time to reiterate the different modes of thought, and to point back to how different statements he made did or did not conform to the guidelines.
Since my goal here was to provide information rather than training services—and because this is a public, rather than private, forum—I tilted my responses accordingly. This is not me doing my impression of Eliezer or Jeffreyssai; it’s me bending over backwards to be nice, possibly at the expense of conveying quality information.
The real conflict that I see is that for me, “quality information” means “information you can apply”. Whereas, it seems the prevailing standard on LW (at least for the most-vocal commenters) is that “quality” equals some abstraction about “truth”, that progressively retreats. It’s not enough to be true for one person, it must be true for lots of people. No, all people. No, it has to be all people, even if they don’t follow instructions. No, it has to have had experiments in a journal. No, the experiments can’t just be in support of the NLP model, the paper has to say it’s about NLP, because we can’t be bothered to look at where NLP said the same things 20-30 years ago.
Frankly, I’m beginning to forget why I bothered trying to share any information here in the first place.
Frankly, I’m beginning to forget why I bothered trying to share any information here in the first place.
I think the problem here is that the internet is great when you want to share information with people but is not a consistently good venue for convincing people of something, particularly when the initially least convinced people are self-selecting for interaction with you. Pick your battles, I’d say.
My goal is not to define a “true” model of the brain; my goals are about doing useful things with the brain. The model I have exists to serve the results, not the other way around.
Just to check, you agree that to be useful any model of the brain has to correspond to how the brain actually works? To that extent, you are seeking a true model. However, if I understand you correctly, your model is a highly compressed representation of how the mind works, so it might not superficially resemble a more detailed model. If this is correct, I can empathize with your position here: any practically useful model of the brain has to be highly compressed, but at this high level of compression, accurate models are mostly indistinguishable from bullshit at first glance.
I am still very unsure about the accuracy of what you are propounding, but anecdotally your comments here have been useful to me.
Just to check, you agree that to be useful any model of the brain has to correspond to how the brain actually works?
No, it only has to produce the same predictions that a “corresponding” model would, within the area of useful application.
Note, for example, that the original model of electricity is backwards—Benjamin Franklin guessed that electric charge flowed from the “positive” end of a battery, but we found out later that the electrons actually move the other way ’round.
Nonetheless, this mistake did not keep electricity from working!
Now, let’s compare to the LoA (“Law of Attraction”) people, who claim that there is a mystical law of the universe that causes nice thoughts to attract nice things. This notion is clearly false… and yet some people are able to produce results that make it seem true.
So, while I would prefer to have a “true” model that explains the results (and I think I have a more-parsimonious model that does), this does not stop anyone from making use of the “false” model to produce a result, as long as they don’t allow their knowledge of its falsity to interfere with them using it.
See also dating advice, i.e., “pickup”—some schools of pickup have models of human behavior which may be false, yet still produce results. Others have refined those models to be more parsimonious, and produced improved results.
Yet all the models produce results for some people—most likely the people who devote their efforts to application first, critique second… rather than the other way around.
but at this high level of compression, accurate models are mostly indistinguishable from bullshit at first glance.
A model can actually BE bullshit and still produce valuable results! It’s not that the model is too compressed, it’s that it includes excessive description.
For example, the LoA is bullshit because it’s just a made-up explanation for a real phenomenon. If all the LoA people said was, “look, we found that if we take this attitude and think certain thoughts in a certain way, we experience increased perception of ways to exploit circumstances to meet our goals, and increased motivation to act on these opportunities”, then that would be a compressed model!
NLP is such a model over a slightly different sphere, in that it says, “when we act as if this set of ideas (the presuppositions) are true, we are able to obtain thus-and-such results.” It is more parsimonious than the LoA and pickup models, in that it explicitly disclaims being a direct description of “reality”.
In particular, NLP explicitly says that the state of mind of the person doing things must be taken into account: if you are not willing to commit to acting as-if the presuppositions are true, you will not necessarily obtain the same results. (However, this does not mean you need to believe the presuppositions are true, any more than the actor playing Hamlet on stage needs to believe his father has been murdered!)
Now, I personally do believe that portions of the NLP model, and most of mine, do in fact reflect reality in some way. But I don’t care much whether this is actually the case, or whether that has any bearing on whether the model is useful. It’s clearly useful to me and lots of other people, so it would be irrational for me to worry about whether it’s also “true”.
However, in the event that science discovers that NLP or I have the terminals labeled backwards, I’ll happily update, as I’ve already happily updated whenever any little bit of experimental data offers a better explanation for one of my puzzling edge cases, or a better evolutionary hypothesis for why something works in a certain way, etc.
But I don’t make these updates for the sake of truth, they’re for the sake of useful.
A more convincing evolutionary explanation is useful for my writing, as it gives a better reason for suspending disbelief. Better explanations of certain brain processes (e.g. the memory reconsolidation hypothesis, affective asynchrony, near/far, the somatic marker hypothesis, etc.) are also useful for refining procedural instructions and my explanations for why you have to do something in a particular way for it to work. (E.g., memory reconsolidation studies explain why you need to access a memory to change it—a practical truth I discovered for myself in 2006.)
In a sense, these are less updates to the real model (do X to get Y), and more updates to the story or explanation that surrounds the model. The real model is that “if I act as if these things or something like them are true, and perform these other steps, then these other results reliably occur”.
And that model can’t be updated by somebody else’s experiment. All they can possibly change is the explanation for how I got the results to occur.
Meanwhile, if you’re looking for “the truth”, we don’t have the “real” model of what lies under NLP or hypnosis or LoA or my work, and I expect we won’t have it for at least another decade or two. Reconsolidation has been under study for about a decade now, I believe, likewise the roots of affective asynchrony and the SMH. A few of these are still in the “promising hypothesis, but still needs more support” stage.
But the things they’re trying to describe already exist, whether we have the words yet to describe them or not. And if you have something more important to protect than “truth”, you probably can’t afford to wait another decade or two for the research, any more than you’d wait that long for a reverse engineered circuit diagram before you tried turning on your TV.
If all the LoA people said was, “look, we found that if we take this attitude and think certain thoughts in a certain way, we experience increased perception of ways to exploit circumstances to meet our goals, and increased motivation to act on these opportunities”, then that would be a compressed model!
By the way, the technique given in my thoughts-into-action video is based on extracting precisely the above notion, and reproducing the effect on a small scale, with a short timeframe, and without resorting to mysticism or “quantum physics”.
IOW, the people who successfully used the technique therein have already experienced an “increased perception of ways to exploit the circumstances (of a messy desk) to meet the goal (of a clean one), and increased motivation to act on those opportunities”.
Actually, I’m handling folks with kid gloves, compared to my students.
I didn’t say “nasty”, I said “patronizing”.
Actually, it does. I’m trying to tell you how to experience [...]
If someone tells you that by praying in a particular way anyone can achieve spiritual union with the creator of the universe, and you ask for evidence, it is Not Helpful if they tell you “just try it and see”. (Especially if they add that actually, on past experience, the chances are that if you try it you won’t see because you won’t really be doing it right; and that to do it right you have to suspend your disbelief in what they’re telling you and agree to obey all their instructions. But that’s a separate can of worms.) Because (1) you won’t know for sure whether you really have achieved spiritual union with the creator of the universe (it might just feel that way), and (2) you’ll have discovered scarcely anything about how it works for anyone else. You might be more impressed if they can point to some sort of statistical evidence that shows (say) that people who pray in their preferred way are particularly good at discovering new laws of physics, which they attribute to their intimate connection to the creator of the universe.
More briefly: If someone asks for evidence, then “if you do exactly what I tell you to and suspend disbelief, then you might feel what I say you will” is not answering their question.
[...] that “quality” equals some abstraction about “truth”, that progressively retreats.
I haven’t observed this progressive retreat (it looks more to me like a progressive realisation on your part of what the fussier denizens of LW had wanted all along). But I do have a comment on the last step you described—“the paper has to say it’s about NLP”. For anyone who isn’t a professional psychologist, neurologist, cognitive scientist, or whatever, determining whether (and how far) a paper like Damasio’s supports your claims is a decidedly nontrivial business. (It’s easy to verify that some similar words crop up in somewhat-similar contexts, but that’s not the same.) Whereas, if a paper says “Our findings provide strong confirmation for the wibbling hypothesis of NLP” and what you’re saying is “I accept the wibbling hypothesis as described in NLP texts”, that makes it rather easier to get a handle on how much evidence the research actually gives for your claims.
(In the present case, unfortunately but quite reasonably Google Books only lets me read bits of Damasio’s paper. I have basically no idea to what extent it confirms your underlying model of human cognition, and even less of whether it offers any support for the conclusions you draw from it about how to improve one’s own mind.)
Frankly, I’m beginning to forget why I bothered trying to share any information in the first place.
What, because one or two people haven’t found what you’ve said useful, and have said so? That seems a bit extreme.
If someone tells you that by praying in a particular way anyone can achieve spiritual union with the creator of the universe, and you ask for evidence, it is Not Helpful if they tell you “just try it and see”.
I think this is a little unfair. Extending the Mormon Wednesday discussion, I didn’t take my church leader’s suggestions to “read the Book of Mormon and pray about it” because, in retrospect, I had an extremely low prior probability that my thoughts could be communicated to a divine being who would respond to them with warm fuzzies.
I don’t think pjeby’s claims that practicing certain mental states/self hypnosis (I’m unclear on exactly what he is advocating) can influence our subconscious are that implausible. That doesn’t mean his theories are right, but they seem plausible enough that even the weak evidence of self-experimentation might say something about them.
I don’t think pjeby’s claims that practicing certain mental states/self hypnosis (I’m unclear on exactly what he is advocating) can influence our subconscious
I’m suggesting that priming, suggestion, hypnosis, NLP, placebo effects, creative visualization and a host of other psychological and new-age phenomena are ALL functions of the near/far divide, relying on a single precondition that might be called “suspension of disbelief”.
Or more precisely, refraining from verbal overshadowing—or something that’s suspiciously close to being able to be described that way.
From an evolutionary POV, you might say my hypothesis is that verbal overshadowing actually evolved in a “persuasion arms race”, specifically as an anti-persuasion defense, to prevent others from verbally exploiting our exposed unconscious processes.
IOW, if simple language evolved first, and was hooked directly to the “near” process (because that’s all there was), then it could be exploited by others—we would be “gullible” or “suggestible”. We would then evolve more sophisticated verbal intelligence, both to better exploit others, and to better defend ourselves.
Unfortunately, while this arguably gave rise to “intelligence” and “consciousness” as we know them, it also means that we’re cut off from being able to exploit our own near systems, unless we learn how to shut off the shields long enough to put stuff in (or take stuff out, change it, etc.).
Most self-help material consists of elaborate explanations to convince people to let down the shields by believing that what they say is true. However, in truth it is only necessary to not engage in disbelieving—to not shoot down the incoming data, whether it’s being provided by one’s self, the therapist or hypnotist, or something you read in a book.
However, instead of “truth” as a guide for what you install in the near system, one should use usefulness, since it is entirely possible to believe different things in the two systems without conflict.
I consider the near system to basically be a robot that I program for my own use, so I can feel free to exploit its beliefs based on what results I, the programmer, wish to accomplish. (And NLP offers a nice set of rules that can be used in place of “truth” as a guide for what “robot” beliefs are hygienic, vs. ones likely to lead to malfunction or undesired results.)
(Whee! I’m getting the explanation shorter! Practice, FTW! Too bad this particular explanation leans heavily on prior knowledge of at least priming, near/far, and verbal overshadowing, and lightly on pickup, suspension of disbelief, and the like. So in its bare form, it’s only really useful for a regular LW reader. But an improvement nonetheless.)
Well, I am genuinely appreciative of your attempts to explain, whether they are getting through or not.
Actually, I should be thanking you and the other people I’ve been replying to, because I just realized what pure gold I ended up with. I didn’t actually realize I had an implicit synthesis of the entire self-help field on my hands; in fact, I never consciously synthesized it before. And when I was telling my wife about it this evening, the ramifications of what should be possible under this simplified model hit me like a ton of bricks.
And it was the questions that Vladimir Nesov, gjm, Vladimir Golovin and others asked—about the techniques, the model, the self-help field in general, the similarities—combined with sprocket’s post about “A/B” thinking that primed me with the right context to put it all together in a tightly integrated way. The refined model makes everything make a whole lot more sense to me—failures and successes alike. (For example, I now have an idea of why certain “affirmation” techniques are likely to work better than others, for some people.)
As soon as I get some rest, I have some things I want to try. Because if this more-unified model is indeed “less wrong” than my previous one, I just “levelled up” in my art. Frackin’ awesome! I think my massive investment of time here is actually going to pay off.
But whether it enables me to do anything new or not, this revision is still a big step forward in simplified communication regarding what I already do. So either way...
Thank you, LWers, I couldn’t have done it without you!
And it was the questions that Vladimir Nesov, gjm, Vladimir Golovin and others asked—about the techniques, the model, the self-help field in general, the similarities—combined with sprocket’s post about “A/B” thinking that primed me with the right context to put it all together in a tightly integrated way. The refined model makes everything make a whole lot more sense to me—failures and successes alike. (For example, I now have an idea of why certain “affirmation” techniques are likely to work better than others, for some people.)
Hmmm… I wish you well, but usually this kind of revelation, when put into writing and left to dry on a shelf for a couple of weeks, reveals itself as much less wonderful than it originally seemed to be. Although usually it’s also a step forward, even if in the direction opposite to where you were walking before.
One might get the opposite impression, but in fact I am too. One reason why I keep whingeing at Philip is that his style of presentation makes it very difficult to tell where he is on the charlatan-to-expert spectrum, and that wouldn’t bother me if I didn’t think there was at least a chance that he’s near the expert end.
What, because one or two people haven’t found what you’ve said useful, and have said so? That seems a bit extreme.
No, because the amount of time I’ve spent attempting to communicate these things might have been better spent teaching more people who actually need the information badly enough to jump at the chance to apply it, and whose primary criterion for the quality of the information is whether it helps them.
The only thing that makes it a tossup is that here, I’m forced to search for better and better metaphors, and more compact ways to communicate things… which is good practice/feedback for certain parts of the book I’m currently writing. But given my current inability to quantify the effects of that practice, versus the easily measurable time spent and the equivalent number of words towards a finished book, the tradeoff doesn’t look so good.
That is not anything like the same as “I don’t believe you”; from what Jim said, I don’t even have enough information to believe or not-believe something! Hence, “as far as I can tell” (“AFAICT”), and the request for more information… not unlike my requests for more information from you about what you tried.
“It didn’t work” is not an answer which provides me any information suitable for updating a model, any more than it is for a programmer trying to find a bug. The programmer needs to know at a minimum what you did, and what you got instead of the desired result. (Well, in the software case you also want to know what the desired result was; in this kind of context it can sometimes be assumed.)
Because it isn’t one: it’s a made-up explanation, not a description of an experience. See the comment I referred him to.
If someone states something that is not a testable hypothesis, how can I “believe” or “disbelieve” it? They are simply speaking nonsense. Unless Jim has a blueprint of his brain with something marked “attention-allocating part” and he has an EEG or brain scan to show this activity, how can I possibly assign any truth value to that claim?
In contrast, if Jim presents me with a sensory-specific description of his experience, I have the option of taking him at his word. His experience may be subjective, but it at least is something I can model internally and have a reasonable certainty that I know what he’s talking about.
For example, when a client tells me they have a “feeling”, my minimum criterion is that they can describe it in sensory terms, including its rough location in the body. If they say, “it’s just a feeling”, then I have no information I can actually use. The same goes for a vague description like “I just can’t do it”, or in Jim’s case, “I’m completely unable to begin”.
If you want to make any sort of progress in an art of thinking and behavior, it is necessary to be excruciatingly precise when you talk about the thinking and behavior. Abstract language is dreadfully imprecise, as you can see from the present exchange. However, people routinely use such abstract language while thinking they’re being precise, which is why the first order of business with my clients is breaking through their fuzzy ways of speaking and thinking about their thinking.
That was not “all” he’d done: he also said things that couldn’t both be true if they were talking about the same thing, and that is what I was referring to. I then proceeded on the assumption that there were thus two different things, occurring in succession, one of which I had virtually no information about, only assumptions.
You seem to want me to speak as if I don’t believe my model is true. However, I have enough experience applying that model to enough different people to know that the probability of someone using imprecise language or not doing precisely what I asked them to do is significantly higher (by at least one, maybe two, orders of magnitude) than the probability that they are offering me any information that can update my model, let alone falsify it.
That means I need more bits of data about a hypothetically disconfirming event than about a confirming event… which is why I asked Jim for more information, and why I’ve done the same with you.
That you are selectively ignoring everything I’m doing to get good information, while simultaneously accusing me of post-hoc rationalization, suggests that it’s your own epistemology that needs a bit more work.
Perhaps you should state in advance what criteria it is that you would like me to meet, so that I don’t have to keep up with a moving target. That is, what evidence would convince you to update?
This discussion is getting waaay too long and distinctly off-topic; but, as briefly as I can manage:
Yes.
No, I did not do that. I said that what you’re doing looks a lot like post-hoc rationalization, but that I’d take your word that it wasn’t. I meant what I said.
I am updating all the time. Lots of things that you’ve said have led to adjustments (both ways) in my estimates for Pr(Philip knows exactly what he’s talking about) and Pr(Philip is an outright charlatan) and the various intermediate possibilities. Perhaps you mean: what evidence would lead to a large upward change for the “better” possibilities? I’m not sure that any single smallish-sized piece of evidence would do that. But how about: some reasonably precise statements explaining key bits of your model, together with some non-anecdotal and publicly available evidence for their correctness.
I think that perhaps the problem here is that we are trying to treat you as a colleague whereas you prefer to treat us as clients. We say “your theories sound interesting; please tell us more about them, and provide some evidence”; you say “well, I want you to do such-and-such, and you have to do exactly what I tell you to”. This is unhelpful because (1) it doesn’t actually answer the question and (2) it is liable to feel patronizing, and people seldom react well to being patronized.
(By “we” it is possible that I really mean “I”, but it looks to me as if there are others who feel the same way.)
There are two modes of thinking. One directly makes you do things, the other one can only do so indirectly. One is based on non-verbal concrete sensory information, the other on verbal and mathematical abstractions.
Verbal abstractions can comment on themselves or on sensory experience, or they can induce sensory experience through the process of self-suggestion—e.g. priming and reading stories are both examples of translating verbal information to the sensory system, to produce emotional responses and/or actions.
More specifically, we make decisions and take action by reference to “feelings” (in the technical definition of physical awareness of the body/mind changes produced by an emotional response).
Feelings (or more precisely, the emotions that generate the feelings) occur in response to predictions made by our brain, using past sensory experience. But because the sensory system does not “understand”, only predict, many of these predictions are based on limited observation, confirmation bias, etc.
When our behavior is not as we expect—when we experience being “blocked”—it is because our conscious verbal/abstract assessment or prediction does not match our sensory-level prediction. We “know” there is no ghost, but run away anyway.
Surfacing the actual sensory prediction allows it to be modified, by comparing it to contradicting sensory evidence, whether real or imagined.
This is the bulk of the portion of my model that relates to treating chronic procrastination, though most of it has further applications.
You’ll need to define “evidence”. But the parts of what I said above that aren’t part of the experimentally-backed near/far model and the “somatic marker hypothesis” can be investigated in personal experience. And here’s a paper supporting the memory-prediction-emotion-action cycle of my model.
Actually, it does. I’m trying to tell you how to experience the particular types of experience that demonstrate practical applications of the model given above. Not following instructions won’t produce that result, because you’ll still be using the verbal thinking mode and commenting on your own comments instead of noticing your sensory experience.
My goal is not to define a “true” model of the brain; my goals are about doing useful things with the brain. The model I have exists to serve the results, not the other way around. I already had the model before I heard of “near/far”, “somatic marker hypothesis”, or the “feeling/emotion” model in that paper, so they are merely supporting/confirming results, not what I used to generate the model to start with. I was interested in them because they added interesting or useful details to the model.
Actually, I’m handling folks with kid gloves, compared to my students. If Jim were an actual client, there are things he said that I would have cut him off in the middle of, and said, “okay, that’s great, but how about: [repeat question here] Just ask the question, and wait for an answer.”
I usually give people more leeway towards the beginning of a session, and let them finish their ramblings before going on, but I cut it off more and more quickly as the session proceeds… especially if there’s an audience, and they’re thus wasting everyone’s time, not just mine, their own, and the money they’re spending.
I also wouldn’t have bothered to refer Jim to my well-formedness guidelines until after I first got the desired result: i.e., a change to his automatic thought process. Once I had a verified success, only then would it be time to reiterate the different modes of thought, and point back to how different statements he made did or did not conform to the guidelines.
Since my goal here was to provide information rather than training services—and because this is a public, rather than private forum—I tilted my responses accordingly. This is not me doing my impression of Eliezer or Jeffreysai; it’s me bending over backwards to be nice, possibly at the expense of conveying quality information.
The real conflict that I see is that for me, “quality information” means “information you can apply”. Whereas, it seems the prevailing standard on LW (at least for the most-vocal commenters) is that “quality” equals some abstraction about “truth”, that progressively retreats. It’s not enough to be true for one person, it must be true for lots of people. No, all people. No, it has to be all people, even if they don’t follow instructions. No, it has to have had experiments in a journal. No, the experiments can’t just be in support of the NLP model, the paper has to say it’s about NLP, because we can’t be bothered to look at where NLP said the same things 20-30 years ago.
Frankly, I’m beginning to forget why I bothered trying to share any information here in the first place.
I think the problem here is that the internet is great when you want to share information with people but is not a consistently good venue for convincing people of something, particularly when the initially least convinced people are self-selecting for interaction with you. Pick your battles, I’d say.
Just to check, you agree that to be useful any model of the brain has to correspond to how the brain actually works? To that extent, you are seeking a true model. However, if I understand you correctly, your model is a highly compressed representation of how the mind works, so it might not superficially resemble a more detailed model. If this is correct, I can empathize with your position here: any practically useful model of the brain has to be highly compressed, but at this high level of compression, accurate models are mostly indistinguishable from bullshit at first glance.
I am still very unsure about the accuracy of what you are propounding, but anecdotally your comments here have been useful to me.
No, it only has to produce the same predictions that a “corresponding” model would, within the area of useful application.
Note, for example, that the original model of electricity is backwards—Benjamin Franklin thought the electrons flowed from the “positive” end of a battery, but we found out later it was the other way ’round.
Nonetheless, this mistake did not keep electricity from working!
Now, let’s compare to the LoA people, who claim that there is a mystical law of the universe that causes nice thoughts to attract nice things. This notion is clearly false… and yet some people are able to produce results that make it seem true.
So, while I would prefer to have a “true” model that explains the results (and I think I have a more-parsimonious model that does), this does not stop anyone from making use of the “false” model to produce a result, as long as they don’t allow their knowledge of its falsity to interfere with them using it.
See also dating advice, i.e., “pickup”—some schools of pickup have models of human behavior which may be false, yet still produce results. Others have refined those models to be more parsimonious, and produced improved results.
Yet all the models produce results for some people—most likely the people who devote their efforts to application first, critique second… rather than the other way around.
A model can actually BE bullshit and still produce valuable results! It’s not that the model is too compressed, it’s that it includes excessive description.
For example, the LoA is bullshit because it’s just a made-up explanation for a real phenomenon. If all the LoA people said was, “look, we found that if we take this attitude and think certain thoughts in a certain way, we experience increased perception of ways to exploit circumstances to meet our goals, and increased motivation to act on these opportunities”, then that would be a compressed model!
NLP is such a model over a slightly different sphere, in that it says, “when we act as if this set of ideas (the presuppositions) are true, we are able to obtain thus-and-such results.” It is more parsimonious than the LoA and pickup people, in that it explicitly disclaims being a direct description of “reality”.
In particular, NLP explicitly says that the state of mind of the person doing things must be taken into account: if you are not willing to commit to acting as-if the presuppositions are true, you will not necessarily obtain the same results. (However, this does not mean you need to believe the presuppositions are true, any more than the actor playing Hamlet on stage needs to believe his father has been murdered!)
Now, I personally do believe that portions of the NLP model, and most of mine, do in fact reflect reality in some way. But I don’t care much whether this is actually the case, or that it has any bearing on whether the model is useful. It’s clearly useful to me and lots of other people, so it would be irrational for me to worry about whether it’s also “true”.
However, in the event that science discovers that NLP or I have the terminals labeled backwards, I’ll happily update, as I’ve already happily updated whenever any little bit of experimental data offers a better explanation for one of my puzzling edge cases, or a better evolutionary hypothesis for why something works in a certain way, etc.
But I don’t make these updates for the sake of truth, they’re for the sake of useful.
A more convincing evolutionary explanation is useful for my writing, as it gives a better reason for suspending disbelief. Better explanations of certain brain processes (e.g. the memory reconsolidation hypothesis, affective asynchrony, near/far, the somatic marker hypothesis, etc.) are also useful for refining procedural instructions and my explanations for why you have to do something in a particular way for it to work. (e.g., memory reconsolidation studies explain why you need to access a memory to change it—a practical truth I discovered for myself in 2006.)
In a sense, these are less updates to the real model (do X to get Y), and more updates to the story or explanation that surrounds the model. The real model is that “if I act as if these things or something like them are true, and perform these other steps, then these other results reliably occur”.
And that model can’t be updated by somebody else’s experiment. All they can possibly change is the explanation for how I got the results to occur.
Meanwhile, if you’re looking for “the truth”, we don’t have the “real” model of what lies under NLP or hypnosis or LoA or my work, and I expect we won’t have it for at least another decade or two. Reconsolidation has been under study for about a decade now, I believe, likewise the roots of affective asynchrony and the SMH. A few of these are still in the “promising hypothesis, but still needs more support” stage.
But the things they’re trying to describe already exist, whether we have the words yet to describe them or not. And if you have something more important to protect than “truth”, you probably can’t afford to wait another decade or two for the research, any more than you’d wait that long for a reverse engineered circuit diagram before you tried turning on your TV.
By the way, the technique given in my thoughts-into-action video is based on extracting precisely the above notion, and reproducing the effect on a small scale, with a short timeframe, and without resorting to mysticism or “quantum physics”.
IOW, the people who successfully used the technique therein have already experienced an “increased perception of ways to exploit the circumstances (of a messy desk) to meet the goal (of a clean one), and increased motivation to act on those opportunities”.
I didn’t say “nasty”, I said “patronizing”.
If someone tells you that by praying in a particular way anyone can achieve spiritual union with the creator of the universe, and you ask for evidence, it is Not Helpful if they tell you “just try it and see”. (Especially if they add that actually, on past experience, the chances are that if you try it you won’t see because you won’t really be doing it right; and that to do it right you have to suspend your disbelief in what they’re telling you and agree to obey all their instructions. But that’s a separate can of worms.) Because (1) you won’t know for sure whether you really have achieved spiritual union with the creator of the universe (it might just feel that way), and (2) you’ll have discovered scarcely anything about how it works for anyone else. You might be more impressed if they can point to some sort of statistical evidence that shows (say) that people who pray in their preferred way are particularly good at discovering new laws of physics, which they attribute to their intimate connection to the creator of the universe.
More briefly: If someone asks for evidence, then “if you do exactly what I tell you to and suspend disbelief, then you might feel what I say you will” is not answering their question.
I haven’t observed this progressive retreat (it looks more to me like a progressive realisation on your part of what the fussier denizens of LW had wanted all along). But I do have a comment on the last step you described—“the paper has to say it’s about NLP”. For anyone who isn’t a professional psychologist, neurologist, cognitive scientist, or whatever, determining whether (and how far) a paper like Damasio’s supports your claims is a decidedly nontrivial business. (It’s easy to verify that some similar words crop up in somewhat-similar contexts, but that’s not the same.) Whereas, if a paper says “Our findings provide strong confirmation for the wibbling hypothesis of NLP” and what you’re saying is “I accept the wibbling hypothesis as described in NLP texts”, that makes it rather easier to get a handle on how much evidence the research actually gives for your claims.
(In the present case, unfortunately but quite reasonably Google Books only lets me read bits of Damasio’s paper. I have basically no idea to what extent it confirms your underlying model of human cognition, and even less of whether it offers any support for the conclusions you draw from it about how to improve one’s own mind.)
What, because one or two people haven’t found what you’ve said useful, and have said so? That seems a bit extreme.
I think this is a little unfair. Extending the Mormon Wednesday discussion, I didn’t take my church leader’s suggestions to “read the Book of Mormon and pray about it” because, in retrospect, I had an extremely low prior probability that my thoughts could be communicated to a divine being who would respond to them with warm fuzzies.
I don’t think pjeby’s claims that practicing certain mental states/self-hypnosis (I’m unclear on exactly what he is advocating) can influence our subconscious are that implausible. That doesn’t mean his theories are right, but they seem plausible enough that even the weak evidence of self-experimentation might say something about them.
I’m suggesting that priming, suggestion, hypnosis, NLP, placebo effects, creative visualization and a host of other psychological and new-age phenomena are ALL functions of the near/far divide, relying on a single precondition that might be called “suspension of disbelief”.
Or more precisely, refraining from verbal overshadowing—or something that’s suspiciously close to being able to be described that way.
From an evolutionary POV, you might say my hypothesis is that verbal overshadowing actually evolved in a “persuasion arms race”, specifically as an anti-persuasion defense, to prevent others from verbally exploiting our exposed unconscious processes.
IOW, if simple language evolved first, and was hooked directly to the “near” process (because that’s all there was), then it could be exploited by others—we would be “gullible” or “suggestible”. We would then evolve more sophisticated verbal intelligence, both to better exploit others, and to better defend ourselves.
Unfortunately, while this arguably gave rise to “intelligence” and “consciousness” as we know them, it also means that we’re cut off from being able to exploit our own near systems, unless we learn how to shut off the shields long enough to put stuff in (or take stuff out, change it, etc.).
Most self-help material consists of elaborate explanations to convince people to let down the shields by believing that what they say is true. However, in truth it is only necessary to not engage in disbelieving—to not shoot down the incoming data, whether it’s being provided by one’s self, the therapist or hypnotist, or something you read in a book.
However, instead of “truth” as a guide for what you install in the near system, one should use usefulness, since it is entirely possible to believe different things in the two systems without conflict.
I consider the near system to basically be a robot that I program for my own use, so I can feel free to exploit its beliefs based on what results I, the programmer, wish to accomplish. (And NLP offers a nice set of rules that can be used in place of “truth” as a guide for what “robot” beliefs are hygienic, vs. ones likely to lead to malfunction or undesired results.)
(Whee! I’m getting the explanation shorter! Practice, FTW! Too bad this particular explanation leans heavily on prior knowledge of at least priming, near/far, and verbal overshadowing, and lightly on pickup, suspension of disbelief, and the like. So in its bare form, it’s only really useful for a regular LW reader. But an improvement nonetheless.)
Well, I am genuinely appreciative of your attempts to explain, whether they are getting through or not.
Actually, I should be thanking you and the other people I’ve been replying to, because I just realized what pure gold I ended up with. I didn’t actually realize I had an implicit synthesis of the entire self-help field on my hands; in fact, I never consciously synthesized it before. And when I was telling my wife about it this evening, the ramifications of what should be possible under this simplified model hit me like a ton of bricks.
And it was the questions that Vladimir Nesov, gjm, Vladimir Golovin and others asked—about the techniques, the model, the self-help field in general, the similarities—combined with sprocket’s post about “A/B” thinking that primed me with the right context to put it all together in a tightly integrated way. The refined model makes everything make a whole lot more sense to me—failures and successes alike. (For example, I now have an idea of why certain “affirmation” techniques are likely to work better than others, for some people.)
As soon as I get some rest, I have some things I want to try. Because if this more-unified model is indeed “less wrong” than my previous one, I just “levelled up” in my art. Frackin’ awesome! I think my massive investment of time here is actually going to pay off.
But whether it enables me to do anything new or not, this revision is still a big step forward in simplified communication regarding what I already do. So either way...
Thank you, LWers, I couldn’t have done it without you!
Hmmm… I wish you well, but usually this kind of revelation, when put into writing and left to sit on a shelf for a couple of weeks, reveals itself as much less wonderful than it originally seemed to be. Although usually it’s also a step forward, even if in the direction opposite to where you were walking before.
One might get the opposite impression, but in fact I am too. One reason why I keep whingeing at Philip is that his style of presentation makes it very difficult to tell where he is on the charlatan-to-expert spectrum, and that wouldn’t bother me if I didn’t think there was at least a chance that he’s near the expert end.
No, because the amount of time I’ve spent attempting to communicate these things might have been better spent teaching more people who actually need the information badly enough to jump at the chance to apply it, and whose primary criterion for the quality of the information is whether it helps them.
The only thing that makes it a tossup is that here, I’m forced to search for better and better metaphors, and more compact ways to communicate things… which is good practice/feedback for certain parts of the book I’m currently writing. But given my current inability to quantify the effects of that practice, versus the easily measurable time spent and the equivalent number of words towards a finished book, the tradeoff doesn’t look so good.