Practical Advice Backed By Deep Theories
Once upon a time, Seth Roberts took a European vacation and found that he started losing weight while drinking unfamiliar-tasting caloric fruit juices.
Now suppose Roberts had not known, and never did know, anything about metabolic set points or flavor-calorie associations—all this high-falutin’ scientific experimental research that had been done on rats and occasionally humans.
He would have posted to his blog, “Gosh, everyone! You should try these amazing fruit juices that are making me lose weight!” And that would have been the end of it. Some people would have tried it, it would have worked temporarily for some of them (until the flavor-calorie association kicked in) and there never would have been a Shangri-La Diet per se.
The existing Shangri-La Diet is visibly incomplete—for some people, like me, it doesn’t seem to work, and there is no apparent reason for this, nor any logic that would have predicted it. But the reason why as many people have benefited as they have—the reason why there was more than just one more blog post describing a trick that seemed to work for one person and didn’t work for anyone else—is that Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist.
One of the pieces of advice on OB/LW that was frequently cited as the most important thing learned, was the idea of “the bottom line”—that once a conclusion is written in your mind, it is already true or already false, already wise or already stupid, and no amount of later argument can change that except by changing the conclusion. And this ties directly into another oft-cited most important thing, which is the idea of “engines of cognition”, minds as mapping engines that require evidence as fuel.
Suppose I had merely written one more blog post that said, “You know, you really should be more open to changing your mind—it’s pretty important—and oh yes, you should pay attention to the evidence too.” It would not have been as useful. Not just because it would have been less persuasive, but because the actual operations would have been much less clear without the explicit theory backing them up. What constitutes evidence, for example? Is it anything that seems like a forceful argument? Having an explicit probability theory and an explicit causal account of what makes reasoning effective makes a large difference in the forcefulness and implementational details of the old advice to “Keep an open mind and pay attention to the evidence.”
It is also important to realize that causal theories are much more likely to be true when they are picked up from a science textbook than when invented on the fly—it is very easy to invent cognitive structures that look like causal theories but are not even anticipation-controlling, let alone true.
This is the signature style I want to convey from all those posts that entangled cognitive science experiments and probability theory and epistemology with the practical advice—that practical advice actually becomes practically more powerful if you go out and read up on cognitive science experiments, or probability theory, or even materialist epistemology, and realize what you’re seeing. This is the brand that can distinguish LW from ten thousand other blogs purporting to offer advice.
I could tell you, “You know, how much you’re satisfied with your food probably depends more on the quality of the food than on how much of it you eat.” And you would read it and forget about it, and the impulse to finish off a whole plate would still feel just as strong. But if I tell you about scope insensitivity, and duration neglect and the Peak/End rule, you are suddenly aware in a very concrete way, looking at your plate, that you will form almost exactly the same retrospective memory whether your portion size is large or small; you now possess a deep theory about the rules governing your memory, and you know that this is what the rules say. (You also know to save the dessert for last.)
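To make the Peak/End point concrete, here is a minimal toy sketch (my own illustrative numbers, not anything from the cited studies) of a retrospective rating that averages only the peak moment and the final moment, and is therefore blind to portion size:

```python
# Toy model of the Peak/End rule: remembered pleasantness tracks the average
# of the peak moment and the last moment, largely ignoring duration.
# The bite-by-bite ratings below are made up for illustration.
def peak_end_memory(moment_ratings):
    """Retrospective rating under the Peak/End rule: mean of peak and end."""
    return (max(moment_ratings) + moment_ratings[-1]) / 2

small_portion = [7, 8, 9, 8]              # four bites, ending on dessert
large_portion = [7, 8, 9, 6, 5, 5, 5, 8]  # twice the food, same peak and same end

print(peak_end_memory(small_portion))  # 8.5
print(peak_end_memory(large_portion))  # 8.5 -- same memory, double the calories
```

On this toy model the large and small portions leave identical memories; only the peak and the last bite matter, which is also why saving the dessert for last pays off.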
I want to hear how I can overcome akrasia—how I can have more willpower, or get more done with less mental pain. But there are ten thousand people purporting to give advice on this, and for the most part, it is on the level of that alternate Seth Roberts who just tells people about the amazing effects of drinking fruit juice. Or actually, somewhat worse than that—it’s people trying to describe internal mental levers that they pulled, for which there are no standard words, and which they do not actually know how to point to. See also the illusion of transparency, inferential distance, and double illusion of transparency. (Notice how “You overestimate how much you’re explaining and your listeners overestimate how much they’re hearing” becomes much more forceful as advice, after I back it up with a cognitive science experiment and some evolutionary psychology?)
I think that the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms—thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up. And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.
Note the grade of increasing difficulty in citing:
- Concrete experimental results (for which one need merely consult a paper, hopefully one that reported p < 0.01, because p < 0.05 may fail to replicate; see the sketch after this list)
- Causal accounts that are actually true (which may be most reliably obtained by looking for the theories that are used by a majority within a given science)
- Math validly interpreted (on which I have trouble offering useful advice, because so much of my own math talent is intuition that kicks in before I get a chance to deliberate)
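As a rough illustration of that replication point, here is a small simulation sketch with made-up parameters (two-group t-tests, a 0.6-SD true effect present in 10% of tested hypotheses): findings published at a stricter threshold contain fewer false positives and replicate more often.

```python
# Illustrative simulation (assumed parameters, not from the post): stricter
# original p-thresholds leave fewer false positives among "published" findings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, EFFECT, TRUE_RATE, TRIALS = 30, 0.6, 0.10, 50_000

def study_p(is_real: bool) -> float:
    """p-value from one simulated two-group comparison."""
    control = rng.normal(0.0, 1.0, N)
    treated = rng.normal(EFFECT if is_real else 0.0, 1.0, N)
    return stats.ttest_ind(control, treated).pvalue

for alpha in (0.05, 0.01):
    real_flags, replications = [], []
    for _ in range(TRIALS):
        is_real = rng.random() < TRUE_RATE
        if study_p(is_real) < alpha:                      # original "published" finding
            real_flags.append(is_real)
            replications.append(study_p(is_real) < 0.05)  # independent replication attempt
    print(f"original p < {alpha}: {np.mean(real_flags):.0%} real effects, "
          f"{np.mean(replications):.0%} replicate at p < 0.05")
```

With these invented numbers, the p < 0.01 findings are real more often and replicate noticeably more often; the exact figures depend entirely on the assumed base rate of true hypotheses and the statistical power.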
If you don’t know who to trust, or you don’t trust yourself, you should concentrate on experimental results to start with, move on to thinking in terms of causal theories that are widely used within a science, and dip your toes into math and epistemology with extreme caution.
But practical advice really, really does become a lot more powerful when it’s backed up by concrete experimental results, causal accounts that are actually true, and math validly interpreted.
The thing is, it can take a long time before the deep theory supporting a given piece of practical advice is discovered and understood. Moving forward through trial and error can give faster and equally effective results.
If you look at human history you will find several examples, like the making of steel, where practical procedures were discovered through massive experimentation centuries before the theoretical basis needed to understand them existed.
This comment is, I think, an essential counterbalance to the post’s valid points. To expand a little, the book Good Calories, Bad Calories by Gary Taubes argues that bad nutritional recommendations were adopted by leading medical and then governmental associations, partly justified by the above advice (we need recommendations to help people now, can’t wait for full testing). So someone could refer to this as an example of why the comment above is dangerous in areas that are harder to test than the efficacy of steel production (which I presume they knew worked better than other procedures, whereas some nutritional effects have long-term consequences that aren’t clear, or it’s not clear which component of the recommendation is affecting what). However, Taubes also shows that this was used to justify overlooking flaws in the evidence, and he points to a group heuristic bias (if that’s the right term) of information cascades. There are other biases and failures of rationality (how certain statistical evidence was interpreted) in the story as well. So all this to say: while trial and error can give faster and equally effective results, the less clear the measurement of the results, the more care is required in interpreting them. When stated, it sounds obvious and I almost feel dumb for saying it, yet it’s one of those rules honored more in the breach, as they say. In the field of nutrition, you’ll have headlines that say “Meat causes cancer” based on a study that points to a small statistical correlation between two diets which have very many differences other than type and amount of meat, and which itself concludes that more studies are called for to examine possible links between meat and cancer, but not the other possible causes that are just as much pointed to by the study.
The harm didn’t come from “leading medical and then governmental associations” adopting recommendations before they were proven, it came from them holding to those recommendations when the evidence had turned.
I probably would have voted this comment up had it been formatted more nicely. A lot of your point was lost on me because of the single large paragraph.
In my comment I wasn’t thinking particularly about nutrition. Regarding bad nutritional recommendations (and health recommendations in general), they may also be the consequence of studies. The thing is, when will we ever be done with the “full testing”? Science is constantly improving, and in the future we will probably be horrified by some of the things we do now that will later be proven to be wrong.
The best thing we can do is to be careful and prepared to update swiftly on new evidence.
It seems to me that many people don’t realize that math results have to be validly interpreted in order to be compelling. LOTS of bad thinking by smart people tends to involve sloppiness in the interpretation of the math. Aumann was prone to this problem, and so are people thinking about his agreement theorem.
This may be pointing at a bias that I don’t have a name for—the belief that the pathway between a possible cause-effect pair can be neglected.
It’s believing that all you need is the right laws, without having to pay attention to how they’re enforced. It’s believing that if you are the right sort of person, your life will automatically work well. It’s believing that more education will lead to a more prosperous society without having ways for people to apply what they know.
“Roberts knew the experimental science that let him interpret what he was seeing, in terms of deep factors that actually did exist.”
Are these the same kinds of deep factors that show that watching talking heads on TV in the morning will cure insomnia because “Anthropological research suggests that early humans had lots of face-to-face contact every morning”? - Roberts’ solution for insomnia as described in the NYT: http://www.nytimes.com/2005/09/11/magazine/11FREAK.html
Watching life-sized talking heads in the morning is Roberts’ way of lifting his spirits, not his cure for insomnia.
OK, but it’s still merely a ‘just-so’ story with no worthwhile evidence behind it.
So far as the Shangri-La Diet is concerned, a boring explanation for the weird pattern of strong success, partial success, and utter failure is that biology is complicated.
There’s a little about the biological basis for hunger and satiety in Gina Kolata’s Rethinking Thin: The New Science of Weight Loss—and the Myths and Realities of Dieting. IIRC, there was only one chapter about hormones, and it was written for a popular audience. I skimmed it anyway, and don’t remember the details.
I doubt Seth’s evolutionary explanation, though I wouldn’t mind a little research on whether success with his diet is correlated with food neophilia and/or food neophobia.
http://sethroberts.net/science/ is totally unconvincing. The main promoter of the diet doesn’t seem to have any decent evidence that it works.
Lacking evidence, it seems like another fad diet, whose most obvious purpose is to sell diet books by telling people what they desperately want to hear—that they can diet and lose weight—while still eating whatever they like.
To me, it looks like junk science that distracts people from advice that might actually help them.
The graph of Roberts’s weight compared to fructose water intake on p. 73 of “What makes food fattening?” is very persuasive in my mind. I don’t think there is any evidence that it is effective in the population at large, but I think it is clear cut that it worked for Roberts.
I don’t think the cynical explanation gets very far. The details of the diet are freely available. There is only a single, cheap, slim book that Roberts published so that someone could learn about the diet in a format other than his website. Roberts could easily be mistaken, but I think his tone has consistently been “here is a little-known, easy technique that was highly effective for me; I have a theory why it could work for you too”. It’s hard to make money by telling someone to take three tablespoons of extra-light olive oil a day in addition to whatever other diet they are following.
One rat is just not statistically significant evidence—especially not when the rat is also the salesman. I don’t know whether Roberts is motivated by wealth, fame, or whatever—nor do I care very much.
Many tests on the same rat can be statistically significant! Do X, and Y changes in the rat. Undo it, and Y changes back. Repeat until the connection is statistically certain...
We just have no particular reason to expect that it’ll generalize well to others.
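A minimal sketch of the arithmetic behind that reversal argument (my own illustrative numbers, not Roberts’ data): under the null hypothesis that X does nothing, Y is equally likely to move either way at each switch, so k consistent reversals out of k has a one-sided probability of (1/2)^k.

```python
# Sign test for an ABAB-style reversal design in a single subject.
# Illustrative only; the reversal counts below are made up.
from scipy import stats

def reversal_p_value(consistent: int, total: int) -> float:
    """One-sided p-value: probability of >= `consistent` hits out of `total`
    switches if each switch were a fair coin flip (i.e., X has no effect)."""
    return stats.binomtest(consistent, total, p=0.5, alternative="greater").pvalue

for k in (3, 5, 8):
    print(f"{k}/{k} reversals in the predicted direction: p = {reversal_p_value(k, k):.4f}")
# 3/3 -> 0.1250, 5/5 -> 0.0312, 8/8 -> 0.0039
```

As noted above, though, significance within one subject says nothing about whether the effect generalizes to anyone else.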
This really stands out to me as a physicist, because we do things like one-rat tests all the time. Well, usually we get a few other ‘rats’, but we rely heavily on the notion that identically prepared matter is… identical. Biology, of course, doesn’t allow that shortcut.
Clinicians sometimes have a cohort of 1 for rare diseases… but of course that’s simply the best they can do under the circumstances.
True—but it won’t be too convincing if you’re experimenting on yourself with your own diet. Science is based on confirmations of experiments by other scientists.
The rat being the salesman is the more serious issue there, yes.
I agree that the theory is unconvincing. Roberts seems to argue that organisms have a brain-regulated mechanism which forces them to eat more if food is more easily available. Such behaviour could be beneficial because during famines supplies would be depleted later, but the explanation smells of group selection—I suppose that especially during famines the individual who eats as much as possible and stores that as fat will have a great advantage over more modest members of his group, not to mention other species. Am I missing something?
Pop evo-psych stories are a marketing strategy for diets, not a real reason to follow one. Look at the paleo diet—which apparently promotes the ancestral state of malnourishment and dehydration, on the basis of an evo-psych story.
Diets are best evaluated by testing them, not by telling memorable stories about their origins.
Why evo-psych? Psychology has nothing to do with that.
Diets are, of course, evaluated by testing, but Roberts goes further and offers an explanation of his diet, and whether this explanation is consistent from an evolutionary perspective is a relevant question.
Or, in my view, not as far, by promoting an almost totally-untested diet.
Yes—the cost of gathering the food. Roberts’s hypothesis is that if food is not plentiful, it’s counterproductive to be so hungry that you burn a lot of calories looking for more food, versus sitting tight and drawing on your fat stores. Conversely, if food is plentiful, you’d be an idiot not to go get as much as you can handle.
What if you trust the author? In that case, perhaps it’s a more efficient use of your time to have the author “just tell you what to do”.
Derek Sivers thinks so—https://sivers.org/2do.
I think that there is a certain level of abstraction for which advice is most effective. The level of abstraction most people use is obviously way too high, but getting into experimental results and math seems to be too low a level of abstraction. The chain of logical steps that link experiments/math to advice is long, and I think below the level of consciousness.
Kurt Lewin, speaking about psychological theories in particular: “There is nothing so practical as a good theory.”
Actually, speaking as somebody who’s done this, what I can tell you is that a huge number of the experimenters get stuff wrong in their models and conclusions, because their terminology is at cross-purposes to what’s really happening.
NLP, on the other hand, actually does have a vocabulary that matches the territory, but one that has been largely unexplored by experimental psychology, in much the way that hypnosis has had limited study. The catch in both is that you need a skilled operator to observe or produce many of the phenomena in question, because people differ in surface characteristics that have to be bypassed before you get to the similarities.
NLP’s rep-systems and strategies models actually do have the necessary vocabulary and “behavioral calculus” to discuss subjective experience, and in particular the parts needed to get past surface dissimilarities in processing.
I suggest “Neuro-linguistic Programming, Volume I”, by Dilts et al, as an introduction for the theory-minded. A brief excerpt:
Another:
IOW, if you’re looking for a vocabulary, run, don’t walk, to get that book. It is generally considered the least-successful/popular book on NLP ever written, for precisely the same reason I’m recommending it to you: it’s full of math, big words, and attempts at being precise.
(It is almost 30 years old, btw, so it shouldn’t be considered the latest or greatest. There are a LOT of things in it that have been supplanted by more streamlined methods. However, the key underlying model of sensory representation strategy sequences (both in and out of consciousness) is just as valid today. There are just a lot more things known today about how we code things in those sensory representations, and how to obtain information about them, install new representations, etc.)
This post was, to some extent, directed particularly at you. It would seem that you haven’t taken my advice… I wish I knew of some good experimental results to back it up, as this would render it less ignorable.
What you’re talking about above is not a concrete experimental result. Neither is it a standard causal theory, nor is it a causal theory that strikes me as particularly likely to be true in the absence of experimental validation. Nor is it valid math validly interpreted, or logic that seems necessarily true across lawful possible worlds. I don’t care if it works for you and for other people you know; that doesn’t show anything about the truth of the model; there’s this thing called a placebo effect. The advice fails to meet the standard we’re accustomed to, and that’s why we’re ignoring it. It is just one more theory on the Internet at this point, and one more set of orders delivered in a confident tone but not explained well enough to interpret at all, really.
I’m relieved to read this Eliezer, because I thought it was just me who perceived pjeby’s advice as misguided.
I’ve been whining at him for a while, though my complaint isn’t so much that his advice is misguided, as that he keeps offering pronouncements about how the mind works and how to make it work better, but evidence that his model and methods are sound seems sorely lacking (here, at least).
...much of which has come attached with things that are actually possible to investigate and test on your own, and a few people have actually posted comments describing their results, positive or negative. I’ve even pointed to bits of research that support various aspects of my models.
But if you’re allergic to self-experimentation, have a strong aversion to considering the possibility that your actions aren’t as rational as you’d like to think, or just don’t want to stop and pay attention to what goes on in your head, non-verbally… then you really won’t have anything useful to say about the validity or lack thereof of the model.
I think it’s very interesting that so far, nobody has opposed anything I’ve said on the grounds that they tested it, and it didn’t work.
What they’ve actually been saying is, they don’t think it’s right, or they don’t think it will work, or that NLP has been invalidated, or ANYTHING at all other than: “I tried thus-and-such using so-and-so procedure, and it appears that my results falsify this-or-that portion of the model you are proposing.”
In a community of self-professed rationalists, I find that very interesting. Not as interesting, mind you, as I would an actual result falsifying a portion of my model, though.
Because that, I would actually LEARN something from. I could try and replicate the person’s result, offer other things to try, or maybe even update my model. It does happen, pretty regularly—and the updates are almost equally likely to come from:
- more-or-less mainstream psych and popularizations thereof,
- pop, new age, or NLP stuff,
- self-experimentation, and
- unexpected events in client work
A recent mainstream psych example would be Dweck’s fixed/growth mindsets model, which I’ve now converted to a more specific model for change work that I call “or”/”more” thinking.
That is, a belief that “either I do this OR I fail”—a digital control variable of avoidance—is less useful than one where “the MORE I do this the more/closer I get”: an analog variable under your control.
This is a much finer-grained distinction than my older notion that didn’t include discrete/continuous, but focused strictly on the approach/avoidance aspect of the variables. It’s also a more narrowly-focused understanding of the difference than Dweck’s work, which speaks more about the effects of these mindsets than the mechanism of them, or how to change that mechanism in practice.
So now that I have this distinction, I’ve gone back and reviewed other things I’ve read that tie into this idea in one way or another, giving it more depth. That is, I can look at other discussions of “naturally successful” behavior, hypnotic techniques or NLP submodality techniques that link an increase in one thing to an increase in another, and so on.
In particular, I’ve found various techniques by Richard Bandler that describe how certain successful athletes and entertainers he worked with transformed “or” variables into “more” variables (although he didn’t use those terms).
I’m now in the process of self-experimenting with some of those techniques, preparatory to selecting ones to add to my personal and training repertoire.
That, more or less, is my method for model refinement: read about ideas, try ideas, figure out what works, update models, find relevant techniques, try techniques w/self, w/clients, get ideas about what other ideas might be worth investigating, rinse and repeat.
Is it “the scientific method”? Probably not. Is it closer to the scientific method than the “I read something or believe something that means that won’t work, but can’t be bothered to tell whether it’s the same thing” approach favored by some folks? Hell yeah.
Btw, that attitude is why every new self-help author or guru has to come up with new names for every damn thing: the old names get worn out by people who conclude they already “know” what that thing is, because their brother told them something about something like that once and it sounded kind of like something else they tried that didn’t work.
Yet century-old techniques work fine, if you actually know how to do them, and you actually DO them. But surprisingly few people ever actually try, let alone try with all their might, in the “shut up and do the impossible” sense.
I am unable to make enough sense of what you say to try it. It is not written in a language I can read.
And that’s not a criticism I have a problem with. Hell, if you actually tried something and it didn’t work, and you gave me enough information to be able to tell what you did and what result you got instead, that would be excellent criticism, in my book.
Helpful criticism is helpful, and always welcomed, at least by me.
Why shouldn’t you?
I don’t understand. Why should I have a problem with Eliezer’s criticism, or any considered criticism or honest opinion? It is only ignorant criticism and anti-applause lights that I have a problem with.
Well, that’s an ambiguity in the interpretation of “having a problem with something”. I (mis)interpreted your statement to mean “this kind of criticism doesn’t bother me”, that is, you are not going to change anything in yourself in response, which would be unhealthy, whereas you seem to have intended it to say “this kind of criticism doesn’t offend me”.
I’m allergic to self-experimentation. I find that I’m not a very good judge on my own reactions. Furthermore, self-experimentation is probably the worst way to go about setting up a true model of the world.
So basically, are you saying Eliezer, gjm, and others are falling for the fallacy fallacy?
Have you read the book? If not, I respectfully suggest you have not the slightest clue what you’re talking about.
This kind of argument is a winner in a war of attrition. It is a true game stopper, better than the responses of ever increasing length. It’s only fair that you have to argue the opponent into getting the book first. As a quick preliminary check, I looked it up on Wikipedia, and the following characterization doesn’t inspire:
-- NLP and science on Wikipedia
Online source? I’ve read The Gentle Art of Verbal Self-Defense on matching modalities and it did not much impress me; I followed Nesov’s link and it says that NLP is currently in a state of having tried and failed to present evidence. I’m not likely to buy another book at that point, but could perhaps be convinced to read an online source which presents the result of an experiment.
Argh. You edited after I started replying. Here’s an online source that presents the result of an experiment, from the “NLP and Science” page on Wikipedia:
By the way, as far as I can tell, the entire “NLP and science” page on Wikipedia is devoted to discussion of claims made in books other than NLP volume I, or at any rate claims that are not central to the rep-systems and strategies model presented in volume I.
The major popular confusion about NLP is confusing techniques with the modeling method. Volume I is about modeling strategies: understanding what people do in their heads and bodies as a way of communicating those behaviors to other people. This is only tangentially related to therapeutic or persuasive applications of the models.
So, the idea of predicate matching is an application of NLP; not NLP itself. I’ve never read the Gentle Art of Verbal Self-Defense, so I’ve got no idea what it says or whether it’s sensible, any more than I could say whether an arbitrary “science” book is useful or helpful.
FWIW, Worldcat says there’s a copy at a library roughly 43 miles from SIAI HQ.
This seems… so classically crackpot. I admit to initial skepticism towards NLP, but your posts have done nothing to alleviate that and most everything to confirm it. Are you saying that the best book (and thus the model) is 30 years old and the best experiments are 20 years old?
How about the experiments that went into proposing the model? To paraphrase someone, how was this model carved out of existence? Which information led to its identification contrary to the thousands of crackpot ‘theories’ of the mind? And what is your obsession with self-experimentation? That sounds like Hare Krishna.
You’re not doing well at distinguishing NLP from run-of-the-mill internet woo.
No, the best book I know of, about the core model of NLP: that everything we call “thinking” consists of manipulating sensory information, in one form or another, and that cognitive algorithms consist of transforming, combining, and comparing information across different sensory systems.
30 years ago, that was a revolutionary idea; now, it’s not actually that far off the beaten track, in that there’s recent mainstream support for many of its ideas. (NLP had near/far distinctions 20 years ago, for example, and the critical role of physical sensations in mental recognition of emotions.)
Bandler was editing books on therapy, listening to recordings of some very successful therapists, and noticed some interesting commonalities in their language. He talked to a linguistics professor at his college, who noticed it too.
Building on Bateson and Korzybski, they put together a linguistic model of information processing, to show how surface language structure reflects deep structure—i.e., what something says about how you’re likely thinking, grounded in what the therapists were doing to identify broken internal models in their clients.
In other words, they noticed that the successful therapists were noticing certain patterns of things people said, and then asking questions that forced the clients to reconsider their mental model of a situation.
Now, if this sounds familiar, it’s because REBT and CBT are based on the exact same thing, just without—AFAIK—as precise of a model as the linguistic one developed by B&G. And AFAIK, B&G described it first.
In my original version of this post, I went on to describe how they got to other models—that also now have experimental support—but it got bloody long. Short version: they got microexpressions first too, AFAIK, although they didn’t claim them to be universal. NLP practice drills focus on recognizing what the person in front of you is doing, not what everyone in the world might do.
That it produces useful results for the experimenter.
Online source?
The link I gave to Amazon. If you mean a free online version, I don’t know of any. The Structure of Magic, Volume I is probably easier to find as a torrent or something, but it deals mostly with the mapping between linguistic structure and inner models. It predates NLP vI, and was the basis for the method by which they discovered the rep system and strategies model that was begun in NLP vI. It has been literally decades since I read it, and I don’t own a copy, so offhand I don’t know how illustrative it would be by comparison.
When I read this, I get the same feeling as before, when you wrote about changing your ways in order to introduce your techniques to this forum. The feeling is that when you talk of rigor, you see it as a mere custom, something socially required, and quite amusing, really, since all that rigor can’t be true, anyway. After all, it’s only possible to make attempts at being precise, so who are you kidding. Plus, truth is irrelevant. And here we are, the LessWrong crowd, all for the image, none for the substance, bad for efficiency.
I wouldn’t say that of everybody on LessWrong, but there is certainly a vocal contingent of that stripe. That contingent unfortunately also suffers from the use of cognitive models that, to me, are as primitive as the medieval four-humors model.
So when they push my “ignorance and superstition” buttons in the same posts where they’re demanding properly validated rituals and papers for things they could verify for themselves in ten minutes by simple self-experimentation, it’s rather difficult to take them seriously as “rationalists”. (Especially when they go on to condemn theists for suffering from the same delusions as they are, just externally directed.)
I totally don’t mind engaging with people who want to learn something and are willing to actually look at experience, instead of just talking about it and telling themselves they already know what works or what is likely to work, without actually trying it. The other people, I can’t do a damn thing for.
If your interest is in “science”, I can’t help you. I’m not a scientist, and I’m not trying to increase the body of knowledge of science. Science is a movement; I’m interested in individuals. And individual rationalists ought to be able to figure things out for themselves, without needing the stamp of authority.
I also have no interest in being an authority—the only authority that counts in any field is your own results.
This is why I hope that the next P. J. Eby starts out by first reading the OBLW sequences, and only then begins his explorations into akrasia and willpower.
You cannot verify anything by self-experimentation to nearly the same strength as by “properly validated rituals and papers”. The control group is not there as impressive ritual. It is there because self-experimentation is genuinely unreliable.
I agree with Seth Roberts that self-experimentation can provide a suggestive source of anecdotal evidence in advance of doing the studies. It can tell you which studies to do. But in this case it would appear that formal studies were done and failed to back up the claims previously supported by self-experimentation. This is very, very bad. And it is also very common—the gold standard shows that introspection is not systematically trustworthy.
I’m a bit confused as to your goal, Eliezer.
Are you trying to find a fully general solution to the akrasia problem, applicable to any human currently alive… or do you want to know how you can overcome akrasia? The first is going to be a fair bit harder than the second, and you probably don’t have time to do that and save the world.
If you shoot a little lower on this one and just try to find something that works for you I think your argument will change… quite a lot.
If you think that’s the case, you didn’t read the whole Wikipedia page on that, or the cite I gave to a 2001 paper that independently re-creates a portion of NLP’s model of emotional physiology. I’ve seen more than one other peer-reviewed paper in the past that’s recreated some portion of “NLP, Volume I”, as in, a new experimental result that supports a portion of the NLP model.
Hell, hyperbolic discounting using the visual representation system was explained by NLP submodalities research two decades ago, for crying out loud. And the somatic marker hypothesis is at the very core of NLP. Affective asynchrony? See discussions of “incongruence” and “anchor collapsing” in NLP vI, which demonstrate and explain the existence of duality of affect.
IOW, none of the real research validation of NLP has the letters “N-L-P” on it.
Unreliable for what purpose? I would think that for any individual’s purpose, self-experimentation is the ONLY standard that counts… it’s of no value to me if a medicine is statistically proven to work 99% of the time, if it doesn’t work for ME.
This sounds like being uninterested in the chances of winning a lottery, since the only thing that matters is whether the lottery will be won by ME, and it costs only a buck to try (perform a self-experiment).
And yet, this sort of thinking produces people who get better results in life, generally. Successful people know they benefit from learning to do one more useful thing than the other guy, so it doesn’t matter if they try fifty things and 49 of them don’t work, whether those fifty things are in the same book or different books, because the payoff of something that works is (generally speaking) forever.
Success in learning, IOW, is a black-swan strategy: mostly you lose, and occasionally you win big. But I don’t see anybody arguing that black swan strategies are mathematically equivalent to playing the lottery.
IMO, the rational strategy is to try things that might work better, knowing that they might fail, yet trying to your utmost to take them seriously and make them work. Hell, I even read “Dianetics” once, or tried to. I got a third of the way through that huge tome before I concluded that it was just a giant hypnotic induction via boredom. (Things I read later about Scientology’s use of the book seem to actually support this hypothesis.)
This became infeasible with the invention of the printing press. There is too much stuff out there for any given person to learn. Or to ever see all the titles of the stuff that exists. Or the names of the fields for which it’s written. There is too much science, and even more nonsense. You can’t just say “read everything”. It’s physically impossible.
P.S. See this disclaimer; on second thought, I connotationally disagree with this comment.
What happened to “Shut up and do the impossible”? ;-)
More seriously, what difference does it make? The winning attitude is not that you have to read everything, it’s that if you find one useful thing every now and then that improves your status quo, you already win.
Also, when it comes to self-help, you’re in luck—the number of actually different methods that exist is fairly small, but they are infinitely repeated over and over again in different books, using different language.
My personal sorting tool of choice is looking for specificity of language: techniques that are described in as much sensory-oriented, “near” language as possible, with a minimum of abstraction. I also don’t bother evaluating things that don’t make claims that would offer an improvement over anything else I’ve tried, and I have a preference for reading authors who’ve offered insightful models and useful techniques in the past.
Lately, I’ve gotten over my snobbish tendency to avoid authors who write things I know or suspect aren’t true (e.g. stupid quantum mechanics interpretations); I’ve realized that it just doesn’t have as much to do with whether they will actually have something useful to say, as I used to think it did.
PJ, is there a survey / summary / list of these methods online? Could you please link, or, if there’s no such survey, summarize the methods briefly?
90% of everything is hypnosis, NLP, or the law of attraction—and in a very significant way, they are all the same thing “under the hood”, at different degrees of modeling detail and with different preferred operating channels.
NLP has the most precise models, and the greatest emphasis on well-formedness criteria and testing. (At least, the founders had those emphases; “pop NLP” often seems to not even know what well-formedness is.) Hypnosis, OTOH, is just a trancy-form of NLP, LoA, or both.
Pretty much everything in the self-help field can be viewed as a special case, application, or “tips and hints” variation of one of those three things, but using individual authors’ terminology, metaphors, and case histories. The possible failure modes are pretty much the same across all of them, too.
There is, by the way, one author who writes about non-mystical applications of the so-called “law of attraction”: Robert Fritz. He’s the only person I’m aware of who’s brought an almost-NLP level of rigor and precision to that concept, and with absolutely no mystical connotations or bad science whatsoever. He doesn’t call it LoA; he refers to it as the “creative process”, and shows how it’s the process that artists, musicians, and even inventors and entrepreneurs normally use to create results. (i.e., a strictly mental+physical process that engages the brain’s planning systems, much like what I showed in my video, but on a larger scale.)
His books also contain the largest collection of documented failure modes (biases and broken beliefs) that interfere with this process, based on his workshops and client work. I’ve found it to be invaluable in my own practice.
(The biggest shortcoming of Fritz’s work compared to some more mystical LoA works, however, is that he doesn’t address general emotional state or “abundance mindset” issues, at least not directly.)
BTW, I think that the Law of Attraction is basically a manifestation of successful self-priming (plus the other self-conditioning phenomenon Anna Salamon posted about—can’t find the post). And yes, the pull motivation trick seems to fit here perfectly.
Viva randomness! At least it’s better than stupidity. And is about as effective as reversed stupidity. Which is not intelligence.
You should know better what you need, what’s good for you, than a random number generator. And you should work on your field of study being better than a procedure for crafting another random option for such a random choice. I wonder how long it’ll take to stumble on success if you use a hypothetical “buy a random popular book” order option on Amazon.
P.S. See this disclaimer; on second thought, I connotationally disagree with this comment.
Strawman?
Guilty. It doesn’t particularly apply in this case, since the argument is that randomness is the best available option for now, because intelligence doesn’t work yet for this case. I’m overidentifying with the general negative move I’ve made on pjeby, and as a result I’ve indulged myself in a couple of wrong responses, in a comment above and to an extent in a preceding one, although both also hold a fair amount of truth, but express it with dishonest connotation.
This comment was based on an argument with a person who explicitly insisted that tossing a coin is better than deciding for yourself.
Kindly point to the specific words which you think meant that, so that I can see whether I need to be more clear, or whether you just rounded to a cliche.
Edit to add: Whoops, I just did the same thing to you. I see now that your comment was saying that you were rounding to a cached argument from a discussion with somebody else about tossing coins, not implying that that was what I said. Sorry for the confusion.
But pjeby isn’t even saying that – even reading completely random books, which AFAICT he doesn’t advocate, invokes a powerful optimization process (writers and publishers).
You always do the random thing relative to the options you are given. That doesn’t change the problem, as far as I can see; it just applies it to a different situation.
Point taken; still, different from my very literal interpretation of letting a random number generator decide what you need.
You can’t literally make only random actions. You can’t make random muscle movements. You may use random long-term goals (analogous to being a fanatic), random medium-term goals (analogous to being a crazy person), or random short-term goals (analogous to being clinically mad). In any case, whatever I could mean by a random action, it’s necessarily already quite abstract, selected from a few intelligent options.
You sound like someone arguing that evolution shouldn’t be able to work because it’s all “blind chance”. Learning, like evolution, is “unblind chance”: what interests me is a combination of what I encounter plus what I already know.
The more I learn, the more I learn about what is and isn’t useful, and I’ve found it useful to drop (or at least reduce the priority of) certain filters that I previously had, while tightening up other filters. That’s not really “random”, in the same way that natural selection is not “random”.
That still isn’t the same as self-experimenting with every procedure that was ever thought up and supported by a visible enough school. As an intelligent being, you should be able to do better than randomness, and well better than evolution. That’s the power of intelligence.
Still strawman? pjeby said:
See? I don’t even remember reading it.
You keep using that phrase. I do not think it means what you think it does.
The phrase makes some kind of sense to me (although not in that particular case), so in case you’re not just trying to drop a geeky reference, let me try to explain what I make of this phrase.
Assume members of alien species X have two reasoning modes A and B which account for all their thinking. In my mind, I model these “modes” as logical calculi, but I guess you could translate this to two distinct points in the “space of possible minds”.
An Xian is, at any one instant, either in mode A or B, but under certain conditions the mode can flip. Apart from these two reasoning modes, there is a heuristic faculty, which guides the application of specific rules in A and B. Some conclusions can be reached in mode A but not in B, and vice versa, so ideally an Xian would master performing switches between them.
Now here’s the problem: Switching between A and B can only happen if a certain sequence of seemingly nonsensical reasoning steps is taken. Since the sequence is nonsensical, an Xian with a finely tuned heuristic for either A or B will be unlikely to encounter it in the course of normal reasoning.
Now, say that Bloob, an accomplished Xian A-thinker, finds out how to do the switch to B and thus manages to prove a high-value theorem. Bloob will now have major problems communicating his results to his A-thinking peers. They will look at a couple of his proof steps, conclude that they are nonsensical, and label him a crackpot.
Bloob might instead decide (whatever that word means in my story) to target people who are familiar with the switch from A to B. He can show them one of the proof steps, and hope that their heuristic “remembers” that it leads to something good down the road. Such a nonsensical proof step may be saying “Shut up and do the impossible”.
So, I suspect that humans do have something like those reasoning modes. They are not necessarily just two, it might not be appropriate to call all of them reasoning, but the main point is that thinking a thought might change the rules of thinking.
I think this idea is very close to the whole area of NLP, hypnosis, and some new-age ideas, e.g., Carlos Castaneda explicitly wants to “teach” you how to shift your mind-state around in the space of possible minds (which is egg-shaped incidentally). Not that any of these have ever done anything for me, but I also haven’t tried following them.
From self-experimentation (sorry), Buddhist meditation seems to be a kind of thinking that can change the rules of thinking, and I think there is some evidence that it actually changes the brain structurally.
Given the possibility of certain thoughts changing the rules of thinking, what is the rational thing to do? If there’s a good answer to this I’m grateful for a link.
Excellent comment! You have hit the nail very nearly square on the head. Allow me to make one minor adjustment to your aim, and then relate your analogy back to the fields of self-help, NLP, Zen, normal waking consciousness, etc.
See, it’s not the content of the thought that switches modes, but how you think the thought, or rather, what portion of your thoughts you pay attention to.
In suspension of disbelief—and hypnosis, suggestion, etc.-- you simply refrain from commenting on your experience in-progress, because it interferes with the perception of the experience itself. (See e.g. current studies on how explicit commenting can reduce satisfaction with decision making and accuracy of classification.)
So if “B” is experience, and “A” is commenting-about-experience, to the extent that you do both at the same time, one or the other will suffer, just like your experience of a movie will be degraded by a running commentary by audience members… unless you prefer the humor of the commentary to the experience of the movie. (But in that case, the movie still suffers relative to the commentary, you just like it better that way!)
Now, whether you refrain from commenting on something is partly determined by what you already believe. Movies that violate my understanding of say, computer technology, will be much more tempting to internally dispute or comment on, thus voiding my enjoyment and use of “B”-mode thinking. In contrast, someone who knows less about computers will not be induced to comment by the same scene, and thus not suspend their disbelief.
Self-help techniques use B-mode thinking, but the more intelligent you are, the more ways you can find to object to the “truthfulness” of thoughts that you nonetheless would find useful to have installed in your “B” system. But if you give in to the temptation to meta-comment on those thoughts, then you will not succeed in installing them in the “B” system… assuming you didn’t already throw the book down in disgust, long before even trying to!
Religion works in roughly the same way, of course: you’re discouraged from meta-commenting, so various B-mode thoughts can be installed and left running.
Of course, we all know that this is bad, but it’s not because B-mode itself is bad, it’s because religions include many poor-quality beliefs, in addition to the ones that might have some personal or social utility!
Part of the foundation of NLP, however, is a set of principles known as the “outcome frame” and “ecology”—attempts to codify quality standards for “B-mode beliefs”, based on well-formedness rules for the beliefs themselves, and standards for evaluating the likely long-term systemic effects of carrying that belief.
Most of the original NLP clique have also been very careful, when defining their techniques, to offer guidelines for what kind of beliefs to install in people, and how to avoid “junk beliefs”.
(For example, one is cautioned to prefer installing beliefs of capability rather than ability, e.g. “I can learn to do this better”, not “I am the best there is”.)
Most self-help material—including much popular work on NLP, alas—does not adhere to such standards.
My experience of Zen meditation is that it trains you to refrain from commenting on your thoughts and experiences, which is why it provides benefits for learning skills that require you to focus on experience instead of commenting. (See e.g. “The Inner Game of Tennis”.) So, AFAICT, it’s definitely related to the same “B” mode as other self-help modalities, and really just consists of practicing trying to stay in B mode, no matter what thoughts try to pull you into A mode.
In contrast, hypnosis tries to get you so relaxed that it seems like “too much work” to do any “A” mode thinking, versus just drifting along with your ongoing “B” experience.
NLP techniques, including my own, work on controlled alternation of attention between the A and B modes.
And normal consciousness for most people also alternates between A and B, but “A” dominates, and we actually spend good money (e.g. on movies and other entertainment, hobbies, etc.) so we can spend some quality time in “B”.
I’d generally agree with that, but I was recently at an excellent qi gong workshop taught by Yang Yang, who told the students to do qi gong with an attitude of “I am a master”. As far as I can tell, this has the advantage of overriding habits of thinking “I’m just a student, I’m not very good at this”. It might also override habits of thinking “I have to show how good I am”.
Note that “I am a master” is not falsifiable, unless you also have some idea of what being a master consists of. This isn’t a problem if you believe (for example) that a master is someone who is always learning and improving, and who makes mistakes.
Of course, at that point, you are right back to having a capability belief. ;-)
Okay. Another take. Is this really true? How long would it take for a newcomer to walk through every available option? How much would it cost? What chance should he expect, before starting the whole endeavor, that any of the available options will help? For the last question, the lottery analogy fits perfectly, with no “works only for ME” excuse.
I’ve read dozens of self-help books and numerous websites, etc. and pjeby’s claims of repetition seem mostly true (and his point that some who have unscientific philosophies have great practical advice is definitely true in my experience).
That huge numbers of books are about the same things, in different language? Absolutely. Books that contain something genuinely new in self-help are exceedingly rare in my experience. Books that have one or two new twists or better metaphors for explaining the same things are enormously common.
Take for example, “the law of attraction”. I don’t believe it has any objective external basis: rather, it’s a matter of 1. motivation and 2. making your own luck—i.e. “chance favors the prepared mind”. However, the quality of information about its practical applications varies widely, and some of the most woo-woo crazy books—like one of the ones supposedly written by a spirit being channeled from another universe—actually have the best practical information for leveraging the psychological benefits of belief.
I’m specifically talking about the “emotional energy scale” model from the book “Ask and It Is Given”. Note that I don’t know if they invented that model or swiped it from some psych researcher… and I don’t really care. By putting that information into a useful context, they gave me more usable information than raw experimental data would have provided.
Now, if I were looking for “truth”, I’d certainly trust peer-reviewed research more than I’d trust a channeled being from beyond. But if the being from beyond offers a useful model distinction, I don’t especially care if it’s true.
Now, some people reading this are going to think because I mentioned the LoA that I believe all that quantum garbage—but I do not. I do believe, however, that self-fulfilling prophecies are useful, and the LoA literature is a great source of raw practical data in the application of self-fulfilling prophecy, as long as you ignore all their theories about why anything works, and focus on testing specific physical and mental techniques, and break down the attitudes.
For example, one fascinating commonality of themes in this literature: the idea of gratitude or abundance, giving things freely to others and it will be given unto you, and a “friendly universe”. It’s interesting that, although some of these writers are borrowing from each other, others seem to have independently stumbled on an idea or attitude that reflects this notion: that in some larger way, “everything happens for a reason” or “the world is an abundant and giving place”.
Most will also insist on the importance of adopting this mindset for achieving results, which makes me wonder: could it be that there is some hardwired machinery in our brains that is triggered by conditions of perceived “abundance”? Is it then triggered by acting as-if conditions are abundant, in the same way that smiling can trigger happiness or friendliness?
It’s certainly food for further thought, although in my current simplified model of LoA, I assume that this is more of a test condition: i.e., if someone cannot act as-if they are in abundance, then they have not successfully made whatever internal transition is required. This seems a more parsimonious model at this point, than assuming that the actions themselves are relevant.
They would probably be FAR better off picking ONE book and sticking to it with absolute Zen-master determination, especially if they choose a book that offers sensory-based language and, most importantly, a way to tell if you’re doing it right in a relatively short period of time. Comparatively few books contain this, but browsing in a bookstore will certainly find you a few. (I’ve linked to a few here in the past; “Loving What Is” and “Re-create Your Life” are two of the easiest for a beginner to master, if they pay close attention to the extra distinctions about “listening to yourself” that I’ve thrown out here on LW.)
Sadly, if you limit yourself to books only, this might well be true. Live trainings and coaching are substantially more likely to make a difference, because the feedback loop can be closed.
I have had more than one student report that after live work with me, they were able to go back and understand all the things in self-help books that they were never able to apply before, because now they knew what those books were actually talking about, once they had experiential reference points. (It’s unfortunately a lot easier to recognize whether a guru is “for real” once you are one, than before.)
My original goal for the book I am currently writing was to create a kind of Rosetta Stone for self-help material, but I have concluded that all I can really do is make such a Rosetta Stone for the sort of person who already would’ve found my approach enlightening—or more precisely, I can write a book that will get past the kind of filters that would keep a lot of those people from learning from the sources I learned things from. But the very fact that I do it that way will be a filter for a different group of people!
And this, by the way, is why we won’t see a scientifically-validated model of these things any time soon: learning them really requires a feedback loop of some kind, and most books don’t include enough of one to work for EVERYBODY, only for the subset of people whose perceptual filters initially match those used by the writer. (Of course, even if there were such a feedback loop, it’s not prestigious to test practical ideas that somebody else came up with, versus impractical new ones.)
In the first draft of my book, I listed all sorts of ways to get a certain popular visualization technique wrong, that had bedeviled me and some of my students in the past. My newer students read it… and promptly found NEW ways to get it wrong, that I had to give them live feedback to fix.
I’ll add those ways of getting it wrong to the second draft, but I’m now far less confident that it is possible to eliminate ALL the ways that somebody can misinterpret a discussion of how to observe or manipulate their internal experience.
(And if I actually included ALL the ways I know of to get popular techniques or self-help ideas wrong, it would be much longer than the instructions for how to get them right… thereby making an unusable and unmarketable book. Which is probably why most self-help books only give a handful of misinterpretations and hope for the best. It probably doesn’t hurt that there are also financial rewards for selling some of your readers on live programs, but I honestly would like there to be a book that doesn’t need that option… I’ve just given up on my current book being that book.)
By far the best way to learn is with someone who can tell from your external behavior whether you’re doing it wrong, being a kind of human biofeedback system. The way I learned was definitely the hard way.
However, for the kind of successful person that I was talking about, these caveats don’t apply. A person with the attitude I was referring to, will find something useful in virtually anything they read, and promptly apply it. These are also the people who need self-help least, but that was actually part of my original point.
What I probably wasn’t clear enough on, was that it’s this attitude that determines the person’s success in LIFE, not their success in finding good self-help books! We are now way off of that particular reservation.
I haven’t read the above yet; I’ll do it later. But I want to make a general observation for now: everybody would be better off if your replies were shorter. You are already talking past many of the people here, so you should focus on communicating clearly (which may mean fast back-and-forth understanding checks) rather than on communicating lots of stuff, none of which does any good.
My initial question was an introduction to the rest, which ask whether the method of looking at everything is going to pay off. I’m not asking for details about the content, since the worth of looking at those details is exactly what I’m asking about. I split the following question into its own thread:
Now you are talking past my question again. The conversation started where you asserted that it’s possible to test all of the available methods on yourself, since there are so few genuinely different ones. In response you recommend sticking to one method. Fine. What are the answers to my questions for a single randomly selected method (among a number of surface-filtered available options)?
My available samples say: Years, thousands, and slim. Of course, people for whom these things are not the case, will be considerably less likely to be my customer, so it’s a severely biased sample. (Which also means that it’s possible my techniques work best on people who try lots of self-help and fail, but that seems more like an advantage than a disadvantage to me.)
However, I have noticed that highly-successful people also own large self-help libraries, but they are not disappointed in them, because they always find at least ONE thing of use to them in EVERY book.
My original point, which you still seem to be ignoring, is that I am not and have never been advocating that a self-help seeker engage in a random walk of self-help books. I am saying that people who succeed in life have the attitude that they can find at least one useful thing in every circumstance they encounter, if they apply themselves to looking for it, and applying it.
Cultivating that attitude is what I actually recommended, as you will see if you return to the beginning of the thread.
My question, however, was about the worth of studying the theories of which you speak, and in particular of interpreting your long comments that try to communicate them. Thank you for answering it.
What might well be true? The connotation of my question that implies your field is worthless? I was specifically asking how much it’s worth: only the conclusion that you, as an expert, may draw, not the reflections leading to naught.
The rest of your comment also talks past the questions. You note that you receive student feedback that could answer my questions, talk about your book as if it’ll answer my questions, and talk about how the efficiency of your methods (still completely unknown to me) improves with personal tutoring.
Yes. I’m a rather outspoken critic of the field, and not just for marketing reasons.
The problem isn’t the industry, it’s that developing “kicking” skills requires practice, and for practice to work you have to have feedback, even if that feedback is you yourself checking your performance against some model. Most self-help material doesn’t even teach explicitly making these checks, let alone give substantive criteria for telling whether you’ve done something correctly or not. People are left to stumble blindly onto the right method, if they happen to hear a metaphor that works for them, or to recognize, in someone else’s story about doing it wrong, how they themselves are doing it wrong.
The entire field—at least in books—is like teaching people to ride bicycles without giving them any bicycles to practice on. Common practice in workshops isn’t a hell of a lot better, but your odds of stumbling on a workshop where you can get coached or walked through something are a lot better. Even there, testability, repeatability, and trainability are not the focus.
So yes, the entire self-help field might as well be a lottery right now, if you have no information on where to start. Many of my students, like me, own literally hundreds of self-help books, from which they got little or no help until they “got it” from something I wrote or said or did with them.
As for me, I just got lucky enough to get an insight from computer programming that opened my eyes to what was going on, that gave me my first “rosetta stone” for the field.
Unreliable for getting true explanations. Self-experimentation is generally too poorly controlled to give unconfounded data about what really caused a result. (Also, typically sample size is too small to justify generalizability.)
The way I’d put it for this stuff is that experiments help communicate why someone would try a technique: they help people distinguish signal from noise, because there are a ton of people out there saying “X works for me.”
The plural of anecdote is not data. Many people will tell you how they were cured by faith healers or other quacks, and, indeed, they had problems that went away after being “treated” by the quack. Does that make the quacks effective or give credibility to their theories about the human body?
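A toy simulation makes the base-rate point concrete. This is a minimal sketch with invented numbers (the 60% spontaneous-recovery rate is a made-up illustration, not a figure from any study):

```python
import random

random.seed(0)

# Toy model: a condition that resolves on its own ~60% of the time,
# and a "treatment" that does literally nothing.
N = 10_000
recovered_after_quack = sum(random.random() < 0.6 for _ in range(N))

print(f"{recovered_after_quack} of {N} 'treated' people recovered anyway")
# Those ~6,000 people supply the glowing testimonials;
# the ~4,000 who didn't recover rarely get quoted.
```

The pile of sincere success stories is exactly what you'd see even if the treatment did nothing at all.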
The same applies to methods of affecting the human brain. As a non-expert, from the outside I can’t tell the difference between NLP, Freudian psychotherapy, and whatever hocus-pocus Scientology says helps people. All have elaborate theories to explain their alleged benefits, and all have had people who swear it works.
To quote Wikipedia:
Until I do see some acceptance among the academic community, I remain unconvinced that NLP is anything more than a self-reinforcing collection of hypotheses, speculation, and metaphors. It could very well be more than that, but I have no way of knowing that it is!
Few of your comments here seem to me to describe things that are obviously checkable in ten minutes by simple self-experimentation. (Even ignoring the severe unreliability of self-experimentation, since doubtless there are at least some instances in which self-experimentation can provide substantial evidence.) Perhaps they are so checkable with the help of extra information that you’ve declined to provide. Perhaps I’ve just not read the right comments. Perhaps I’ve read the right comments and forgotten them. Would you care to clarify?
Mostly, I’ve offered questions that people could ask themselves in relation to specific procrastination scenarios, that would give them an insight into the process of how they’re doing it. IIRC, two people have reported back with positive hits; one of the two also had a second scenario, for which my first question did not produce a result, but it’s not clear yet what the answer to my second question was. (I gave both questions up front, along with the sequence to use them in, and criteria for determining whether an answer was “near” or “far”, along with instructions to reject the “far” answers. One respondent gave a “far” answer, so I asked them to repeat.)
I’ve also linked to a video offering a simple motivational technique based on my model; a few people have posted positive comments here, and I’ve also gotten a number of private emails from users here via the feedback form on my site, expressing gratitude for its usefulness to them. The video is just about 10 minutes long.
In another comment, I described a simple NLP submodalities exercise that could be tried in a few minutes, albeit with the disclaimer that some people find it hard to consciously observe or manipulate submodalities directly. (The technique in my video is a bit more indirect, and designed to avoid conscious interference in the less-conscious aspects of the process.)
I’ve referenced various books on other techniques I’ve used, and I believe I even mentioned that Byron Katie’s site at thework.org includes a free 20-page excerpt from Loving What Is that provides instructions for a testable technique that operates on the same fundamental basis as my models.
I’m really not sure what the heck else people want. Even if you claim, as Eliezer does, that he can’t understand my writing, it’s not like I haven’t referenced plenty of other people’s writing, and even my spoken language (in the video) as alternative options.
I also find your writing difficult. If you’ll accept a recommendation, I think your readership here might get more from shorter comments in which more work has gone into each word.
Can you link to these things? Your comments? Here? There’s an LW search box.
How To Tell If You’re Making Shit Up
NLP Submodalities Experiment
Motivation technique video
Edit to add:
Useful background on “entry criterion” for these techniques
“How To Tell If You’re Making Shit Up” seems useful. Do you see why this would seem useful to me while “NLP Submodalities” doesn’t?
For the same reason that yours and Robin’s writing on biases is more useful than the source material, I imagine. That is, it’s been predigested. It probably also doesn’t hurt that I have to teach “how to tell if you’re making shit up” to every single client of mine, so I have some practice at doing so! (Albeit mostly in real-time interaction.)
FYI, NLP volume I represents the more detailed “brain software” model from which that summary was derived, which I recommended to you because you said you couldn’t follow my writing.
You can also see why I was excited when Robin started posting about near/far stuff on OB—it fit very nicely into the work I was already doing, and into the NLP presupposition that “conscious verbal responses are to be treated as unsubstantiated rumor unless confirmed by unconscious nonverbal response”—i.e., don’t trust what somebody says about their behavior, because that’s not the system that runs the behavior.
The Near/far distinction mainly added an evolutionary explanation that was not a part of NLP, and gave a better why for not trusting the verbal explanation. Near/far in a literal sense, as in “people respond differently based on distance in space/time/abstraction level of visualization”, has been part of the NLP models for over 20 years now. But once again, the mainstream experiments are just now being done, presumably by people who’ve never heard of NLP, or who assume it’s crackpottery.
So, I watched the video (some time ago, when you posted about it) and gave it one trial. The technique wasn’t effective for me on the task I tried it on. The particular failure mode was one you mentioned in the video, and if you are correct about the generality with which it makes the technique not work then I would expect the technique to be generally ineffective for the things I’d benefit from motivational help with.
Your suggestions about identifying the causes of procrastination: I haven’t tried that yet, and it sounds interesting; I notice that when someone did try it and got results that didn’t perfectly match your theory your immediate response was not “oh, that’s interesting; perhaps my theory needs some tweaking” but “I don’t believe you”. Can you see how this might make people skeptical?
Referencing books is only helpful in so far as (1) it’s not necessary to read the whole of a lengthy book to extract the small piece of information you’ve been asked for, (2) the book is clearly credible, and (3) the book is actually available (e.g., in lots of libraries, or inexpensive, or online). To those who are skeptical about the whole self-help business, #2 is a pretty difficult criterion to meet.
Indeed. It is supposed to be a free sample, after all. The work I charge for is fixing those things that make it not work. The things that make motivation not work are much, much more diverse than the things that make it actually work.
My response was, “you didn’t follow directions”, actually. Unless you’re talking about the first part where the only information given was, “it didn’t work”. If you’ve ever done software tech support, you already know that “it didn’t work” is not a well-formed answer. (Similarly, the later answer given was also not well-formed, by the criteria I laid out in advance.)
Failure to meet entry criterion for a technique does not constitute failure of the technique or the model: if you build a plane without an engine, and it doesn’t take off, this does not represent a failure of aerodynamics. Indeed, aerodynamics predicts that failure mode, and so did I.
The response I got was not unexpected; it’s common for people to have trouble at first, especially on things they don’t want to look too closely at. I’ve had people spend up to 30 minutes in the “talk around the problem” failure mode before they could actually look at what they were thinking. The other most common failure mode is that somebody does see or hear something, but rejects it as nonsensical or irrelevant, then reports that they didn’t get anything.
Third most common failure mode is lack of body awareness or physical suppression, but I know he doesn’t have that as a general problem because his first response indicated awareness. His first response also indicated he is capable of perceiving responses, so that pretty much narrows it down to avoidance or assumption of irrelevance. If it’s neither, then it might be relevant to a model update, especially if it’s a repeatable result.
(At this point, however, he’s going to have to repeat the asking of the second question to test that, because these responses don’t stick in long-term memory; in a sense, they are long-term memory.)
I think this (not the fact that it’s a free sample, but the fact that apparently it’s a feature, not a bug, if it doesn’t work well for many people) makes it rather unuseful as a try-it-yourself demonstration of how good your models and techniques are.
There was no such first part; even jimrandomh’s initial response had more information than that in it. And after he gave more information your reply was still “I don’t believe you” rather than “you didn’t follow directions”. Interested parties can check the thread for themselves.
No, to be sure. But once you hedge your description of your technique and what it’s supposed to achieve with so many qualifications—once you say, in so many words, that you expect it not to work when tried—how can it possibly be reasonable for you to use it as an example of how you’ve supplied us with empirically testable evidence for what you say?
Saying “You can check my ideas by trying this technique—but of course it’s quite likely not to work” is just like saying “You can check my belief in God by praying to him for a miracle—but of course he works in mysterious ways and often says no.”
The point of the exercise is that it’s targeted to work for as many people as possible for a fairly narrow range of tasks, so as to give a sample of what it’s like when it works.
Even chronic procrastinators can achieve success with the technique, as long as they don’t use it on the thing they’re procrastinating on—it only works if you don’t distract yourself with other thoughts, and if you’re stressed about something, you’re probably going to distract yourself with other thoughts.
Most people, however, don’t seem to have any significant stressors about cleaning their desk. Also, it’s not a difficult thing to visualize in its completed form.
Btw, just as a datapoint, what did you try it on, and what failure mode did you encounter? I am, ironically, MORE interested in failure reports than successes; the video continually gets rave reviews, but as much as I enjoy them, I can’t learn anything new from another success report!
I just rechecked myself; here are the relevant portions. Jim said:
I took this statement as a literal description of what happened, i.e., jim thought about “it”—whatever “it” was—got no physical response, and had thoughts about the details of the task. THEN (2nd step) he was unable to begin working on it.
“Unable to begin working on it” is the part I referred to as not well-formed; this does not contain any description of how he arrived at that conclusion. It is the equivalent of “it doesn’t work” in tech support.
The unspecified “it” is also potentially relevant; I don’t know if he refers there to the task itself, or one of the questions I said to ask about the task; and this is an important distinction. I’ve also noticed that some people can “think about their task” and not get a response because they are not thinking about actually starting on the task… and Jim’s statements would be consistent with a sequence of thinking about the idea of the task, followed by preparing to actually perform the task… at which point an undescribed response is occurring, whereby he is then “unable to” perform the task.
I commented on the conflict between these two statements:
Meaning: as far as I can tell, those statements are not talking about the same thing. I.e., one is a referent to some sort of pre-task preparation unrelated to the problem, and the other is actually about beginning it.
In other words: all the information was in the first sentence, but the second one is where the problem actually is. So I then asked Jim to direct his attention to that part of his thought process, and get more specific:
He then replied with two more not well-formed statements; instead of describing his thoughts or experiences, he replied with abstract, “far” explanations about the subject matter, instead of his direct response to the subject matter, i.e.:
and:
Neither of these utterances describes a concrete experience; they are verbalizations of precisely the kind I described in the “how to know if you’re making shit up” comment beforehand. They are far, not near thinking, and my techniques only use far thinking to ask questions, and determine what questions to ask. The answers sought, however, are exclusively “near”.
Thus, when someone replies with a “far” answer, I know that they have not actually answered my question or followed instructions—they are not using the part of their brain that will produce the desired result.
Notice, by the way, that at no time did I say I did not believe him. I took him quite literally at his word, to the extent that he gave me words that map to some sort of experience.
I tried it on the same example you proposed: desk-clearing. My desk is a mess; I would quite like it to be less of a mess; clearing it is never a high enough priority to make it happen. But I don’t react to the thought of a clear desk with the “Mmmmmm...” response that you say is necessary for the technique to work.
As for your discussion with Jim: you did not at any point tell him that he didn’t do what you’d told him to, or say anything that implied that; you did say that you think his statements contradict one another (implication: at least one of them is false; implication: you do not believe him). And then when he claimed that what stopped him was apathy and down-prioritizing by “the attention-allocating part of my brain” you told him that that wasn’t really an answer, and your justification for that was that his brain doesn’t really work in the way he said (implication: what he said was false; aliter, you didn’t believe him).
So although you didn’t use the words “I don’t believe him”, you did tell him that what he said couldn’t be correct.
Incidentally, I find your usage of the word “incompatible” as described here so bizarre that it’s hard not to see it as a rationalization aimed at avoiding admitting that you told jimrandomh he’d contradicted himself when in fact all he’d done was to say two things that couldn’t both be true if your model of his mind is correct. However, I’ll take your word for it that you really meant what you say you meant, and suggest that when you’re using a word in so nonstandard a way you might do well to say so at the time.
Did you ask yourself what it is that you would enjoy about it if it were already clean? (Again, this is strictly for my information.) Note that the procedure described in the video asks for you to wonder about what sorts of qualities would be good if you already had a clean desk, in order to find something that you like about the idea enough to generate the feeling of pleasure or relief.
Au contraire, I said:
That is, I directed him to the “How To Know If You’re Making Shit Up” comment—the comment in which I gave him the directions, and which explained why his utterance was not well-formed.
This is an awful lot of projection on your part. The contradiction I was pointing to was that he was talking about two different things—the statements were incompatible with a description of the same thing.
That is not anything like the same as “I don’t believe you”; from what Jim said, I don’t even have enough information to believe or not-believe something! Hence, “as far as I can tell” (“AFAICT”), and the request for more information… not unlike my requests for more information from you about what you tried.
“It didn’t work” is not an answer which provides me any information suitable for updating a model, any more than it is for a programmer trying to find a bug. The programmer needs to know at a minimum what you did, and what you got instead of the desired result. (Well, in the software case you also want to know what the desired result was; in this kind of context it can sometimes be assumed.)
Because it isn’t one: it’s a made-up explanation, not a description of an experience. See the comment I referred him to.
If someone states something that is not a testable hypothesis, how can I “believe” or “disbelieve” it? They are simply speaking nonsense. Unless Jim has a blueprint of his brain with something marked “attention-allocating part” and he has an EEG or brain scan to show this activity, how can I possibly assign any truth value to that claim?
In contrast, if Jim presents me with a sensory-specific description of his experience, I have the option of taking him at his word. His experience may be subjective, but it at least is something I can model internally and have a reasonable certainty that I know what he’s talking about.
For example, when a client tells me they have a “feeling”, for instance, my minimum criterion is that they can describe it in sensory terms, including its rough location in the body. If they say, “it’s just a feeling”, then I have no information I can actually use. Same goes for a vague description like “I just can’t do it”, or in Jim’s case, “I’m completely unable to begin”.
If you want to make any sort of progress in an art of thinking and behavior, it is necessary to be excruciatingly precise when you talk about the thinking and behavior. Abstract language is dreadfully imprecise, as you can see from the present exchange. However, people routinely use such abstract language while thinking they’re being precise, which is why the first order of business with my clients is breaking through their fuzzy ways of speaking and thinking about their thinking.
That was not “all” he’d done: he also said things that couldn’t both be true if they were talking about the same thing, and that is what I was referring to. I then proceeded on the assumption that there were thus two different things, occurring in succession, one of which I had virtually no information about, only assumptions.
You seem to want me to speak as if I don’t believe my model is true. However, I have enough experience applying that model to enough different people to know that the probability of someone using imprecise language, or not doing precisely what I asked them to do, is significantly higher (by at least one, maybe two orders of magnitude) than the probability that they are offering me any information that can update my model, let alone falsify it.
That means I need more bits of data about a hypothetically-disconfirming event, than I do about a confirming event… which is why I asked Jim for more information, and why I’ve done the same with you.
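For concreteness, here is a minimal sketch of that arithmetic; the prior odds and per-detail likelihood ratio are invented stand-ins for the “one or two orders of magnitude” figure above, not measured values:

```python
import math

# Hypothesis A: the negative report reflects imprecise language or
#               not following directions (assumed ~100x more likely a priori).
# Hypothesis B: the report is genuine disconfirming evidence.
prior_odds_A_to_B = 100.0

# Assume each independent, well-formed detail in the report is ~4x more
# likely under B than under A (an arbitrary illustrative ratio).
likelihood_ratio_per_detail = 4.0

details_needed = math.log(prior_odds_A_to_B) / math.log(likelihood_ratio_per_detail)
print(f"Well-formed details needed to reach even odds: {details_needed:.1f}")
# ~3.3 such details before the disconfirming reading is even 50/50,
# which is why a bare "it didn't work" barely moves the posterior.
```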
That you are selectively ignoring everything I’m doing to get good information, while simultaneously accusing me of post-hoc rationalization, suggests that it’s your own epistemology that needs a bit more work.
Perhaps you should state in advance what criteria it is that you would like me to meet, so that I don’t have to keep up with a moving target. That is, what evidence would convince you to update?
This discussion is getting waaay too long and distinctly off-topic; but, as briefly as I can manage:
Yes.
No, I did not do that. I said that what you’re doing looks a lot like post-hoc rationalization, but that I’d take your word that it wasn’t. I meant what I said.
I am updating all the time. Lots of things that you’ve said have led to adjustments (both ways) in my estimates for Pr(Philip knows exactly what he’s talking about) and Pr(Philip is an outright charlatan) and the various intermediate possibilities. Perhaps you mean: what evidence would lead to a large upward change for the “better” possibilities? I’m not sure that any single smallish-sized piece of evidence would do that. But how about: some reasonably precise statements explaining key bits of your model, together with some non-anecdotal and publicly available evidence for their correctness.
I think that perhaps the problem here is that we are trying to treat you as a colleague whereas you prefer to treat us as clients. We say “your theories sound interesting; please tell us more about them, and provide some evidence”; you say “well, I want you to do such-and-such, and you have to do exactly what I tell you to”. This is unhelpful because (1) it doesn’t actually answer the question and (2) it is liable to feel patronizing, and people seldom react well to being patronized.
(By “we” it is possible that I really mean “I”, but it looks to me as if there are others who feel the same way.)
There are two modes of thinking. One directly makes you do things, the other one can only do so indirectly. One is based on non-verbal concrete sensory information, the other on verbal and mathematical abstractions.
Verbal abstractions can comment on themselves or on sensory experience, or they can induce sensory experience through the process of self-suggestion—e.g. priming and reading stories are both examples of translating verbal information to the sensory system, to produce emotional responses and/or actions.
More specifically, we make decisions and take action by reference to “feelings” (in the technical sense: physical awareness of the body/mind changes produced by an emotional response).
Feelings (or more precisely, the emotions that generate the feelings) occur in response to predictions made by our brain, using past sensory experience. But because the sensory system does not “understand”, only predict, many of these predictions are based on limited observation, confirmation bias, etc.
When our behavior is not as we expect—when we experience being “blocked”—it is because our conscious verbal/abstract assessment or prediction does not match our sensory-level prediction. We “know” there is no ghost, but run away anyway.
Surfacing the actual sensory prediction allows it to be modified, by comparing it to contradicting sensory evidence, whether real or imagined.
This is the bulk of the portion of my model that relates to treating chronic procrastination, though most of it has further applications.
You’ll need to define “evidence”. But the parts of what I said above that aren’t part of the experimentally-backed near/far model and the “somatic marker hypothesis” can be investigated in personal experience. And here’s a paper supporting the memory-prediction-emotion-action cycle of my model.
Actually, it does. I’m trying to tell you how to experience the particular types of experience that demonstrate practical applications of the model given above. Not following instructions won’t produce that result, because you’ll still be using the verbal thinking mode and commenting on your own comments instead of noticing your sensory experience.
My goal is not to define a “true” model of the brain; my goals are about doing useful things with the brain. The model I have exists to serve the results, not the other way around. I already had the model before I heard of “near/far”, “somatic marker hypothesis”, or the “feeling/emotion” model in that paper, so they are merely supporting/confirming results, not what I used to generate the model to start with. I was interested in them because they added interesting or useful details to the model.
Actually, I’m handling folks with kid gloves, compared to my students. If Jim were an actual client, there are things he said that I would have cut him off in the middle of, and said, “okay, that’s great, but how about: [repeat question here] Just ask the question, and wait for an answer.”
I usually give people more leeway towards the beginning of a session, and let them finish their ramblings before going on, but I cut it off more and more quickly as the session proceeds… especially if there’s an audience, and they’re thus wasting everyone’s time, not just mine, their own, and the money they’re spending.
I also woudn’t have bothered to refer Jim to my well-formedness guidelines until after I first got the desired result: i.e., a change to his automatic thought process. Once I had a verified success, only then would be the time to re-iterate about different modes of thought, and pointing back to how different statements he made did or did not conform to the guidelines.
Since my goal here was to provide information rather than training services—and because this is a public, rather than private forum—I tilted my responses accordingly. This is not me doing my impression of Eliezer or Jeffreysai; it’s me bending over backwards to be nice, possibly at the expense of conveying quality information.
The real conflict that I see is that for me, “quality information” means “information you can apply”. Whereas, it seems the prevailing standard on LW (at least for the most-vocal commenters) is that “quality” equals some abstraction about “truth”, that progressively retreats. It’s not enough to be true for one person, it must be true for lots of people. No, all people. No, it has to be all people, even if they don’t follow instructions. No, it has to have had experiments in a journal. No, the experiments can’t just be in support of the NLP model, the paper has to say it’s about NLP, because we can’t be bothered to look at where NLP said the same things 20-30 years ago.
Frankly, I’m beginning to forget why I bothered trying to share any information here in the first place.
I think the problem here is that the internet is great when you want to share information with people but is not a consistently good venue for convincing people of something, particularly when the initially least convinced people are self-selecting for interaction with you. Pick your battles, I’d say.
Just to check, you agree that to be useful any model of the brain has to correspond to how the brain actually works? To that extent, you are seeking a true model. However, if I understand you correctly, your model is a highly compressed representation of how the mind works, so it might not superficially resemble a more detailed model. If this is correct, I can empathize with your position here: any practically useful model of the brain has to be highly compressed, but at this high level of compression, accurate models are mostly indistinguishable from bullshit at first glance.
I am still very unsure about the accuracy of what you are propounding, but anecdotally your comments here have been useful to me.
No, it only has to produce the same predictions that a “corresponding” model would, within the area of useful application.
Note, for example, that the original model of electricity has the sign backwards—Benjamin Franklin labeled the terminal the moving charge supposedly flowed out of as “positive”, but we found out later that the electrons actually go the other way ’round.
Nonetheless, this mistake did not keep electricity from working!
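A minimal sketch of why the backwards label doesn’t matter (the voltage and resistance values below are arbitrary examples): every measurable prediction comes out the same whichever direction you assume the charge flows.

```python
# Ohm's law for a simple resistor, written under both sign conventions.
V = 9.0   # volts across the resistor
R = 3.0   # ohms

# Conventional current (Franklin's labeling): positive flow from + to -.
I_conventional = V / R        # +3.0 A
# Electron flow: the carriers actually move the opposite way,
# so the same situation is described with the opposite sign.
I_electron_flow = -V / R      # -3.0 A

# Observable quantities are identical under either convention:
power_conventional = I_conventional ** 2 * R
power_electron = I_electron_flow ** 2 * R
assert power_conventional == power_electron == 27.0  # watts dissipated
```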
Now, let’s compare to the LoA people, who claim that there is a mystical law of the universe that causes nice thoughts to attract nice things. This notion is clearly false… and yet some people are able to produce results that make it seem true.
So, while I would prefer to have a “true” model that explains the results (and I think I have a more-parsimonious model that does), this does not stop anyone from making use of the “false” model to produce a result, as long as they don’t allow their knowledge of its falsity to interfere with them using it.
See also dating advice, i.e., “pickup”—some schools of pickup have models of human behavior which may be false, yet still produce results. Others have refined those models to be more parsimonious, and produced improved results.
Yet all the models produce results for some people—most likely the people who devote their efforts to application first, critique second… rather than the other way around.
A model can actually BE bullshit and still produce valuable results! It’s not that the model is too compressed, it’s that it includes excessive description.
For example, the LoA is bullshit because it’s just a made-up explanation for a real phenomenon. If all the LoA people said was, “look, we found that if we take this attitude and think certain thoughts in a certain way, we experience increased perception of ways to exploit circumstances to meet our goals, and increased motivation to act on these opportunities”, then that would be a compressed model!
NLP is such a model over a slightly different sphere, in that it says, “when we act as if this set of ideas (the presuppositions) are true, we are able to obtain thus-and-such results.” It is more parsimonious than the LoA and pickup people, in that it explicitly disclaims being a direct description of “reality”.
In particular, NLP explicitly says that the state of mind of the person doing things must be taken into account: if you are not willing to commit to acting as-if the presuppositions are true, you will not necessarily obtain the same results. (However, this does not mean you need to believe the presuppositions are true, any more than the actor playing Hamlet on stage needs to believe his father has been murdered!)
Now, I personally do believe that portions of the NLP model, and most of mine, do in fact reflect reality in some way. But I don’t care much whether this is actually the case, or that it has any bearing on whether the model is useful. It’s clearly useful to me and lots of other people, so it would be irrational for me to worry about whether it’s also “true”.
However, in the event that science discovers that NLP or I have the terminals labeled backwards, I’ll happily update, as I’ve already happily updated whenever any little bit of experimental data offers a better explanation for one of my puzzling edge cases, or a better evolutionary hypothesis for why something works in a certain way, etc.
But I don’t make these updates for the sake of truth, they’re for the sake of useful.
A more convincing evolutionary explanation is useful for my writing, as it gives a better reason for suspending disbelief. Better explanations of certain brain processes (e.g. the memory reconsolidation hypothesis, affective asynchrony, near/far, the somatic marker hypothesis, etc.) are also useful for refining procedural instructions and my explanations for why you have to do something in a particular way for it to work. (e.g., memory reconsolidation studies explain why you need to access a memory to change it—a practical truth I discovered for myself in 2006.)
In a sense, these are less updates to the real model (do X to get Y), and more updates to the story or explanation that surrounds the model. The real model is that “if I act as if these things or something like them are true, and perform these other steps, then these other results reliably occur”.
And that model can’t be updated by somebody else’s experiment. All they can possibly change is the explanation for how I got the results to occur.
Meanwhile, if you’re looking for “the truth”, we don’t have the “real” model of what lies under NLP or hypnosis or LoA or my work, and I expect we won’t have it for at least another decade or two. Reconsolidation has been under study for about a decade now, I believe, likewise the roots of affective asynchrony and the SMH. A few of these are still in the “promising hypothesis, but still needs more support” stage.
But the things they’re trying to describe already exist, whether we have the words yet to describe them or not. And if you have something more important to protect than “truth”, you probably can’t afford to wait another decade or two for the research, any more than you’d wait that long for a reverse engineered circuit diagram before you tried turning on your TV.
By the way, the technique given in my thoughts-into-action video is based on extracting precisely the above notion, and reproducing the effect on a small scale, with a short timeframe, and without resorting to mysticism or “quantum physics”.
IOW, the people who successfully used the technique therein have already experienced an “increased perception of ways to exploit the circumstances (of a messy desk) to meet the goal (of a clean one), and increased motivation to act on those opportunities”.
I didn’t say “nasty”, I said “patronizing”.
If someone tells you that by praying in a particular way anyone can achieve spiritual union with the creator of the universe, and you ask for evidence, it is Not Helpful if they tell you “just try it and see”. (Especially if they add that actually, on past experience, the chances are that if you try it you won’t see because you won’t really be doing it right; and that to do it right you have to suspend your disbelief in what they’re telling you and agree to obey all their instructions. But that’s a separate can of worms.) Because (1) you won’t know for sure whether you really have achieved spiritual union with the creator of the universe (it might just feel that way), and (2) you’ll have discovered scarcely anything about how it works for anyone else. You might be more impressed if they can point to some sort of statistical evidence that shows (say) that people who pray in their preferred way are particularly good at discovering new laws of physics, which they attribute to their intimate connection to the creator of the universe.
More briefly: If someone asks for evidence, then “if you do exactly what I tell you to and suspend disbelief, then you might feel what I say you will” is not answering their question.
I haven’t observed this progressive retreat (it looks more to me like a progressive realisation on your part of what the fussier denizens of LW had wanted all along). But I do have a comment on the last step you described—“the paper has to say it’s about NLP”. For anyone who isn’t a professional psychologist, neurologist, cognitive scientist, or whatever, determining whether (and how far) a paper like Damasio’s supports your claims is a decidedly nontrivial business. (It’s easy to verify that some similar words crop up in somewhat-similar contexts, but that’s not the same.) Whereas, if a paper says “Our findings provide strong confirmation for the wibbling hypothesis of NLP” and what you’re saying is “I accept the wibbling hypothesis as described in NLP texts”, that makes it rather easier to get a handle on how much evidence the research actually gives for your claims.
(In the present case, unfortunately but quite reasonably Google Books only lets me read bits of Damasio’s paper. I have basically no idea to what extent it confirms your underlying model of human cognition, and even less of whether it offers any support for the conclusions you draw from it about how to improve one’s own mind.)
What, because one or two people haven’t found what you’ve said useful, and have said so? That seems a bit extreme.
I think this is a little unfair. Extending the Mormon Wednesday discussion, I didn’t take my church leader’s suggestions to “read the Book of Mormon and pray about it” because, in retrospect, I had an extremely low prior probability that my thoughts could be communicated to a divine being who would respond to them with warm fuzzies.
I don’t think pjeby’s claims that practicing certain mental states/self hypnosis (I’m unclear on exactly what he is advocating) can influence our subconscious are that implausible. That doesn’t mean his theories are right, but they seem plausible that even the weak evidence of self-experimentation might say something about them.
I’m suggesting that priming, suggestion, hypnosis, NLP, placebo effects, creative visualization and a host of other psychological and new-age phenomena are ALL functions of the near/far divide, relying on a single precondition that might be called “suspension of disbelief”.
Or more precisely, refraining from verbal overshadowing—or something that’s suspiciously close to being able to be described that way.
From an evolutionary POV, you might say my hypothesis is that verbal overshadowing actually evolved in a “persuasion arms race”, specifically as an anti-persuasion defense, to prevent others from verbally exploiting our exposed unconscious processes.
IOW, if simple language evolved first, and was hooked directly to the “near” process (because that’s all there was), then it could be exploited by others—we would be “gullible” or “suggestible”. We would then evolve more sophisticated verbal intelligence, both to better exploit others, and to better defend ourselves.
Unfortunately, while this arguably gave rise to “intelligence” and “consciousness” as we know them, it also means that we’re cut off from being able to exploit our own near systems, unless we learn how to shut off the shields long enough to put stuff in (or take stuff out, change it, etc.).
Most self-help material consists of elaborate explanations to convince people to let down the shields by believing that what they say is true. However, in truth it is only necessary to not engage in disbelieving—to not shoot down the incoming data, whether it’s being provided by one’s self, the therapist or hypnotist, or something you read in a book.
However, instead of “truth” as a guide for what you install in the near system, one should use usefulness, since it is entirely possible to believe different things in the two systems without conflict.
I consider the near system to basically be a robot that I program for my own use, so I can feel free to exploit its beliefs based on what results I, the programmer, wish to accomplish. (And NLP offers a nice set of rules that can be used in place of “truth” as a guide for what “robot” beliefs are hygienic, vs. ones likely to lead to malfunction or undesired results.)
(Whee! I’m getting the explanation shorter! Practice, FTW! Too bad this particular explanation leans heavily on prior knowledge of at least priming, near/far, and verbal overshadowing, and lightly on pickup, suspension of disbelief, and the like. So in its bare form, it’s only really useful for a regular LW reader. But an improvement nonetheless.)
Well, I am genuinely appreciative of your attempts to explain, whether they are getting through or not.
Actually, I should be thanking you and the other people I’ve been replying to, because I just realized what pure gold I ended up with. I didn’t actually realize I had an implicit synthesis of the entire self-help field on my hands; in fact, I never consciously synthesized it before. And when I was telling my wife about it this evening, the ramifications of what should be possible under this simplified model hit me like a ton of bricks.
And it was the questions that Vladimir Nesov, gjm, Vladimir Golovin and others asked about the techniques, the model, the self-help field in general, and the similarities, combined with sprocket’s post about “A/B” thinking, that primed me with the right context to put it all together in a tightly integrated way. The refined model makes everything make a whole lot more sense to me—failures and successes alike. (For example, I now have an idea of why certain “affirmation” techniques are likely to work better than others, for some people.)
As soon as I get some rest, I have some things I want to try. Because if this more-unified model is indeed “less wrong” than my previous one, I just “levelled up” in my art. Frackin’ awesome! I think my massive investment of time here is actually going to pay off.
But whether it enables me to do anything new or not, this revision is still a big step forward in simplified communication regarding what I already do. So either way...
Thank you, LWers, I couldn’t have done it without you!
Hmmm… I wish you well, but usually this kind of revelation, when put into writing and left to draw on a shelf for a couple of weeks, reveals itself as much less wonderful than it originally seemed to be. Although usually it’s also a step forward, even if in the direction opposite to where you were walking before.
One might get the opposite impression, but in fact I am too. One reason why I keep whingeing at Philip is that his style of presentation makes it very difficult to tell where he is on the charlatan-to-expert spectrum, and that wouldn’t bother me if I didn’t think there was at least a chance that he’s near the expert end.
No, because the amount of time I’ve spent attempting to communicate these things might have been better spent teaching more people who actually need the information badly enough to jump at the chance to apply it, and whose primary criterion for the quality of the information is whether it helps them.
The only thing that makes it a tossup is that here, I’m forced to search for better and better metaphors, and more compact ways to communicate things… which is good practice/feedback for certain parts of the book I’m currently writing. But given my current inability to quantify the effects of that practice, versus the easily measurable time spent and the equivalent number of words toward a finished book, the tradeoff doesn’t look so good.