Richard Bandler hasn’t demonstrated even a single verifiable, indisputable result with his methods, and he’s been fabricating things like this for decades?
There’s research indicating that the NLP Fast Phobia Cure produces effects, but there’s no research showing that it’s better than other CBT techniques.
I consider Bandler’s basic claims about rapport to be accepted by psychology nowadays in the form of body-language mimicry. As far as I can see, nobody cited Bandler for that, and mainstream psychology developed its ideas about mimicry separately decades later.
The idea that there are eye-accessing cues that are the same in every person, which NLP taught in its early days, has been shown to be false in methodologically weak studies, and it isn’t taught anymore by Bandler or by good modern NLP trainers. You will, however, still find articles on the internet proclaiming the theory to be true as it was claimed in the early days of NLP.
In Bandler’s latest book he mostly talks about strengthening emotions that you want by spinning them in your body and dissociating from negative emotions. I’m not aware of good published research on those mechanisms.
Another significant claim of Bandler’s is that he can cure schizophrenics. I don’t know whether his approach to schizophrenics works, and as far as I know there’s no research investigating it.
his methods don’t lead to his results in a way that matches his predictions?
NLP trainers following Bandler are not in the habit of using language with the goal of saying things that are objectively true; they focus on saying things that they believe will produce positive change in the person they are talking with.
Bandler is not open about what he believes he’s doing when he’s training NLP trainers. Science itself rests on people openly stating what they believe.
the creator of NLP is not qualified to decide whether or not his methods are NLP?
Bandler does tell people at the end of his NLP trainer programs that there’s no such thing as NLP, so the issue of whether he decides whether or not his methods are NLP is not straightforward.
NLP handles epistemological questions very differently. It takes a different approach than mainstream psychology to the question of how you teach a person the skills to be a good therapist.
I’m aware that Sturgeon’s law is in full effect within the NLP community; my questions were specifically about Bandler and his results.
I fail to see how anything you said has an impact on the observation that Andy did not need to return to the mental institution. Unless you dispute at least that single claim, the lack of research is better explained by the hypothesis that the researchers failed to understand the topic well enough to account for enough variables, such as the fact that Bandler almost always teaches NLP in the context of hypnosis.
If whatever Bandler does is producing verifiable results, shouldn’t it be at least an explicit goal of science to find out why it works for him, as opposed to whether it works if you throw an NLP manual at an undergrad? Shouldn’t it be a goal of science to find out how he came up with his techniques, and how to do that better than him?
If whatever Bandler does is producing verifiable results, shouldn’t it be at least an explicit goal of science to find out why it works for him, as opposed to whether it works if you throw an NLP manual at an undergrad?
YES!
Personally, I wouldn’t take Bandler very seriously because of the whole “narcissistic liar” thing and the fact that the one intervention of his I saw was thoroughly lacking in displayed skill (and noteworthy result), but yes, you should look at the experts, not at the undergrads handed a manual designed by the researcher who isn’t an expert himself. It’s much better to study “effectiveness of this expert”, not “effectiveness of this technique”. I’d just rather see someone like Steve Andreas studied.
I know from personal experience that even people with good intentions will strawman the shit out of you if you talk about this kind of thing because there’s so much behind it that they just aren’t gonna get. Ironically enough, Milton Erickson, the guy who Bandler modeled NLP after, allegedly had this exact complaint about NLP (“Bandler and Grinder think they have me in a nut shell, but all they have is a nutshell.” )
Personally, I wouldn’t take Bandler very seriously because of the whole “narcissistic liar” thing and the fact that the one intervention of his I saw was thoroughly lacking in displayed skill (and noteworthy result), but yes, you should look at the experts, not at the undergrads handed a manual designed by the researcher who isn’t an expert himself. It’s much better to study “effectiveness of this expert”, not “effectiveness of this technique”. I’d just rather see someone like Steve Andreas studied.
A while ago I would have agreed; today I’m not sure whether that would go anywhere. I think you need researchers with both scientific skills and actual abilities.
Part of the reason why I respect Danis Bois so much is that after he was successful at teaching bodywork, he went and worked through the proper academic route because he found the spiritual community too dogmatic. He got a real PhD and then a professorship.
For hypnosis it would likely have to be similar: someone who went deep into it, who lives in the mental world of hypnosis and does 90%+ of his day-to-day communication in that mode, but who then feels bad about the unscientific attitude of his community. A person like that who then starts a scientific career might really bring the field forward.
Yeah, I see the distinction you’re getting at and completely agree. I was referring more to showing “hey, this can’t be nonsense since somehow this guy actually gets results even though I have no idea what he’s doing”, which is an important step on its own, even if it’s not scientific evidence behind individual teachable things.
Look at the state of psychology today. They tried to replicate 100 findings. A third checked out, a third nearly checked out, and another third didn’t check out at all.
If you are a psychologist at the moment and are embarrassed as a result, you want to move in a direction where more results replicate. Studying high-performing people like Steve Andreas could very well not help with that goal.
To me, that looks like a slightly different angle on the same thing. If you want to nail down some things so you can say “hey look, we know some things”, then studying high performing people wouldn’t be the way to go. If, on the other hand, you’re pretty okay with saying “hey look, of course we don’t know anything, that’s why we’re still in exploration mode, but look at all this cool shit we’re sifting through!”, then it starts to look a lot more appealing.
It certainly doesn’t surprise me that this kind of research isn’t being done, and I can empathize with that embarrassment and wanting to have something nailed down to show the naysayers. I also find it rather unfortunate. It strikes me as eating the marshmallow. Personally, I’d rather fast for a few days and then drag back a moose.
If, on the other hand, you’re pretty okay with saying “hey look, of course we don’t know anything, that’s why we’re still in exploration mode, but look at all this cool shit we’re sifting through!”, then it starts to look a lot more appealing.
That, actually, depends on whether this cool shit is a stable pattern or just transient noise. Looking at cool-shit noise is fine as an aesthetic experience, but I wouldn’t call it science (or “exploration mode” either).
And, of course, there is the issue of intellectual honesty: saying “we found this weird thing that looks curious” is different from saying “we have conclusively demonstrated a statistically significant at the 0.0X level result”.
Personally, I’d rather fast for a few days and then drag back a moose.
I suspect you’ll go off chasing butterflies and will never get anywhere, if we’re getting into hunter-gatherer metaphors.
I suspect you’ll go off chasing butterflies and will never get anywhere, if we’re getting into hunter-gatherer metaphors.
That’s a very reasonable thing to suspect. It’s a less reasonable thing to take as given, especially considering the size of the prize and the ease of asking a hunter “ever killed anything?”.
LOL. Besides the whole going-meta-on-aesthetics thing, wouldn’t that depend on how cool the shit is?
and the ease of asking a hunter “ever killed anything?”
The hunter will proudly show you his collection of butterflies, all nicely pinned and displayed in proper boxes. Proper boxes are very important, dontcha know?
I have a feeling we have different images in mind. You have a vision of intrepid explorers deep in the jungle, too busy collecting specimens and fighting off piranhas and anacondas to suitably process all they see—the solid scientific work can wait until they return to the lab and can properly sort and classify all they brought back.
I see a medieval guild of piece workers, producing things. Some things are OK, some not really, but you must produce the pieces, otherwise you’ll starve and never make it from apprentice to master. It would be, of course, very nice to craft a masterpiece, but if you can’t, a steady flow of adequate pieces (as determined by your peers, who are not exactly unbiased judges) will be sufficient, and the more the better.
The point is that how “cool” something is is supposed to track the potential value there. In practice it doesn’t always (carbon fiber decals are a thing), but that just means they’re doing it wrong.
The hunter will proudly show you his collection of butterflies, all nicely pinned and displayed in proper boxes. Proper boxes are very important, dontcha know?
I’d find that very strange, but could happen. And if so, you can confirm your suspicion that they weren’t getting anything interesting done. Still seems worth asking to me.
I have a feeling we have different images in mind.
It seems like you see me as implicitly asking “why do you guys keep making pieces instead of going on an adventure!?!?!” and answering with “you see epic adventure, but what they see is the necessity of making their pieces. If they didn’t have to get their pieces made, and if there actually was epic adventure to have, of course they’d do that instead. It’s that they don’t agree with you”.
I agree. That’s why they do what they do - ’twas never a mystery to me. I see room for that and for epic and lucrative anaconda-fighting adventures. Or for fools chasing that fantasy and running off into the jungle to starve. Or all three and more.
I have a couple points here even before getting into what happens when you quit and seek adventure.
1) “you must produce the pieces”. Really? You sure sure? What number do you put on that confidence? How do you think you know?
Often people get caught up running from what seems like a “must” only for it to turn out to be not mission critical. Literal hunger makes for a perfect example. When people fast for a few days for the first time, it often really changes the way they think about the hunger signal. It’s no longer “You must eat” and instead becomes more of just a suggestion.
2) “I’m not convinced adventuring is worth it”. Of course not. You haven’t done your research.
And from your mindset—if you really must produce the pieces, then you didn’t need to do that research. If I offer you a chance of a million dollars or a sure $500, but the mob is gonna kill you if you don’t pay off your $500 debt, there’s little point in asking what the chance is if you already know it isn’t “all but guaranteed”.
However, even if it’s only a 15% chance, you’re losing out on an expected $149,500. If there’s any chance that 1) not producing the pieces isn’t an immediate game ender or 2) it’s not completely impossible to sell your chance for much more than $500, then you should probably at least ask what your chance of winning the million is before settling for $500.
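A minimal sketch of that expected-value arithmetic, with the 15% chance treated purely as the illustrative assumption it is in the example above:

```python
# Illustrative numbers from the example above; the 15% chance is an assumption, not a measured figure.
p_win = 0.15
prize = 1_000_000
sure_thing = 500

ev_gamble = p_win * prize          # 0.15 * 1,000,000 = 150,000
print(ev_gamble - sure_thing)      # 149500.0: the expected amount given up by settling for the sure $500
```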
So what I see is not “an adventure that is sure to pay off in excess and yeah it might be uncomfortable, but it’s not like there’s any real downside so don’t be stupid”, but rather “these people aren’t being careful to consider their confidence levels when it’s crucial, and so they are going to end up stuck as pieceworkers even if there’s a way to have much much more”.
The point is that how “cool” something is is supposed to track the potential value there
Nope. How useful something is is supposed to track the potential value. If I were to go meta, I’d say that “cool” implies a particular kind of signaling to a specific social sub-group. There isn’t much “potential value” other than the value of the signal itself.
It seems like you see me as implicitly asking “why do you guys keep making pieces instead of going on an adventure!?!?!”
Still nope. Most people don’t want to go on a real adventure—it’s too risky, dangerous, uncomfortable. Most people—by far—prefer the predictable job of producing the pieces so that they can pay the mortgage on their suburban McMansion. In the case of academia, going for broke usually results in your being broke (and tenure-less) while a steady production of published papers gives you quite good chances of remaining in academia. Maybe not in the Ivies, but surely there is a college in South Dakota that wants you as a professor :-/
“you must produce the pieces”. Really?
If you want tenure, yes. If you don’t want tenure, you can do whatever you want.
then you should probably at least ask what your chance of winning the million is before settling for $500.
Sure. The answer is a shrug and if you want a verbalization, it will go along the lines of “Nobody knows”.
so they are going to end up stuck as pieceworkers even if there’s a way to have much much more”
There is no way for all of them to “have much much more”. Whether you think the trade-off is acceptable depends, among other things, on your risk tolerance, but in any case the mode—the most likely outcome—is still that you lose.
To be clear, I do see the whole “intrepid explorers” thing pretty much exactly how you said it. I went that way myself and I’m super glad I did. It has been fun and has had a large payoff for me.
At the same time though, I realize that this is not how everyone sees it. I realize that a lot of the payoffs I’ve gotten can be interpreted other ways or not believed. I realize that other people want other things. I realize that I am in a sense lucky to not only get anything out of it, but to even be able to afford trying. And I realize why many people wouldn’t even consider the possibility.
Given that, it’d be pretty stupid to run around saying “drop what you’re doing and go on an adventure!” (or anything like it) as if it weren’t the case that, from their perspective, not only is “adventure” almost certainly going to lead nowhere, but they must make the pieces. As if “adventure” actually is a good idea for them—for most people, all things considered, it probably isn’t.
My point is entirely on the meta level. It’s not even about this topic in particular. I frequently see people rounding “this is impossible within my current models” to “this is impossible”. Pointing this out is rarely a “woah!” moment for people, because people generally realize that they could be wrong and at some point you have to act on your models. If you’ve looked and don’t see any errors it doesn’t mean none exist, but knowing that errors might exist doesn’t exactly tell you where to look or what to do differently.
What I think people don’t realize is how important it is to think through how you’re making that decision—and what actually determines whether they round something off to impossible or not. I don’t think people take seriously the idea that taking negligible in-model probabilities seriously will pay off on net—since they’ve never seen it happen and it seems like a negligible probability thing.
And who knows, maybe it won’t pay off for them. Maybe I’m an outlier here too and even if people went through the same mental motions as me it’d be a waste. Personally though, I’ve noticed that not always but often enough those things that feel “impossible” aren’t. I find that if I look hard enough, I often find holes in my “proof of impossibility” and occasionally I’ll even find a way to exploit those holes and pull it off. And I see them all the time in other people—people being wrong where they don’t even track the possibility that they’re wrong and therefore there is no direct path to pointing out their error because they’ll round my message to something that can exist in their worldview. I have other things to say about what’s going on here that makes me really doubt they’re right here, but I think this is sufficient for now.
Given that, I am very hesitant to round p=epsilon down to p=0, and if the stakes are potentially high I make damn sure that my low probability is stable upon more reflection and assumption questioning. I won’t always find any holes in my “proof”, nor will I always succeed if I do. Nor will I always try, of course. But the motions of consciously tracking the stakes involved and the value of an accurate estimate have been very worthwhile for me.
The point I’m making is in the abstract, but one that I see as applying very strongly here. Given that this is one of the examples that seems to have paid off for me, it’d take something pretty interesting (and dare I say “cool”?) to convince me that it was never worth even taking the decision seriously :)
Yes, I agree that people sometimes construct a box for themselves and then become terribly fearful of stepping outside this box (=”this is impossible”). This does lead to them either not considering at all the out-of-the-box options or assigning, um, unreasonable probabilities to what might happen once you step out.
The problem, I feel, is that there is no generally-useful advice that can be given. Sometimes your box is genuinely constricting and you’d do much better by getting out. But sometimes the box is really the best place (at least at the moment) and getting out just means you become lunch. Or you wander in the desert hoping for a vision but getting a heatstroke instead.
You say
I don’t think people take seriously the idea that taking negligible in-model probabilities seriously will pay off on net
but, well, should they? My “in-model probabilities” tell me that I’m not going to become rich by playing the lottery. Should I take the lottery idea seriously? Negligible probabilities are often (but not always) negligible for a good reason.
Given that, I am very hesitant to round p=epsilon down to p=0
Sure. But things have costs. If the costs (in time, effort, money, opportunity) are high enough, you don’t care whether it’s epsilon or a true zero, the proposal fails the cost-benefit test anyway.
Yes. From the inside it can be very tough to tell, but from the outside they’re clearly wrong about them all being low probability. They don’t check for potential problems with the model before trusting it without reservation, and that causes them to be wrong a lot. Even if your “might as well be 100%” is actually 97%, which is extremely generous, you’ll be wrong about these things on a regular basis. It’s a separate question of what—if anything—to do about it, but I’m not going to declare that I know there’s nothing for me to do about it until I’m equally sure of that.
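A rough back-of-the-envelope illustration of why even 97% per-call accuracy means being wrong regularly, assuming (simplistically) that the calls are independent:

```python
# Assumes independent judgments; purely an illustration of how errors accumulate across many calls.
per_call_accuracy = 0.97
for n in (10, 50, 100):
    p_at_least_one_error = 1 - per_call_accuracy ** n
    print(f"{n} calls: P(at least one error) is about {p_at_least_one_error:.2f}")
# 10 calls: ~0.26, 50 calls: ~0.78, 100 calls: ~0.95
```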
I think one of the real big things that makes the answer feel like “no” is that even if you learn you’re wrong, if you can’t learn how you’re wrong and in which direction to update even after thinking about it, then there’s no real point in thinking about it. If you can’t figure it out (or figure out that you can trust that you’ve figured it out) even when it’s pointed out to you, then there’s less point in listening. I think “I don’t see what I can do here that would be helpful” often gets conflated with “it can’t happen”, and that’s a mistake. The proper way to handle those doesn’t involve actively calling them “zero”. It involves calling them “not worth thinking about” and the like. There is nothing to be gained by writing false confidences in mental stone and much to be lost.
My “in-model probabilities” tell me that I’m not going to become rich by playing the lottery. Should I take the lottery idea seriously? Negligible probabilities are often (but not always) negligible for a good reason.
Right. With the lottery, you have more than a vague intuitive “very low odds” of winning. You have a model that precisely describes the probability of winning and you have a vague intuitive but well backed “practically certain” odds of your model being correct. If I were to ask “how do you know that your odds are negligible?” you’d have an answer because you’ve already been there. If I were to ask you “well how do you know that your model of how the lottery works is right?” you could answer that too because you’ve been there too. You know how you know how the lottery works. Winning the lottery may be a very big win, but the expected value of thinking about it further is still very low because you have detailed models and metamodels that put firm bounds on things.
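For contrast, the kind of firm bound the lottery model gives you; the odds and prize below are made-up round numbers, not any real lottery’s:

```python
# Made-up round numbers; not any real lottery's odds or prizes.
ticket_price = 2.0
jackpot = 100_000_000
p_jackpot = 1 / 300_000_000            # the "precise model" of the odds

expected_value = p_jackpot * jackpot   # ~0.33 per ticket
print(expected_value - ticket_price)   # ~ -1.67: firmly negative, so no need to think about it harder
```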
At the end of the day, I’m completely comfortable saying “it is possible that it would be a very costly mistake to not think harder about whether winning the lottery might be doable or how I’d go about doing it if it were AND I’m not going to think about it harder because I have better things to do”.
If I were gifted a lotto ticket and traded it for a burrito, I’d feel like it was a good trade. Even if the lottery ticket ended up winning the jackpot, I could stand there and say “I was right to trade that winning lotto ticket for a burrito” and not feel bad about it. It’d be a bit of a shock and I’d have to go back and make sure that I didn’t err, but ultimately I wouldn’t have any regrets.
If, say, it was given to me as a “lucky ticket” with a wink and a nod by some mob boss whose life I’d recently saved… and I traded it for a freaking burrito because “it’s probably 1 in 100 million, and 1 in 100 million isn’t worth taking seriously”… I’d be kicking myself real hard for not taking a moment to question the “probably” when I learned that I traded a winning ticket for a burrito.
And all those times the ticket from the mob boss didn’t win (or I didn’t realize it won because I traded it for a burrito) would still be tremendous mistakes. Just invisible mistakes if I don’t stop to think and it doesn’t happen to whack me in the head. The idea of making mistakes, not realizing, and then using that lack of realization as further evidence that I’m not overconfident is a trap I don’t want to fall into.
My brief attempt at “general advice” is to make sure you actually think it through and would be not just willing to but comfortable eating the loss if you’re wrong. If you’re not, there’s your little hint that maybe you’re ignoring something important.
When I point people to these considerations (“you say you’re sure, so you’d be comfortable eating that loss if it turns out not to be the case?”), the vast majority of the time, once they stop deflecting and give a firm “yes” or “no”, the answer is “no”—and they rethink things. There are all sorts of caveats here, but the main point stands—when it’s important, most people conclude they’re sure without actually checking to their own standards.
That’s just not making bad decisions relative to your own best models/metamodels—you can still make bad decisions by more objective standards. This can’t save you from that, but what it can do is make sure your errors stand out and don’t get dismissed prematurely. In the process of coming to say “yes, and I can eat the loss if I’m wrong” you end up figuring out what kinds of things you don’t expect to see and committing to the fact that your model predicts they shouldn’t happen. This both makes it a lot easier to notice that your model is wrong and makes it harder to let yourself get away with pretending it isn’t.
From the inside it can be very tough to tell, but from the outside they’re clearly wrong about them all being low probability.
I don’t know about that. That clearly depends on the situation—and while you probably have something in mind where this is true, I am not sure this is true in the general case. I am also not sure how you would recognize this type of situation without going circular or starting to mumble about Scotsmen.
if you learn you’re wrong, if you can’t learn how you’re wrong and in which direction to update even after thinking about it
What do you mean, can you give some examples? Normally, if people locked themselves in a box of their own making, they can learn that the box is not really there.
The idea of making mistakes, not realizing, and then using that lack of realization as further evidence that I’m not overconfident is a trap I don’t want to fall into.
That’s a good point—I agree that if you don’t realize what opportunity costs you are incurring, your cost-benefit analysis might be wildly out of whack. But again, the issue is how do you reliably distinguish ex ante where you need to examine things very carefully and where you do not have to do this. I expect this distinguishing to be difficult.
“Actually thinking it through” is all well and good, but it basically boils down to “don’t be stupid” and while that’s excellent advice, it’s not terribly specific. And “can you eat the loss?” is not helping much. For example, let’s say one option is me going to China and doing a start-up there. My “internal model” says this is a stupid idea and I will fail badly. But the “loss” is not becoming a multimillionaire—can I eat that? Well, on the one hand I can, of course, otherwise I wouldn’t have a choice. On the other hand, would I be comfortable not becoming a multimillionaire? Um, let’s say I would much prefer to become one :-) So should I spend sleepless nights contemplating moving to China?
I don’t know about that. That clearly depends on the situation—and while you probably have something in mind where this is true, I am not sure this is true in the general case. I am also not sure how you would recognize this type of situation without going circular or starting to mumble about Scotsmen.
I mean the whole group of things that any given person decides or would decide is “low probability”. I see plenty of “p=0” cases turning out to be true, which is plenty to show that the group “p=0” as a whole is overconfident—I’m not trying to narrow it down to a group where they’re probably wrong, just overconfident.
What do you mean, can you give some examples? Normally, if people locked themselves in a box of their own making, they can learn that the box is not really there.
It’s not that they can’t learn that the box isn’t really there, it’s that even if they know it’s not there they don’t know how to climb out of it.
There are a lot of things I know I might be wrong about (and care about) that I don’t look into further. It’s not that I think it’s unlikely that there’s anything for me to find, but that it’s unlikely for me to find it in the next unit of effort. Even if someone is working with an obviously broken model with no attempts to better their model, it doesn’t necessarily mean they haven’t seriously considered the possibility that they’re wrong. It might just mean that they don’t know in which direction to update and are stuck working with a shitty model.
Some things are like saying “check your shoelaces”. Others are like saying “check your shoelaces” to a kid too young to know how to tie his own shoes.
“Actually thinking it through” is all well and good, but it basically boils down to “don’t be stupid” and while that’s excellent advice, it’s not terribly specific.
Heh. Yes, it is difficult and I expect that just comes with the territory. And yes, it kinda sorta just boils down to “don’t be stupid”. The funny thing is that when dealing with people who know me (and therefore get the affection and intent behind it) “don’t be stupid” is often advice I give, and it gets the intended results. The specificity of “you’re doing something stupid right now” is often enough.
And “can you eat the loss?” is not helping much. For example, let’s say one option is me going to China and doing a start-up there. My “internal model” says this is a stupid idea and I will fail badly. But the “loss” is not becoming a multimillionaire—can I eat that? Well, on the one hand I can, of course, otherwise I wouldn’t have a choice. On the other hand, would I be comfortable not becoming a multimillionaire? Um, let’s say I would much prefer to become one :-) So should I spend sleepless nights contemplating moving to China?
I’d much prefer to be a multimillionaire too, yet I’m comfortable with choosing not to pursue a startup in China because I am sufficiently confident that it is not the best thing for me to pursue right now—and I’m sufficiently confident that I wouldn’t change my mind if I looked into it a little further. It’s not that I don’t care about millions of dollars, it’s that when multiplied by the intuitive chance that thinking one step further will lead to me having it, it rounds down to an acceptable loss.
If, on the other hand, when you look at it you hear this little voice that says “Eek! Millions of dollars is a lot! How do I know that I shouldn’t be pursuing a China startup!?”, then yes, I’d say you should think about it (or how you make those kinds of decisions) until you’re comfortable eating that potential loss instead of living your life by pushing it away.
You say “don’t be stupid” as if it’s something that we’re beyond as a general rule. I see it as something that takes a whole lot of thought to figure out how not to be stupid this way. Once I started paying attention to these signs of incongruity, I started recognizing it everywhere. Even in places that used to be or still are outside my “box”.
Science itself is about the search for knowledge and not about sifting through cool shit. I also consider it okay that our society has academic psychologists who attempt to build reliable knowledge.
I think it’s worthwhile to have different communities of people pursuing different strategies of knowledge generation.
I fail to see how anything you said has an impact on the observation that Andy did not need to return to the mental institution.
Given the current scientific framework you don’t change a theory based on anecdotal evidence and single case studies. Especially when it comes to a person who’s known to be at least partly lying about the anecdotes he tells.
If whatever Bandler does is producing verifiable results, shouldn’t it be at least an explicit goal of science to find out why it works for him, as opposed to whether it works if you throw an NLP manual at an undergrad?
What do you mean by the phrase “explicit goal of science”? The goals that grant-funding agencies set when they give out grants?
To the extent that you think studying people with high abilities is a good approach to advancing science, I wouldn’t pick a person who’s in the habit of lying and showmanship but a person who values epistemically true beliefs and who’s open about what they think they are doing.
I think the term pseudoscience doesn’t really apply to Bandler. For me the term means a person who’s pretending to play by the rules of science but who doesn’t. Bandler isn’t playing by the rules or pretending to do so.
That doesn’t mean that he’s wrong or that what he teaches isn’t effective, but at the same time it doesn’t bring his work into science.
It’s typical for New Atheists to reject everything that’s not part of the scientific mosaic as useless, discredited pseudoscience. I don’t think that’s a useful way of looking at how the world works. If you want to go further into that direction of thought, a nice talk was recently shared on the Facebook LW group:
Scientific Pluralism and the Mission of History and Philosophy of Science
For full disclosure, I do have a decent amount of NLP training with Chris Mulzer, who attended Bandler’s trainer training program every year for a decade. I know multiple people who attended seminars with Bandler.
Given the current scientific framework you don’t change a theory based on anecdotal evidence and single case studies.
Oh, I see the problem now. You’re waiting for research to allow you to decide to do the research you’re waiting for. When the scientific framework tells you there isn’t enough research to reach a conclusion, doesn’t it also tell you to do more research? Picking a research topic should not be as rigorous a process as the research itself.
Even if all the anecdotal and single case studies are false, shouldn’t you at least be interested in why so many people believe in it? NLP is not a religion, you pick it up as an adult. Even if the entire NLP/hypnosis/seduction/whatever industry is just a giant crackpot convention, they still demonstrate enough persuasion techniques to convince people it’s real. Shouldn’t you be swarming over that with the idea of eliminating your suicide rate?
I have more formal credentials with NLP than with academic psychology.
Even if the entire NLP/hypnosis/seduction/whatever industry is just a giant crackpot convention
I have multiple friends who make their living in that industry. One of my best friends worked for a while as a salesperson for Bandler’s seminars. I don’t have as many friends with degrees in academic psychology.
I just understand both sides well enough to tell you about the situation we have at the moment.
Studying high-performing people like Steve Andreas could very well not help with that goal.
Right.
Looking at cool-shit noise is fine as an aesthetic experience, but I wouldn’t call it science (or “exploration mode” either).
That’s a terrible aesthetic experience. Your sense of aesthetics is supposed to do something
From here it looks like you aren’t addressing what I’m actually saying and instead are responding to arguments you think I must be trying to get at.
Are you sure you’re being sufficiently careful and charitable in your reading of my comments?
Sufficiently? X-D Clearly not.
Heh, okay. I’ll try again from another angle.
Science itself is about the search for knowledge and not about sifting through cool shit.
I don’t disagree with any of the statements you made, and none of them are required to be false for my point to be valid.
I’m kinda getting the impression that you aren’t being very careful or charitable in your reading of my comments. Is that impression wrong?
I don’t think the point of a post is to show that another person is wrong, or to say only things that the person I’m responding to is likely to disagree with.
Shouldn’t you be swarming over that with the idea of eliminating your suicide rate?
What do you mean when you say “you”?