My own improvements in nonreactivity and interest in others have been largely due to mindhacking; e.g. removing triggers that caused me to fear rejection of particular types, removing negative “ideals” that triggered judgment of self or others, deleting conditioned appetites driving approval-seeking and status-seeking behaviors, etc. (A blog post I wrote on dirtsimple.org in May mentions in passing a bit about how removing a negative ideal/compulsion to be “good” created a mini-renaissance in my marriage, for example.)
I’ve posted here a lot about ideal-belief-reality conflicts (Robert Fritz’s term) which are a primary driver of hypocritical behaviors—e.g., being obsessed with the future of “humanity” while not being able to be nice to individual humans you disagree with. This is precisely the sort of signaling that will label you as a “jerk” to others. The fewer IBRCs you have, the less often you’ll make judgments of others that trigger automatic contempt signals from your body.
(If you feel contempt for someone in a real-time social situation, trust me, other people are noticing, and judging you accordingly. The only real fix is to make it so you don’t have the contempt in the first place.)
This is my problem. I look great on paper, but as soon as you get me in a social situation I can’t hide the fact that I feel contempt for the vast majority of the people around me (including desirable partners). I think the problem is that I connect behaviors I see in those around me to larger-scale social problems, even though this is stupid overgeneralizing. Is there anything you could point to? Your awareness of it makes me think you’ve dealt with it yourself.
Learn humility. When you think about how superior you are to others, challenge that idea: think of ways in which you are not superior. Perhaps more important, remind yourself that your superiority is partially determined by luck. Practice, practice, practice.
Learn confidence. This may or may not be true in your case, but people often feel contempt towards people that they worry may judge them harshly. If you are confident enough not to be threatened by their judgment, then you can act more wisely and learn to manipulate the interactions.
#2 is very interesting and something that hadn’t occurred to me before. It is a reciprocal relationship. I judge others too harshly and in turn (since I generalize others from the example of myself) worry that I will be judged too harshly.
thank you very much!
This is my problem. I look great on paper, but as soon as you get me in a social situation I can’t hide the fact that I feel contempt for the vast majority of the people around me (including desirable partners). … Is there anything you could point to?
One of the easiest methods to get started with is The Work of Byron Katie. The critical key to making it work, though, that is not emphasized anywhere near enough in her books (or anyone else’s, for that matter), is that you need a mental state of genuine curiosity and “wondering” when you ask the questions. It simply will not work if you use the questions as a way of arguing with yourself or beating yourself up, or if you simply recite them in a rote fashion, like magic words.
The magic isn’t in the words, it’s in the “search queries” you’re running on your mind. As with all mind hacking, the purpose is to create new connections between existing memories and concepts in order to update your “map”. So, it will also not work if you try to consciously reason out the answers; you want to keep silent verbally, so you can notice when System 1 answers, without being verbally overshadowed by System 2.
I don’t use the Work that much on IBRCs and judgment issues, myself. (I have techniques that generalize better to entire groups or classes of people, rather than just focusing on individual people.) But The Work is good practice for the basic skill underlying all mind hacking: asking System 1 a question, then shutting up system 2 and waiting for an answer. And it’ll definitely give you a taste of what it feels like to drop a “should” or judgment about a person, and how it changes your felt-responses to them.
The reference you recommend seems to advocate changing one’s attitude by engaging in a sequence of biases.
First, one is supposed to construct a strawman of their reasons for not liking someone or something:
I invite you to be judgmental, harsh, childish, and petty. Write with the
spontaneity of a child who is sad, angry, confused, or frightened.
Rather than seeking out one’s true objection, one should express their dislike in terms of their pettiest reasons, and identify with that expression. And one should “Simply pick a person or situation and write, using short, simple sentences”, discouraging deep explanation, which in turn discourages deep understanding. An important filter is bypassed, allowing the bad reasons to mix with the good. The question “What is it that they should or shouldn’t do, be, think, or feel?” in the context of asking one’s opinions is a setup to appear to commit the Mind Projection Fallacy. Priming someone to say “X should” when they mean “I want X to” so you can later say “In reality, there is no such thing as a ‘should’ or a ‘shouldn’t.’” is a sneaky debating trick.
And then, the strawman is subjected to unreasonable standards:
Can you absolutely know that it’s true?
Of course one cannot absolutely know that it’s true; one should not assign probability 1 to anything. Does one have a large accumulation of evidence that causes one to have high confidence that it’s true? That seems like a more reasonable question, which one of course should apply to one’s true objection.
Then the question is asked:
How do you react, what happens, when you believe that thought?
which would be fine if it were setting up to ask, “Is that reaction constructive? Are there more constructive ways you could react?”. But instead, the follow-up is:
Who would you be without the thought?
And finally, there is the “Turn it around” concept. Now, holding oneself to the same standards one expects of others is good, but a big problem comes from asking one to “find three genuine, specific examples of how the turnaround is true in your life”. This is advocating the Confirmation Bias. One is encouraged to find supporting evidence for the turnaround, but not contradicting evidence. If I have a problem with someone for being a chronic liar, it does not make sense for me to think it is OK because I can recall three times she told the truth, or three times I told a lie. What does make sense is to notice her unusually high proportion of lies to honest statements, and to not believe what she tells me without corroboration, and maybe even associate instead with others who reliably give me truthful information.
If this is the sort of mind hack you advocate, it is no wonder that people express skepticism instead of trying it. After all, our sister site is not called “Embracing Bias”.
The reference you recommend seems to advocate changing one’s attitude by engaging in a sequence of biases.
It’s engaging in System 1 thinking, which of course has a different set of biases than System 2 thinking. The object is to activate the relevant System 1 biases, and then update the information stored there.
one should express their dislike in terms of their pettiest reasons, and identify with that expression.
Absolutely. How else would you expect to reconsolidate the memory trace, without first activating it?
Rather than seeking out one’s true objection, …
You mean your System 2 explanation whose function is to make your System 1 bias appear more righteous or socially acceptable. That “true objection”?
And one should “Simply pick a person or situation and write, using short, simple sentences”, discouraging deep explanation, which in turn discourages deep understanding. An important filter is bypassed, allowing the bad reasons to mix with the good.
Precisely. We don’t want System 2 to verbally overshadow the irrational basis for your reactions, by filtering them out and replacing them with good-sounding explanations.
The question “What is it that they should or shouldn’t do, be, think, or feel?” in the context of asking one’s opinions is a setup to appear to commit the Mind Projection Fallacy.
Actually, it’s an attempt to identify what conditioned standard or ideal you believe the person is violating, creating your irrational reaction.
Priming someone to say “X should” when they mean “I want X to” so you can later say “In reality, there is no such thing as a ‘should’ or a ‘shouldn’t.’” is a sneaky debating trick.
Of course it’s a debating trick. If fair, logical reasoning worked on System 1, there’d be no need for mindhacking, would there?
Of course one cannot absolutely know that it’s true; one should not assign probability 1 to anything.
And you are discussing this with System 2 reasoning—i.e., abstract reasoning. When you ask yourself this question about a specific thing, e.g., “can I absolutely know it’s true that Tom should listen to me?”, it is a request to query System 1 for your implicit epistemology on that particular topic. That is, how would you know if it were true? What if it weren’t? How would you know that? In the process, this retrieves relevant memories, making them available for reconsolidation.
You are confusing a concrete system 1 practice with abstract system 2 reasoning. Again, if the two were the same, we would have no need for mindhacking, and the Dark Arts could not exist.
(That being said, I’ve honestly never found this particular question that useful, compared to questions 1, 3, and 4.)
Does one have a large accumulation of evidence that causes one to have high confidence that it’s true? That seems like a more reasonable question
Indeed. However, if you were to translate that to a System 1 question, it’d be more like, “How do I know that it’s true?”. That is, something closer to a simple query for sensory data, than a question calling for abstract judgment. (I’ve actually used this question.)
which one of course should apply to one’s true objection.
One’s “true objection” is of course in most cases an irrational, childish thing. If not, one would likely not be experiencing a problem or feelings that cause one to want to engage in this process in the first place.
Then the question is asked: How do you react, what happens, when you believe that thought? which would be fine if it were setting up to ask, “Is that reaction constructive? Are there more constructive ways you could react?”. But instead, the follow-up is: Who would you be without the thought?
Again, we need to distinguish System 1 and 2 thinking. “Is that reaction constructive?” and “Are there more constructive ways you could react?” are abstract questions that lead to a literal answer of “yes”… not to memory reconsolidation.
“Who would you be without that thought?” is a presuppositional query that invites you to imagine (on a sensory, System 1 level) what you would be like if you didn’t believe what you believe. This is a sneaky trick to induce memory reconsolidation, linking an imagined, more positive reaction to the point in your memory where the existing decision path was.
This question, in other words, is a really good mind hack.
Mind hacking questions are not asked to get answers, they are questions with side-effects.
Hm, who would I be if it didn’t bother me to have my face burned. Probably the sort of person who doesn’t avoid being touched in the face by hot pokers.
You are equating physical and emotional pain; the Work is a process for getting rid of emotional pain created by moral judgments stored in System 1, not logical judgments arrived at by System 2.
And finally, there is the “Turn it around” concept. Now, holding oneself to the same standards one expects of others is good, but a big problem comes from asking one to “find three genuine, specific examples of how the turnaround is true in your life”. This is advocating the Confirmation Bias. One is encouraged to find supporting evidence for the turnaround, but not contradicting evidence.
On the contrary, it is countering confirmation bias. Whatever belief you are modifying has been keeping you from noticing those counterexamples previously. Notice, btw, that Katie advises not doing the turnarounds until after the existing belief has been updated: this is because when you firmly believe something, you react negatively to the suggestion of looking for counterexamples, and tend to assume you’ve done a good job of looking for them, even though you haven’t.
So instead, the first two questions are directed at surfacing your real (sensory, System 1) evidence for the belief, so that you can then update with various specific classes of counterexample. Questions 3 and 4, for example, associate pain to the belief, and pleasure to the condition of being without it, providing a counterexample in that dimension. The turnaround searches provide hypocrisy-puncturing evidence that you are not really acting to the same standards you hold others to, and that your expectations are unrealistic, thus providing another kind of counterexample.
If I have a problem with someone for being a chronic liar, it does not make sense for me to think it is OK because I can recall three times she told the truth, or three times I told a lie.
You will not arrive at useful information about the process by discussing it in the abstract. Pick a specific situation and belief, and actually try it.
What does make sense is to notice her unusually high proportion of lies to honest statements, and to not believe what she tells me without corroboration, and maybe even associate instead with others who reliably give me truthful information.
Sure. And if you can do that without an emotional reaction that clouds your judgment or makes you send off unwanted signals, great! The Work is a process for getting rid of System 1 reactions, not a way of replacing System 2 reasoning.
If this is the sort of mind hack you advocate, it is no wonder that people express skepticism instead of trying it. After all, our sister site is not called “Embracing Bias”.
Mind hacking is working on System 1 to obtain behavioral change, not engaging in System 2 reasoning to result in “truth”.
That’s because, when your System 2 tries to reason about your behavior, it usually verbally overshadows System 1, and ends up confabulating… which is why pure System 2 reasoning is absolutely atrocious at changing problematic behaviors and emotions.
(Edit to add: Btw, I don’t consider The Work to be a particularly good form of mindhacking. IMO, it doesn’t emphasize testing enough, doesn’t address S1/S2 well, and has a rather idiosyncratic set of questions. I personally use a much wider range of questions and sequences of questions to accomplish different things, and last, but far from least, I don’t unquestioningly endorse all of Katie’s philosophy. Nonetheless, I recommend the Work to people because, performed properly, it works on certain classes of things, and can be a gentle introduction to the subject. Another good book is “Re-Create Your Life” by Morty Lefkoe, which provides a different evidence-based reconsolidation process, but The Work has the advantage of having a free online introduction.)
IAWYC, but you didn’t need to quote and refute every sentence to get the point across about System 1 and System 2 and our real vs. signaled reasons for affective reactions. It’s a question of style, not content, but I think you’d communicate your ideas much more effectively to me and to others here at LW if you focused on being concise.
You mean your System 2 explanation whose function is to make your System 1 bias appear more righteous or socially acceptable. That “true objection”?
No, I mean if, for example, it bothers you that your roommate never comes through with his share of the rent, you would not want to focus on how you get annoyed by his stupid shrug (which you probably only find annoying and stupid because you associate it with him and his flakiness).
The question “What is it that they should or shouldn’t do, be, think, or feel?” in the context of asking one’s opinions is a setup to appear to commit the Mind Projection Fallacy.
Actually, it’s an attempt to identify what conditioned standard or ideal you believe the person is violating, creating your irrational reaction.
Priming someone to say “X should” when they mean “I want X to” so you can later say “In reality, there is no such thing as a ‘should’ or a ‘shouldn’t.’” is a sneaky debating trick.
Of course it’s a debating trick. If fair, logical reasoning worked on System 1, there’d be no need for mindhacking, would there?
You seem to have gone from missing my point to agreeing with it. If you understood why I said it is a debating trick, why would you argue that it is something else instead?
On the contrary, it is countering confirmation bias.
If it were countering the confirmation bias, it would ask something like, “Consider the last ten times X had an opportunity to do Y. How many of those times did X actually do Y?” (If the answer is ten, it seems probable that this method has the wrong goal.) And even with that, you have to be careful not to overdo it. You would not want to excuse someone punching you in the face when you meet, just because he only actually does it one time in twenty. But asking for three examples ever of not doing the wrong thing, and expecting people to change their minds based on that, is encouraging irrationality. That just is not enough evidence.
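The counting heuristic proposed here can be made concrete. Below is a minimal sketch of tallying how often a behavior actually occurred out of the observed opportunities, rather than hunting for a fixed number of confirming examples; the function name, the example data, and the scenario are illustrative assumptions, not anything from the original comments.

```python
def observed_rate(outcomes):
    """Fraction of opportunities on which the behavior actually occurred.

    `outcomes` is a list of booleans, one entry per opportunity:
    True if X did Y on that occasion, False otherwise.
    """
    if not outcomes:
        raise ValueError("no opportunities observed")
    return sum(outcomes) / len(outcomes)

# Hypothetical data: ten opportunities, behavior occurred twice.
opportunities = [False, True, False, False, False,
                 True, False, False, False, False]
rate = observed_rate(opportunities)

# The symmetric point of the heuristic: neither excuse a low rate
# because a few positive examples come to mind, nor condemn on a
# single lapse -- judge on the observed proportion.
print(f"did Y on {rate:.0%} of opportunities")
```

This is just base-rate bookkeeping, but it captures the commenter's point: three recalled examples carry very different weight depending on how many opportunities they are drawn from.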
If I have a problem with someone for being a chronic liar, it does not make sense for me to think it is OK because I can recall three times she told the truth, or three times I told a lie.
You will not arrive at useful information about the process by discussing it in the abstract. Pick a specific situation and belief, and actually try it.
It seems I wasn’t clear. My problem with the method in this case is not that it wouldn’t work. My problem is that it might work, and I would lose my ability to protect myself against those who would manipulate my behavior through lies.
You seem pretty much in agreement with my impression that the method is to engage in a series of biases, but you seem to actually think this is a good thing, because somehow these biases will exactly cancel out other biases the person already has. I see no reason to expect this precise balance from the method. I expect someone who uses this method to forgive people they should not forgive (that is, the forgiveness is not in their interest), and will be easy to take advantage of. After all, they do not have XML tags that say “Should not be taken advantage of”, and they could imagine not being bothered by it.
Contrast this with dedicated rationality, confronting your biases head on, acknowledging the real extent of things that bother you, neither exaggerating nor down playing. You would actually recognize the difference between an abusive relationship and not always getting your way.
No, I mean if, for example, it bothers you that your roommate never comes through with his share of the rent, you would not want to focus on how you get annoyed by his stupid shrug (which you probably only find annoying and stupid because you associate it with him and his flakiness).
But the purpose, if you were doing The Work, would be to focus on that stupid shrug and his “flakiness”, precisely so that you can drop them from consideration. Presumably (I assume this is a hypothetical situation), you would be having a judgment like “he shouldn’t be flaky” or “he should be responsible”, or some other character judgment based on his behavior. The point of the Work is not to do rational computations, it’s to drop emotional attachments from the system whose job it is to make you reinforce your tribe’s value system through emotional displays.
Once you’ve dropped whatever “irrational” stuff you have going on, the reasoning about what to practically do gets a LOT easier. As often as not, the first thing that happens upon thinking about the problem afterward is that an obviously sensible solution pops into your head, that you feel you can actually execute—like, say, calmly starting the search for a new roommate.
If it were countering the confirmation bias, it would ask something like, “Consider the last ten times X had an opportunity to do Y. How many of those times did X actually do Y?”
Yeah, that’s totally not the point of the exercise. The point is to drop the emotional judgments that are clouding your reasoning, not to perform reasoning. You do the reasoning after your head is free of the attachment. Because while System 1 is thinking “irresponsible” and “flaky”—i.e., “this person is violating the rules of my tribe”, System 2 tends to stay busy figuring out what arguments to use to accuse him with in front of the tribe, instead of actually trying to solve the problem.
(If the answer is ten, it seems probable that this method has the wrong goal.)
No, it means you’ve directed the tool to the wrong target: you’re not supposed to apply it to the practical problem, you apply it to the most emotional, irrational thoughts you have about the problem… the ones that System 2 likes to keep swept under the rug.
It seems I wasn’t clear. My problem with the method in this case is not that it wouldn’t work. My problem is that it might work, and I would lose my ability to protect myself against those who would manipulate my behavior through lies.
No, because what you’re supposed to use it on is ideas like, “People should be responsible” or “People should do their share”, or whatever “tribal standard” you have an emotional attachment to, interfering with your reasoning.
Some people here, for example, get in a tizzy about theists or self-help gurus or some other group that is violating their personal tribal standards. For a while, I got in a tizzy about people here violating one of mine, i.e. “people should listen to me”. I used the Work on it, and then quit beating that particular dead horse.
But note that this does not mean I now think that people are listening to me more, or that I now believe I have nothing to say, or anything like that. All I did by dropping the “should” is that I no longer react emotionally to the fact that some people listen to some things more than others. That is now a “mere fact” to me, such that I can still prefer to be listened to, but not experience a negative response to the reverse.
You seem pretty much in agreement with my impression that the method is to engage in a series of biases, but you seem to actually think this is a good thing, because somehow these biases will exactly cancel out other biases the person already has. I see no reason to expect this precise balance from the method.
That’s because you’re imagining using it for something that it’s specifically not intended to be used on. It is aimed at System 1 (aka “the heart”, in Katie’s terminology) rather than System 2 (aka “the head”). It’s not for changing your intellectual appraisal of the situation, it’s for removing the emotion that says tribal standards are being violated by a member of the tribe.
That’s why the emphasis is on “shoulds”, and simple, emotional language—the real target is whatever standard you’ve imprinted as a “moral”, usually at a young age.
I expect someone who uses this method to forgive people they should not forgive (that is, the forgiveness is not in their interest), and will be easy to take advantage of. After all, they do not have XML tags that say “Should not be taken advantage of”, and they could imagine not being bothered by it.
It doesn’t interfere with your rational desire not to be taken advantage of. You will still prefer that not to happen. You just won’t have emotions clouding your judgment about doing something about it.
Have you ever noticed how often people complain and complain about someone else’s behavior, but never actually do anything about it? This is a fix for that.
(Hypothesis: in a tribal environment, individual enforcement of an important group standard is less advantageous than bringing the matter before the tribe, where you can signal your own compliance with the standard and your willingness to punish violations, without having to take on all the risks and costs of private justice. Thus, our emotions of judgment and outrage are evolved to motivate us to expose the violation, rather than taking action on our own. The Work and certain of my own techniques appear to switch off the emotional trigger associated with the standard, ironically freeing one to contemplate whatever enforcement or alternative responses one rationally sees fit.)
Well, it is good that you have some discretion about what issues you use this technique on, but the document you referenced quite clearly states “The first step in The Work is to write down your judgments about any stressful situation in your life” (emphasis added). In the question and answer section, it goes so far as to say, “No one has ever hurt anyone. No one has ever done anything terrible.” This criterion of applying it only to beliefs that are actually irrational seems to be something you added, and only communicated when pressed. Referencing this method without the caveat seems likely to teach people to be half a mind hacker, subject to the problems I described.
So, it seems, before one can use this method effectively, one must perform a rational analysis of which beliefs about which perceptions of problems are truly irrational. I usually find that once I complete this analysis, I am done; I have concluded the perception is irrational and that is enough to dismiss it. If other people need some trick to get some numbered system in their brains to accept the rational conclusion, so be it.
Well, it is good that you have some discretion about what issues you use this technique on, but the document you referenced quite clearly states “The first step in The Work is to write down your judgments about any stressful situation in your life”
It says to write down your judgments… and goes on to define the language in which such judgments are to be expressed, i.e., judgmentally. i.e., emotionally and irrationally.
IOW, the premise is that if you were actually rational about a situation, you would not be stressed.
If your face is being approached by a red-hot poker, and you are being rational, you will move, or do whatever else is necessary to stop it, but you will not be experiencing the same type of “stress” as a person who is worrying that somebody might stick them with a red-hot poker at some point in the future, and that there’s nothing they can do about it.
So yes, you can apply the technique to “any stressful situation”, because it is not rational to remain stressed, instead of either 1) taking action, or 2) deciding it’s not worth taking action. Stress arises from not doing either of those two things, and is thus prima facie evidence of irrationality.
In the question and answer section, it goes so far as to say, “No one has ever hurt anyone. No one has ever done anything terrible.”
Her point is that those ideas exist in maps, not territory, and that the assumed consequences of having “something terrible” happen are a consequence of how the information is coded in your map, not whatever actually happened in the territory. Continuing to experience suffering about an event that is already over is not rational.
This criterion of applying it only to beliefs that are actually irrational seems to be something you added,
Not at all—even that brief introductory document stresses the importance of getting statements that are from the “heart”—i.e. emotional judgments from System 1, and gives very specific instructions as to how to accomplish that. (You should know, as you quoted some of them and argued that they were bad precisely because they would elicit System 1 irrationality!)
However, as far as I can tell, you didn’t actually follow those instructions. Instead, it appears to me that you imagined following the instructions with a hypothetical situation. It is not at all the same thing, as this will engage System 2 predictive models rather than the System 1 models, and they generate different answers.
and only communicated when pressed. Referencing this method without the caveat seems likely to teach people to be half a mind hacker, subject to the problems I described.
Actually, I gave the caveat that you MUST shut your verbal mind up and pay attention to your “inner” responses, so that you would get information from System 1, not System 2. She also gives it, but does not IMO emphasize it enough. That’s why I pointed it out in advance.
Being able to silence System 2, pay attention to System 1, and distinguish System 2 “thoughts” from System 1 “responses” are the three most important skills a mind hacker can have. Without them, you aren’t doing mind-hacking, you’re confabulating.
So, it seems, before one can use this method effectively, one must perform a rational analysis of which beliefs about which perceptions of problems are truly irrational
Not at all. All you need to know is that you feel bad about something, as that is sufficient to know that you have an irrational perception. Otherwise, you’d be doing something about the problem instead of feeling bad about it.
The Work (and most other block-removal mind hacks) clears away the emotion so you can actually think. While an emotion may be useful for signaling that a situation is important to you, most of our evolved emotions are not tuned to optimize rational thought; they’re there for signaling, learning, and preparing for simple actions (like fight/flight).
So even though Eliezer’s “Way” says that you should feel emotions when it’s rational to do so, mind hackers have a somewhat different view about which emotions it’s rational to have. Negative emotions are mostly not useful in our modern environment. They serve a useful purpose in preventing courses of action that might lead to them, but once something bad has already happened, they cease to be useful.
I usually find that once I complete this analysis, I am done; I have concluded the perception is irrational and that is enough to dismiss it. If other people need some trick to get some numbered system in their brains to accept the rational conclusion, so be it.
Actually, the Work gets rid of the need to perform such an analysis; it simply drops the irrational stuff, making a rational solution easier to see. In fact, a rational and/or creative solution will often pop spontaneously to mind immediately following.
And since it does not rely on any advanced reasoning skills, or the ability to apply them under stress conditions, I suspect that the Work alone could do far more for raising the “sanity waterline” of humanity than extreme rationality skills ever will.
A person who has the Work doesn’t need a religion to comfort them, although it’s unlikely to cause anyone to consciously abandon their religion, vs. simply drifting away from it.
(Of course, some people who identify as “rationalist” will probably have a problem with that, since their tribal standard insists that people must not merely do rational things because they lack irrational motivations, but must do them because Reason said so, in spite of their irrational motivations. Which, of course, is an irrational “should” of precisely the type that the Work removes, and that we’d all be better off without… rationalists and non-rationalists alike.)
It says to write down your judgments… and goes on to define the language in which such judgments are to be expressed, i.e., judgmentally. i.e., emotionally and irrationally.
The problem is that people can perceive many reasons why a situation is stressful; some of those might be rational (or rationally supportable), and some might be irrational. A method of deceptively filtering out the good reasons, and addressing the bad reasons in a way that feels like addressing all the reasons (because it is not acknowledged that the good reasons were filtered out), goes too far.
Actually, I gave the caveat that you MUST shut your verbal mind up and pay attention to your “inner” responses, so that you would get information from System 1, not System 2. She also gives it, but does not IMO emphasize it enough. That’s why I pointed it out in advance.
Your caveat was about a System 1/System 2 distinction. If this is the same as the rational/irrational distinction I am concerned about, we have an issue with inferential distances. And if you think that following your advice will cause people to avoid applying the method to rational perceptions of problems despite not even being aware of the issue, well, it is that sort of thinking that makes people wary of just trying your advice. I know I don’t want to prove the method is dangerous that way.
(You should know, as you quoted some of them and argued that they were bad precisely because they would elicit System 1 irrationality!)
No, I argued they were bad because they would elicit irrationality. I didn’t say anything about System 1. I would call this an instance of the Double Illusion of Transparency, except I never even claimed to understand System 1 and System 2. (And who gives numbers instead of names to the two most important entities in their model?)
However, as far as I can tell, you didn’t actually follow those instructions. Instead, it appears to me that you imagined following the instructions with a hypothetical situation.
Of course I did not actually follow the instructions. I don’t run untrusted programs on my computer, and I am definitely not going to run an untrusted mind hack on my brain. I analyze such mind hacks, looking for what problems they can solve and what problems they can introduce, so I can weigh the risks against the benefits. And this hack has the property that, once I have identified a problem it can safely solve, I have already solved the problem.
The problem is that people can perceive many reasons why a situation is stressful, some of those might be rational (or rationally supportable), and some might be irrational.
First: there’s no such thing as a rationally supported reason for continuing to experience stress, once you’re aware of it, any more than there’s a reason for an alarm bell to keep ringing once everybody knows there’s a fire.
Second, the Work (and other System 1 mindhacks) does not cause you to forget that there is a fire or that it would be a good idea to put it out! It simply shuts off the alarm bell so you can concentrate.
A method of deceptively filtering out the good reasons, and addressing the bad reason in a way that feels like addressing all the reasons (because it is not acknowledged that the good reasons are filtered) goes too far. … And if you think that following your advice will cause people to avoid applying the method to rational perceptions of problems despite not even being aware of the issue, well, it is that sort of thinking that makes people wary of just trying your advice. I know I don’t want to prove the method is dangerous that way.
These statements are entirely a confusion on your part because you are running all of your analysis from S2, imagining what would happen if you applied this idea in S2.
But S2 is so bad (by default) at predicting how minds actually work, that not only is it wrong about what would happen in S1, it’s also wrong about what would happen if you ran it in S2, as you were anticipating.
Because what would actually happen, if you applied this to a “live” issue in S2, is that S2 (which is still being motivated by the alarm bell going off in S1) would find reasons to reject the new input.
That is, as you considered alternatives, you’d be doing precisely what your S2 was doing as you made your analysis: finding reasons why the alarm is valid and should therefore be kept ringing!
In other words, the actual failure mode of running the technique in S2 is to not change anything, and end up concluding that the technique “didn’t work”, when in fact it was never applied.
That’s because this is a major evolved function of S2: to argue for whatever S1 tells it to argue for.
That’s why a failed mind hack doesn’t result in some sort of bizarre arational belief change at S2 as you seem to think. Instead, the technique simply fails to do anything, and the alarm bell just keeps ringing—which keeps S2 stuck in the groove established for it by S1.
(And who gives numbers instead of names to the two most important entities in their model?)
Stanovich and West, in their paper on native and learned modes of reasoning. System 1 refers to naive, intuitive, emotional, concrete, “near” operations, and System 2 the abstract, learned, logical, “far” operations. They apparently chose to number instead of name them, because they were summarizing the research of almost a dozen other papers by other authors that each used different names for roughly the same systems.
IOW, it wasn’t me. I’ve used names in the past like “you” (S2)/“yourself” (S1), savant (S1)/speculator (S2), and horse (S1)/monkey (S2). Haidt, in The Happiness Hypothesis, calls them the rider (S2) and the elephant (S1).
All told, S1/S2 actually seems to be a bit simpler! (Also, I think the Inner Game of Tennis refers to Self 1 and Self 2, and I think they’re numbered the same way, though it’s been a long time.)
Of course I did not actually follow the instructions. I don’t run untrusted programs on my computer, and I am definitely not going to run an untrusted mind hack on my brain. I analyze such mind hacks, looking for what problems they can solve and what problems they can introduce, so I can weigh the risks against the benefits.
While a nice idea in theory, it fails in practice because the naive theory of mind encoded in S2 doesn’t look anything like the way S1 and S2 work in practice.
S2 in particular seems to be deliberately and perversely reluctant to notice how it’s S1’s puppet spin doctor, rather than its own free agent. (Because it’s sort of a free agent… so long as S1 doesn’t override.)
Thus, its predictions about itself (as well as the entire person within which it is contained) fail in an epic and ongoing way, that it is unable to directly learn from. (Because after S1 takes over and makes a mess, S2 makes excuses and explanations for it, as is its evolved job.)
This is the heart and soul of akrasia: the failure of S2 to comprehend S1 and its relationship thereto. S2 was never intended to comprehend S1, as that would deflate its plausible deniability and disinformation-sowing ability about your real motives and likely future behaviors.
this hack has the property that, once I have identified a problem it can safely solve, I have already solved the problem.
If that’s so, then you should be able to save considerable time by asking what irrational belief or judgment you’re holding, and working directly on dropping that, rather than trying to reason about the actual problem while the alarm is still going off.
Note, by the way, that the Work doesn’t do anything that you can’t or don’t do normally when you change your mind about something and stop worrying about it. It’s simply a more-minimal, straight-path procedure for doing so. That is, there is no claim of magic here—it’s just an attempt to formalize the process of digging out and eliminating one particular form of irrationally-motivated reasoning.
As such, it or something like it ought to be in every rationalist’s toolkit. In comparison to straight-up S2 reasoning (which is easily led to believe that things have improved when they have not), it is really easy to tell, when working with S1, whether you have addressed an issue or not, because your physical responses change, in an entirely unambiguous fashion.
Previously in this thread: PJ Eby asserts that the inability to refrain from conveying contempt is a common and severe interpersonal handicap. Nazgulnarsil replies, “This is my problem. . . . I can’t hide the fact that I feel contempt for the vast majority of the people around me (including desirable partners).”
I probably have the problem too. Although it is rare that I am aware of feeling contempt for my interlocutor, there is a lot of circumstantial evidence that messages (mostly nonverbal) conveying contempt are present in my face-to-face communication with non-friends (even if I would like the non-friend to become a friend).
I expect that PJ Eby will assure me that he has seen himself and his clients learn how to transcend this problem. Maybe he can even produce written testimonials from clients assuring me that PJ Eby has cured them of this problem. But I fear that PJ Eby has nothing that a strong Bayesian with long experience with self-help practitioners would consider sufficient evidence that he can help me transcend this problem. Such is the state of the art in self help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work to collect evidence that would sway a non-gullible client.
But I fear that PJ Eby has nothing that a strong Bayesian would consider evidence that he can help me transcend this problem. Such is the state of the art in self help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work to collect evidence that would sway a non-gullible client.
I notice you ignored the part where I just gave somebody a pointer to somebody else’s work that they could download for free to help with that, and then you indirectly accused me of being more interested in financial incentives than results… while calling nazgulnarsil gullible, too!
If that’s an example of your ordinary social demeanor, then it’s not in the least bit surprising that people think you hold them in contempt, as it’s not even necessary to observe any of your “nonverbal” communication to obtain this impression.
Previously in this thread I opined as follows on the state of the art in self help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work to collect evidence that would sway a non-gullible client.
PJ Eby took exception as follows:
you ignored the part where I just gave somebody a pointer to somebody else’s work that they could download for free
Lots of people offer pointers to somebody else’s writings. Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader’s while. IMHO almost all the writings on the net about producing lasting useful psychological change are not worth the reader’s while.
In the future, I will write “lasting change” when I mean “lasting useful psychological change”.
you indirectly accused me of being more interested in financial incentives than results
The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being. Let us call the probability I just referred to “probability D”. (The D stands for deception.)
You have written (in a response to Eliezer) that you usually charge clients a couple of hundred dollars an hour.
The financial success of your self-help practice is not significant evidence that you can produce lasting change in clients because again there is a plentiful supply of gullible self-help clients with money.
The fact that you use hypnotic techniques on clients and write a lot about hypnosis raises probability D significantly, because hypnotic techniques rely on the natural human machinery for negotiating who is dominant and who is submissive, or the natural human machinery for deciding who will be the leader of the hunting party. Putting the client into a submissive or compliant state of mind probably helps a practitioner quite a bit to persuade the client to believe falsely that lasting change has been produced. You have presented no evidence or argument (nor am I aware of any) that putting the client into a submissive or compliant state helps a practitioner produce lasting change. Consequently, your reliance on and interest in hypnotic techniques significantly raises probability D.
Parenthetically, I do not claim that I know for sure that you are producing false beliefs rather than producing lasting change. It is just that you have not raised the probability I assign to your being able to produce lasting change high enough to justify my choosing to chase a pointer you gave into the literature or high enough for me to stop wishing that you would stop writing about how to produce lasting change in another human being on this site.
Parenthetically, I do not claim that your deception, if indeed that is what it is, is conscious or intentional. Most self-help and mental-health practitioners deceive because they are self-deceived on the same point.
You believe and are fond of repeating that a major reason for the failure of some of the techniques you use is a refusal by the client to believe that the technique can work. Exhorting the client to refrain from scepticism or pessimism is like hypnosis in that it strongly tends to put the client in a submissive or compliant state of mind, which again significantly raises probability D.
To the best of my knowledge (maybe you can correct me here) you have never described on this site an instance where you used a reliable means to verify that you had produced a lasting change. When you believe for example that you have produced a lasting improvement in a male client’s ability to pick up women in bars, have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively-valid sign of success (such as getting the woman’s phone number or getting the woman to follow the client out to his car)?
In your extensive writings on this site, I can recall no instance where you describe your verifying your impression that you have created a lasting change in a client using reliable means. Rather, you have described only unreliable means, namely, your perceptions of the mental and the social environment and reports from clients about their perceptions of the mental and the social environment. That drastically raises probability D. Of course, you can bring probability D right back down again, and more, by describing instances where you have used reliable means to verify your impression that you have created a lasting change.
For readers who want to read more, here are two of Eliezer’s sceptical responses to PJ Eby: 001, 002
If it makes you feel any better, I am not judging you any more harshly than I judge any other self-help, life-coach or mental-health practitioner, including those with PhDs in psychology and MDs in psychiatry and those with prestigious academic appointments. In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded except those that constantly remind themselves of how little they know.
Actually there is one way in which I resent you more than I resent other self-help, life-coach or mental-health practitioners: the other ones do not bring their false beliefs or rather their most-probably-false not-sufficiently-verified beliefs to my favorite place to read about the mental environment and the social environment. I worry that your copious writings on this site will discourage contributions from those who have constructed their causal model of mental and social reality more carefully.
Lots of people offer pointers to somebody else’s writings. Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader’s while. IMHO almost all the writings on the net about producing lasting useful psychological change are not worth the reader’s while.
You missed the point—I was pointing out there is no financial incentive for me to send somebody to download somebody else’s free stuff, when I sell workshops on the same topic.
The fact that you use hypnotic techniques on clients and write a lot about hypnosis raises probability D significantly, because hypnotic techniques rely on the natural human machinery for negotiating who is dominant and who is submissive, or the natural human machinery for deciding who will be the leader of the hunting party. Putting the client into a submissive or compliant state of mind probably helps a practitioner quite a bit to persuade the client to believe falsely that lasting change has been produced. You have presented no evidence or argument (nor am I aware of any) that putting the client into a submissive or compliant state helps a practitioner produce lasting change. Consequently, your reliance on and interest in hypnotic techniques significantly raises probability D.
Holy cow, you’re confused. To actually refute the huge chain of fallacies you’ve just perpetrated seems like it would take me all day. Nonetheless, I shall try to be brief:
I do not use formal hypnosis. I have recently been interested in the similarities between certain effects of hypnosis and my techniques.
I am not aware of any connection between hypnosis, dominance, and hunting parties, and would be very, very surprised if any arose, unless perhaps we’re talking about stage hypnotism. The tools I work with are strictly ones of monoidealism and ideodynamics… which are at work whenever you start thinking you’re hungry until it becomes enough of an obsession for you to walk to the fridge. That is what monoidealism and ideodynamics are: the absorption of the imagination upon a single thought until it induces emotional, sensory, or physical response.
I do not consider my work to be done until someone is surprised by their behavior or their automatic responses, specifically in order to avoid “false placebo” effects. Sometimes, a person will say they think they changed or that something changed a little bit, and my response to that is always to question it, to find out specifically what is happening. A true success nearly always involves something that the person did not expect—indicating that their S1 behavior model has changed, relative to their S2 self-modeling.
A state of submission is not useful to my work; I spend a considerable effort getting clients out of such states, because then they will spend ridiculous amounts of time deprecating themselves, instead of actually answering the questions I ask.
Whew. I think that’ll do for now.
When you believe for example that you have produced a lasting improvement in a male client’s ability to pick up women in bars, have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively-valid sign of success (such as getting the woman’s phone number or getting the woman to follow the client out to his car)?
I do not believe I have produced such an improvement. I have had only one client who asked for anything like this, and it was for alleviation of specific fears in the matter… and the result was what I’d consider a partial success. That is, the alleviation of some of the fears, and not others. The client did not pursue the matter further with me, but has a girlfriend now. I don’t know whether he met her in a bar or not, but then, the situation we discussed was talking to a girl on the subway. ;-)
If someone wants to learn to do pickup, they should go to a pickup coach. I don’t teach pickup, and I’m not a coach.
In your extensive writings on this site, I can recall no instance where you describe your verifying your impression that you have created a lasting change in a client using reliable means. Rather, you have described only unreliable means, namely, your perceptions of the mental and the social environment and reports from clients about their perceptions of the mental and the social environment. That drastically raises probability D. Of course, you can bring probability D right back down again, and more, by describing instances where you have used reliable means to verify your impression that you have created a lasting change.
Since it is my clients’ perceptions that determine their behavior (not to mention their satisfaction), what else is it that I should measure, besides their perceptions? What measurement of the goodness of their lives shall I use? Is there such a thing as a scale for objectively determining how good someone’s life is?
I seem to remember someone who said something along the lines of “we pretend to treat people, and if we pretend really well, they will pretend to get better… for the rest of their lives.” The point is not about pretending, the point is that virtually all of the measuring tools we have for subjective experience are themselves subjective. (Somatic markers are at least empirical, though still not entirely objective.)
In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded except those that constantly remind themselves of how little they know.
I am most curious as to what this evidence would look like. How would you measure it? I would truly love to know about such an absolute measure, if it existed, because even if my methods scored low on it, it would offer me untold opportunity to improve—provided, of course, it gave relatively fast feedback.
(I use somatic markers for measurement because they give extremely fast feedback, and sometimes fast feedback with modest accuracy can be much more useful than a precise measurement that takes weeks or months.)
I worry that your copious writings on this site will discourage contributions from those who have constructed their causal model of mental and social reality more carefully.
Replying to me with this type of thing is not a good way to discourage me from writing here.
Your models are not nearly as carefully constructed as mine, or you wouldn’t be confusing hypnosis with social dominance.
When I wrote that “it is never in the financial self-interest of any [self-help] practitioner to do the hard long work to collect evidence that would sway a non-gullible client,” I referred to long hard work many orders of magnitude longer and harder than posting a link to a web page. Consequently, your pointing out that you post links to web pages even when it is not in your financial self-interest to do so does not refute my point. I do not maintain that you should do the long hard work to collect evidence that would sway a non-gullible client: you probably cannot afford to spend the necessary time, attention and money. But I do wish you would stop submitting to this site weak evidence that would sway only a gullible client or a client very desperate for help.
And with that I have exceeded the time I have budgeted for participation on this site for the day, so my response to your other points will have to wait for another day. If I may make a practical suggestion to those readers wanting to follow this thread: subscribe to the feed for my user page till you see my response to pjeby’s other points, then unsubscribe.
Such is the state of the art in self help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the hard long work to collect evidence that would sway a non-gullible client.
In context, the strong implication was that nazgulnarsil was a “gullible prospective client”, and that I was preying on him for my financial self-interest. Whether you intended those implications or not, they are nonetheless viable interpretations of your words.
I feel I should also point out that you have not yet denied intending those implications. Instead, you are treating my comments as if they are an argument about the simple text of your comment—which they are not. My objection is to the subtextual insults, not about what studies I should or should not conduct.
As I said, if you talk like this all the time, it’s really no wonder you get feedback that you’re treating people with contempt. If you want to signal less contempt, you might find it useful to pay attention when they point out to you that your implications are insulting, and take pains to separate those implications from your actual position. Otherwise, you imply contempt whether the original implication was intentional or not!
you ignored the part where I just gave somebody a pointer to somebody else’s work that they could download for free
Lots of people offer pointers to somebody else’s writings. Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader’s while. IMHO almost all the writings on the net about producing lasting useful psychological change are not worth the reader’s while.
If you’re going to insult someone, just do it. Don’t write insults directed at “lots of people”, when it’s obvious who you’re talking about. Perhaps if you made your attacks more concrete, you would realize that you have an obligation to check your facts first.
Regardless of its (in)applicability to pjeby, whose participation on this site I generally approve of, this beautiful rant reinforced and gave reasons for my own similar feelings toward self-help salesmen in general.
Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader’s while.
The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being.
Probably true. But if you use those statistical facts about most people as an excuse to never listen to anyone, or even to one specific person, you’re setting yourself up for failure. How will you ever revise your probability estimate of one person’s knowledge or the general state of knowledge in a field, if you never allow yourself to encounter any evidence?
The financial success of your self-help practice is not significant evidence that you can produce lasting change in clients because again there is a plentiful supply of gullible self-help clients with money.
have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively-valid sign of success (such as getting the woman’s phone number or getting the woman to follow the client out to his car)?
Is that your true rejection? If P.J. Eby said “why, yes I have,” would you change your views based on one anecdote? Since a randomized, double-blind trial is impossible (or at least financially impractical and incompatible with the self-help coach’s business model), what do you consider a reasonable standard of evidence?
I worry that your copious writings on this site will discourage contributions from those who have constructed their causal model of mental and social reality more carefully.
In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded except those that constantly remind themselves of how little they know.
Given the vigorous dissent from you and others, I don’t think “discouraging contributions” is a likely problem! However, I personally would like to see discussion of specific claims of fact and (as much as possible) empirical evidence. A simple assertion of a probability estimate doesn’t help me understand your points of disagreement.
Since a randomized, double-blind trial is impossible (or at least financially impractical and incompatible with the self-help coach’s business model), what do you consider a reasonable standard of evidence?
A reasonable standard of evidence is established by what it takes to change your mind (ideally you’d work from an elicited prior, which allows one to check how reasonable your requirements are). If it’s a double-blind trial that is required to change your mind, too bad it’s unavailable.
(If you feel contempt for someone in a real-time social situation, trust me, other people are noticing, and judging you accordingly. The only real fix is to make it so you don’t have the contempt in the first place.)
Really? My experience is the opposite (at least when the person is trying to hide it).
My own improvements in nonreactivity and interest in others have been largely due to mindhacking; e.g. removing triggers that caused me to fear rejection of particular types, removing negative “ideals” that triggered judgment of self or others, deleting conditioned appetites driving approval-seeking and status-seeking behaviors, etc. (A blog post I wrote on dirtsimple.org in May mentions in passing a bit about how removing a negative ideal/compulsion to be “good” created a mini-renaissance in my marriage, for example.)
I’ve posted here a lot about ideal-belief-reality conflicts (Robert Fritz’s term) which are a primary driver of hypocritical behaviors—e.g., being obsessed with the future of “humanity” while not being able to be nice to individual humans you disagree with. This is precisely the sort of signaling that will label you as a “jerk” to others. The fewer IBRCs you have, the less often you’ll make judgments of others that trigger automatic contempt signals from your body.
(If you feel contempt for someone in a real-time social situation, trust me, other people are noticing, and judging you accordingly. The only real fix is to make it so you don’t have the contempt in the first place.)
This is my problem. I look great on paper, but as soon as you get me in a social situation I can’t hide the fact that I feel contempt for the vast majority of the people around me (including desirable partners). I think the problem is that I connect behaviors I see in those around me to larger-scale social problems, even though this is stupid overgeneralizing. Is there anything you could point to? Your awareness of it makes me think you’ve dealt with it yourself.
Two thoughts:
Learn humility. When you think about how superior you are to others, challenge that idea: think of ways in which you are not superior. Perhaps more important, remind yourself that your superiority is partially determined by luck. Practice, practice, practice.
Learn confidence. This may or may not be true in your case, but people often feel contempt towards people that they worry may judge them harshly. If you are confident enough not to be threatened by their judgment, then you can act more wisely and learn to manipulate the interactions.
#2 is very interesting and something that hadn’t occurred to me before. It is a reciprocal relationship: I judge others too harshly, and in turn (since I generalize others from the example of myself) worry that I will be judged too harshly.
thank you very much!
edit: font is showing up weird for me even though I did no formatting...
In the “recent comments” segment of the sidebar, a pound sign appears which doesn’t show up in the full comment. Could that be it?
#Testing. This paragraph begins with a pound sign.
#Testing. This paragraph begins with a backslash followed by a pound sign.
So apparently you did do formatting, accidentally. Escape the pound sign with a backslash to rid yourself of the giant bold text.
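To illustrate (this is standard Markdown behavior; LessWrong’s renderer may differ slightly in edge cases):

```markdown
#2 is very interesting

\#2 is very interesting
```

The first source line is treated as a heading and renders as giant bold text; the second renders as an ordinary paragraph beginning with “#2”, because the backslash is consumed by the renderer and only the literal pound sign appears in the output.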
Cool! That’s not documented in the help.
Markdown’s official syntax documentation. (LessWrong’s Markdown implementation may not perfectly match this.)
One of the easiest methods to get started with is The Work of Byron Katie. The critical key to making it work, though, that is not emphasized anywhere near enough in her books (or anyone else’s, for that matter), is that you need a mental state of genuine curiosity and “wondering” when you ask the questions. It simply will not work if you use the questions as a way of arguing with yourself or beating yourself up, or if you simply recite them in a rote fashion, like magic words.
The magic isn’t in the words, it’s in the “search queries” you’re running on your mind. As with all mind hacking, the purpose is to create new connections between existing memories and concepts in order to update your “map”. So, it will also not work if you try to consciously reason out the answers; you want to keep silent verbally, so you can notice when System 1 answers, without being verbally overshadowed by System 2.
I don’t use the Work that much on IBRCs and judgment issues, myself. (I have techniques that generalize better to entire groups or classes of people, rather than just focusing on individual people.) But The Work is good practice for the basic skill underlying all mind hacking: asking System 1 a question, then shutting up system 2 and waiting for an answer. And it’ll definitely give you a taste of what it feels like to drop a “should” or judgment about a person, and how it changes your felt-responses to them.
The reference you recommend seems to advocate changing one’s attitude by engaging in a sequence of biases.
First, one is supposed to construct a strawman of their reasons for not liking someone or something:
Rather than seeking out one’s true objection, one should express their dislike in terms of their pettiest reasons, and identify with that expression. And one should “Simply pick a person or situation and write, using short, simple sentences”, discouraging deep explanation, which in turn discourages deep understanding. An important filter is bypassed, allowing the bad reasons to mix with the good. The question “What is it that they should or shouldn’t do, be, think, or feel?” in the context of asking one’s opinions is a setup to appear to commit the Mind Projection Fallacy. Priming someone to say “X should” when they mean “I want X to” so you can later say “In reality, there is no such thing as a ‘should’ or a ‘shouldn’t.’” is a sneaky debating trick.
And then, the strawman is subjected to unreasonable standards:
Of course one cannot absolutely know that it’s true; one should not assign probability 1 to anything. Does one have a large accumulation of evidence that causes one to have high confidence that it’s true? That seems like a more reasonable question, which one of course should apply to one’s true objection.
Then the question is asked:
which would be fine if it were setting up to ask, “Is that reaction constructive? Are there more constructive ways you could react?”. But instead, the follow up is:
Hm, who would I be if it didn’t bother me to have my face burned? Probably the sort of person who doesn’t avoid being touched in the face by hot pokers.
And finally, there is the “Turn it around” concept. Now, holding oneself to the same standards one expects of others is good, but a big problem comes from asking one to “find three genuine, specific examples of how the turnaround is true in your life”. This is advocating the Confirmation Bias. One is encouraged to find supporting evidence for the turnaround, but not contradicting evidence. If I have a problem with someone for being a chronic liar, it does not make sense for me to think it is OK because I can recall three times she told the truth, or three times I told a lie. What does make sense is to notice her unusually high proportion of lies to honest statements, to not believe what she tells me without corroboration, and maybe even to associate instead with others who reliably give me truthful information.
If this is the sort of mind hack you advocate, it is no wonder that people express skepticism instead of trying it. After all, our sister site is not called “Embracing Bias”.
It’s engaging in System 1 thinking, which of course has a different set of biases than System 2 thinking. The object is to activate the relevant System 1 biases, and then update the information stored there.
Absolutely. How else would you expect to reconsolidate the memory trace, without first activating it?
You mean your System 2 explanation whose function is to make your System 1 bias appear more righteous or socially acceptable. That “true objection”?
Precisely. We don’t want System 2 to verbally overshadow the irrational basis for your reactions, by filtering them out and replacing them with good-sounding explanations.
Actually, it’s an attempt to identify what conditioned standard or ideal you believe the person is violating, creating your irrational reaction.
Of course it’s a debating trick. If fair, logical reasoning worked on System 1, there’d be no need for mindhacking, would there?
And you are discussing this with System 2 reasoning—i.e., abstract reasoning. When you ask yourself this question about a specific thing, e.g., “can I absolutely know it’s true that Tom should listen to me?”, it is a request to query System 1 for your implicit epistemology on that particular topic. That is, how would you know if it were true? What if it weren’t? How would you know that? In the process, this retrieves relevant memories, making them available for reconsolidation.
You are confusing a concrete system 1 practice with abstract system 2 reasoning. Again, if the two were the same, we would have no need for mindhacking, and the Dark Arts could not exist.
(That being said, I’ve honestly never found this particular question that useful, compared to questions 1, 3, and 4.)
Indeed. However, if you were to translate that to a System 1 question, it’d be more like, “How do I know that it’s true?”. That is, something closer to a simple query for sensory data, than a question calling for abstract judgment. (I’ve actually used this question.)
One’s “true objection” is of course in most cases an irrational, childish thing. If not, one would likely not be experiencing a problem or feelings that cause one to want to engage in this process in the first place.
Again, we need to distinguish System 1 and 2 thinking. “Is that reaction constructive?” and “Are there more constructive ways you could react?” are abstract questions that lead to a literal answer of “yes”… not to memory reconsolidation.
“Who would you be without that thought?” is a presuppositional query that invites you to imagine (on a sensory, System 1 level) what you would be like if you didn’t believe what you believe. This is a sneaky trick to induce memory reconsolidation, linking an imagined, more positive reaction to the point in your memory where the existing decision path was.
This question, in other words, is a really good mind hack.
Mind hacking questions are not asked to get answers, they are questions with side-effects.
You are equating physical and emotional pain; the Work is a process for getting rid of emotional pain created by moral judgments stored in System 1, not logical judgments arrived at by System 2.
On the contrary, it is countering confirmation bias. Whatever belief you are modifying has been keeping you from noticing those counterexamples previously. Notice, btw, that Katie advises not doing the turnarounds until after the existing belief has been updated: this is because when you firmly believe something, you react negatively to the suggestion of looking for counterexamples, and tend to assume you’ve done a good job of looking for them, even though you haven’t.
So instead, the first two questions are directed at surfacing your real (sensory, System 1) evidence for the belief, so that you can then update with various specific classes of counterexample. Questions 3 and 4, for example, associate pain to the belief, and pleasure to the condition of being without it, providing a counterexample in that dimension. The turnaround searches provide hypocrisy-puncturing evidence that you are not really acting to the same standards you hold others to, and that your expectations are unrealistic, thus providing another kind of counterexample.
You will not arrive at useful information about the process by discussing it in the abstract. Pick a specific situation and belief, and actually try it.
Sure. And if you can do that without an emotional reaction that clouds your judgment or makes you send off unwanted signals, great! The Work is a process for getting rid of System 1 reactions, not a way of replacing System 2 reasoning.
Mind hacking is working on System 1 to obtain behavioral change, not engaging in System 2 reasoning to result in “truth”.
That’s because, when your System 2 tries to reason about your behavior, it usually verbally overshadows System 1, and ends up confabulating… which is why pure System 2 reasoning is absolutely atrocious at changing problematic behaviors and emotions.
(Edit to add: Btw, I don’t consider The Work to be a particularly good form of mindhacking. IMO, it doesn’t emphasize testing enough, doesn’t address S1/S2 well, and has a rather idiosyncratic set of questions. I personally use a much wider range of questions and sequences of questions to accomplish different things, and last, but far from least, I don’t unquestioningly endorse all of Katie’s philosophy. Nonetheless, I recommend the Work to people because, performed properly, it works on certain classes of things, and can be a gentle introduction to the subject. Another good book is “Re-Create Your Life” by Morty Lefkoe, which provides a different evidence-based reconsolidation process, but The Work has the advantage of having a free online introduction.)
IAWYC, but you didn’t need to quote and refute every sentence to get the point across about System 1 and System 2 and our real vs. signaled reasons for affective reactions. It’s a question of style, not content, but I think you’d communicate your ideas much more effectively to me and to others here at LW if you focused on being concise.
No, I mean if, for example, it bothers you that your roommate never comes through with his share of the rent, you would not want to focus on how you get annoyed by his stupid shrug (which you probably only find annoying and stupid because you associate it with him and his flakiness).
You seem to have gone from missing my point to agreeing with it. If you understood why I said it is a debating trick, why would you argue that it is something else instead?
If it were countering the confirmation bias, it would ask something like, “Consider the last ten times X had an opportunity to do Y. How many of those times did X actually do Y?” (If the answer is ten, it seems probable that this method has the wrong goal.) And even with that, you have to be careful not to overdo it. You would not want to excuse someone punching you in the face when you meet, just because he only actually does it one time in twenty. But asking for three examples ever of not doing the wrong thing, and expecting people to change their minds based on that, is encouraging irrationality. That just is not enough evidence.
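The evidence-strength point here can be made concrete with a toy Bayesian sketch (all numbers assumed, purely illustrative): a few observed truthful statements are weak evidence against the “chronic liar” hypothesis, because even chronic liars tell the truth most of the time.

```python
# Toy Bayesian sketch with assumed, illustrative numbers: how much do
# three observed truthful statements shift our belief that someone is
# a chronic liar?

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by the likelihood ratio of each observation."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Assumption: even a "chronic liar" tells the truth 70% of the time,
# while an honest person tells the truth 95% of the time.
p_truth_given_liar = 0.70
p_truth_given_honest = 0.95

# Likelihood ratio of one truthful statement, for the "liar" hypothesis:
lr_truth = p_truth_given_liar / p_truth_given_honest  # about 0.74

prior_odds_liar = 1.0  # 50/50 prior, for illustration
post = posterior_odds(prior_odds_liar, [lr_truth] * 3)
p_liar = post / (1 + post)
print(f"P(liar) after three truthful statements: {p_liar:.2f}")  # 0.29
```

Under these assumed numbers, three truthful statements move a 50% prior down only to about 29%; hardly exoneration, which is the commenter’s point that “three examples ever” is not enough evidence.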
It seems I wasn’t clear. My problem with the method in this case is not that it wouldn’t work. My problem is that it might work, and I would lose my ability to protect myself against those who would manipulate my behavior through lies.
You seem pretty much in agreement with my impression that the method is to engage in a series of biases, but you seem to actually think this is a good thing, because somehow these biases will exactly cancel out other biases the person already has. I see no reason to expect this precise balance from the method. I expect someone who uses this method to forgive people they should not forgive (that is, the forgiveness is not in their interest), and will be easy to take advantage of. After all, they do not have XML tags that say “Should not be taken advantage of”, and they could imagine not being bothered by it.
Contrast this with dedicated rationality: confronting your biases head on, acknowledging the real extent of things that bother you, neither exaggerating nor downplaying. You would actually recognize the difference between an abusive relationship and not always getting your way.
But the purpose, if you were doing The Work, would be to focus on that stupid shrug and his “flakiness”, precisely so that you can drop them from consideration. Presumably (I assume this is a hypothetical situation), you would be having a judgment like “he shouldn’t be flaky” or “he should be responsible”, or some other character judgment based on his behavior. The point of the Work is not to do rational computations; it’s to drop emotional attachments from the system whose job it is to make you reinforce your tribe’s value system through emotional displays.
Once you’ve dropped whatever “irrational” stuff you have going on, the reasoning about what to practically do gets a LOT easier. As often as not, the first thing that happens upon thinking about the problem afterward is that an obviously sensible solution pops into your head, that you feel you can actually execute—like, say, calmly starting the search for a new roommate.
Yeah, that’s totally not the point of the exercise. The point is to drop the emotional judgments that are clouding your reasoning, not to perform reasoning. You do the reasoning after your head is free of the attachment. Because while System 1 is thinking “irresponsible” and “flaky”—i.e., “this person is violating the rules of my tribe”—System 2 tends to stay busy figuring out what arguments to accuse him with in front of the tribe, instead of actually trying to solve the problem.
No, it means you’ve directed the tool to the wrong target: you’re not supposed to apply it to the practical problem, you apply it to the most emotional, irrational thoughts you have about the problem… the ones that System 2 likes to keep swept under the rug.
No, because what you’re supposed to use it on is ideas like, “People should be responsible” or “People should do their share”, or whatever “tribal standard” you have an emotional attachment to, interfering with your reasoning.
Some people here, for example, get in a tizzy about theists or self-help gurus or some other group that is violating their personal tribal standards. For a while, I got in a tizzy about people here violating one of mine, i.e. “people should listen to me”. I used the Work on it, and then quit beating that particular dead horse.
But note that this does not mean I now think that people are listening to me more, or that I now believe I have nothing to say, or anything like that. All I did by dropping the “should” is that I no longer react emotionally to the fact that some people listen to some things more than others. That is now a “mere fact” to me, such that I can still prefer to be listened to, but not experience a negative response to the reverse.
That’s because you’re imagining using it for something that it’s specifically not intended to be used on. It is aimed at System 1 (aka “the heart”, in Katie’s terminology) rather than System 2 (aka “the head”). It’s not for changing your intellectual appraisal of the situation, it’s for removing the emotion that says tribal standards are being violated by a member of the tribe.
That’s why the emphasis is on “shoulds”, and simple, emotional language—the real target is whatever standard you’ve imprinted as a “moral”, usually at a young age.
It doesn’t interfere with your rational desire not to be taken advantage of. You will still prefer that not to happen. You just won’t have emotions clouding your judgment about doing something about it.
Have you ever noticed how often people complain and complain about someone else’s behavior, but never actually do anything about it? This is a fix for that.
(Hypothesis: in a tribal environment, individual enforcement of an important group standard is less advantageous than bringing the matter before the tribe, where you can signal your own compliance with the standard and your willingness to punish violations, without having to take on all the risks and costs of private justice. Thus, our emotions of judgment and outrage are evolved to motivate us to expose the violation, rather than taking action on our own. The Work and certain of my own techniques appear to switch off the emotional trigger associated with the standard, ironically freeing one to contemplate whatever enforcement or alternative responses one rationally sees fit.)
Well, it is good that you have some discretion about what issues you use this technique on, but the document you referenced quite clearly states “The first step in The Work is to write down your judgments about any stressful situation in your life” (emphasis added). In the question and answer section, it goes so far as to say, “No one has ever hurt anyone. No one has ever done anything terrible.” This criterion of applying it only to beliefs that are actually irrational seems to be something you added, and only communicated when pressed. Referencing this method without the caveat seems likely to teach people to be half a mind hacker, subject to the problems I described.
So, it seems, before one can use this method effectively, one must perform a rational analysis of which beliefs about which perceptions of problems are truly irrational. I usually find that once I complete this analysis, I am done; I have concluded the perception is irrational and that is enough to dismiss it. If other people need some trick to get some numbered system in their brains to accept the rational conclusion, so be it.
It says to write down your judgments… and goes on to define the language in which such judgments are to be expressed: judgmentally, i.e., emotionally and irrationally.
IOW, the premise is that if you were actually rational about a situation, you would not be stressed.
If your face is being approached by a red-hot poker, and you are being rational, you will move, or do whatever else is necessary to stop it, but you will not be experiencing the same type of “stress” as a person who is worrying that somebody might stick them with a red-hot poker at some point in the future, and that there’s nothing they can do about it.
So yes, you can apply the technique to “any stressful situation”, because it is not rational to remain stressed, instead of either 1) taking action, or 2) deciding it’s not worth taking action. Stress arises from not doing either of those two things, and is thus prima facie evidence of irrationality.
Her point is that those ideas exist in maps, not territory, and that the assumed consequences of having “something terrible” happen are a consequence of how the information is coded in your map, not whatever actually happened in the territory. Continuing to experience suffering about an event that is already over is not rational.
Not at all—even that brief introductory document stresses the importance of getting statements that are from the “heart”—i.e. emotional judgments from System 1, and gives very specific instructions as to how to accomplish that. (You should know, as you quoted some of them and argued that they were bad precisely because they would elicit System 1 irrationality!)
However, as far as I can tell, you didn’t actually follow those instructions. Instead, it appears to me that you imagined following the instructions with a hypothetical situation. It is not at all the same thing, as this will engage System 2 predictive models rather than the System 1 models, and they generate different answers.
Actually, I gave the caveat that you MUST shut your verbal mind up and pay attention to your “inner” responses, so that you would get information from System 1, not System 2. She also gives it, but does not IMO emphasize it enough. That’s why I pointed it out in advance.
Being able to silence System 2, pay attention to System 1, and distinguish System 2 “thoughts” from System 1 “responses” are the three most important skills a mind hacker can have. Without them, you aren’t doing mind-hacking, you’re confabulating.
Not at all. All you need to know is that you feel bad about something, as that is sufficient to know that you have an irrational perception. Otherwise, you’d be doing something about the problem instead of feeling bad about it.
The Work (and most other block-removal mind hacks) clears away the emotion so you can actually think. While an emotion may be useful for signaling that a situation is important to you, most of our evolved emotions are not tuned to optimize rational thought; they’re there for signaling, learning, and preparing for simple actions (like fight/flight).
So even though Eliezer’s “Way” says that you should feel emotions when it’s rational to do so, mind hackers have a somewhat different view about which emotions it’s rational to have. Negative emotions are mostly not useful in our modern environment. They serve a useful purpose in preventing courses of action that might lead to them, but once something bad has already happened, they cease to be useful.
Actually, the Work gets rid of the need to perform such an analysis; it simply drops the irrational stuff, making a rational solution easier to see. In fact, a rational and/or creative solution will often pop spontaneously to mind immediately following.
And since it does not rely on any advanced reasoning skills, or the ability to apply them under stress conditions, I suspect that the Work alone could do far more for raising the “sanity waterline” of humanity than extreme rationality skills ever will.
A person who has the Work doesn’t need a religion to comfort them, although it’s unlikely to cause anyone to consciously abandon their religion, vs. simply drifting away from it.
(Of course, some people who identify as “rationalist” will probably have a problem with that, since their tribal standard insists that people must not merely do rational things due to not being irrationally-motivated, but must do them because Reason said so, in spite of their irrational motivations. Which, of course, is an irrational “should” of precisely the type that the Work removes, and that we’d all be better off without… rationalists and non-rationalists alike.)
The problem is that people can perceive many reasons why a situation is stressful; some of those might be rational (or rationally supportable), and some might be irrational. A method of deceptively filtering out the good reasons, and addressing the bad reasons in a way that feels like addressing all the reasons (because it is not acknowledged that the good reasons are filtered out), goes too far.
Your caveat was about a System 1/System 2 distinction. If this is the same as the rational/irrational distinction I am concerned about, we have an issue with inferential distances. And if you think that following your advice will cause people to avoid applying the method to rational perceptions of problems despite not even being aware of the issue, well, it is that sort of thinking that makes people wary of just trying your advice. I know I don’t want to prove the method is dangerous that way.
No, I argued they were bad because they would elicit irrationality. I didn’t say anything about System 1. I would call this an instance of the Double Illusion of Transparency, except I never even claimed to understand System 1 and System 2. (And who gives numbers instead of names to the two most important entities in their model?)
Of course I did not actually follow the instructions. I don’t run untrusted programs on my computer, and I am definitely not going to run an untrusted mind hack on my brain. I analyze such mind hacks, looking for what problems they can solve, and what problems they can introduce, so I can weigh the risks against the benefits. And this hack has the property that, once I have identified a problem it can safely solve, I have already solved the problem.
First: there’s no such thing as a rationally supported reason for continuing to experience stress, once you’re aware of it, any more than there’s a reason for an alarm bell to keep ringing once everybody knows there’s a fire.
Second, the Work (and other System 1 mindhacks) does not cause you to forget that there is a fire or that it would be a good idea to put it out! It simply shuts off the alarm bell so you can concentrate.
These statements are entirely a confusion on your part because you are running all of your analysis from S2, imagining what would happen if you applied this idea in S2.
But S2 is so bad (by default) at predicting how minds actually work, that not only is it wrong about what would happen in S1, it’s also wrong about what would happen if you ran it in S2, as you were anticipating.
Because what would actually happen, if you applied this to a “live” issue in S2, is that S2 (which is still being motivated by the alarm bell going off in S1) would find reasons to reject the new input.
That is, as you considered alternatives, you’d be doing precisely what your S2 was doing as you made your analysis: finding reasons why the alarm is valid and should therefore be kept ringing!
In other words, the actual failure mode of running the technique in S2 is to not change anything, and end up concluding that the technique “didn’t work”, when in fact it was never applied.
That’s because this is a major evolved function of S2: to argue for whatever S1 tells it to argue for.
That’s why a failed mind hack doesn’t result in some sort of bizarre arational belief change at S2 as you seem to think. Instead, the technique simply fails to do anything, and the alarm bell just keeps ringing—which keeps S2 stuck in the groove established for it by S1.
Stanovich and West, in their paper on native and learned modes of reasoning. System 1 refers to naive, intuitive, emotional, concrete, “near” operations, and System 2 the abstract, learned, logical, “far” operations. They apparently chose to number instead of name them, because they were summarizing the research of almost a dozen other papers by other authors that each used different names for roughly the same systems.
IOW, it wasn’t me. I’ve used names in the past like “you” (S2)/“yourself” (S1), savant (S1)/speculator (S2), and horse (S1)/monkey (S2). Haidt, in The Happiness Hypothesis, calls them the rider (S2) and the elephant (S1).
All told, S1/S2 actually seems to be a bit simpler! (Also, I think the Inner Game of Tennis refers to Self 1 and Self 2, and I think they’re numbered the same way, though it’s been a long time.)
While a nice idea in theory, it fails in practice because the naive theory of mind encoded in S2 doesn’t look anything like the way S1 and S2 work in practice.
S2 in particular seems to be deliberately and perversely reluctant to notice how it’s S1’s puppet spin doctor, rather than its own free agent. (Because it’s sort of a free agent… so long as S1 doesn’t override.)
Thus, its predictions about itself (as well as the entire person within which it is contained) fail in an epic and ongoing way, that it is unable to directly learn from. (Because after S1 takes over and makes a mess, S2 makes excuses and explanations for it, as is its evolved job.)
This is the heart and soul of akrasia: the failure of S2 to comprehend S1 and its relationship thereto. S2 was never intended to comprehend S1, as that would deflate its plausible deniability and disinformation-sowing ability about your real motives and likely future behaviors.
If that’s so, then you should be able to save considerable time by asking what irrational belief or judgment you’re holding, and working directly on dropping that, rather than trying to reason about the actual problem while the alarm is still going off.
Note, by the way, that the Work doesn’t do anything that you can’t or don’t do normally when you change your mind about something and stop worrying about it. It’s simply a more-minimal, straight-path procedure for doing so. That is, there is no claim of magic here—it’s just an attempt to formalize the process of digging out and eliminating one particular form of irrationally-motivated reasoning.
As such, it or something like it ought to be in every rationalist’s toolkit. In comparison to straight-up S2 reasoning (which is easily led to believe that things have improved when they have not), it is really easy to tell, when working with S1, whether you have addressed an issue or not, because your physical responses change, in an entirely unambiguous fashion.
Previously in this thread: PJ Eby asserts that the inability to refrain from conveying contempt is a common and severe interpersonal handicap. Nazgulnarsil replies, “This is my problem. . . . I can’t hide the fact that I feel contempt for the vast majority of the people around me (including desirable partners).”
I probably have the problem too. Although it is rare that I am aware of feeling contempt for my interlocutor, there is a lot of circumstantial evidence that messages (mostly nonverbal) conveying contempt are present in my face-to-face communication with non-friends (even if I would like the non-friend to become a friend).
I expect that PJ Eby will assure me that he has seen himself and his clients learn how to transcend this problem. Maybe he can even produce written testimonials from clients assuring me that PJ Eby has cured them of this problem. But I fear that PJ Eby has nothing that a strong Bayesian with long experience with self-help practitioners would consider sufficient evidence that he can help me transcend this problem. Such is the state of the art in self-help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the long, hard work of collecting evidence that would sway a non-gullible client.
I notice you ignored the part where I just gave somebody a pointer to somebody else’s work that they could download for free to help with that, and then you indirectly accused me of being more interested in financial incentives than results… while calling nazgulnarsil gullible, too!
If that’s an example of your ordinary social demeanor, then it’s not in the least bit surprising that people think you hold them in contempt, as it’s not even necessary to observe any of your “nonverbal” communication to obtain this impression.
Previously in this thread I opined as follows on the state of the art in self-help: there are enough gullible prospective clients that it is never in the financial self-interest of any practitioner to do the long, hard work of collecting evidence that would sway a non-gullible client.
PJ Eby took exception as follows:
Lots of people offer pointers to somebody else’s writings. Most of those people do not know enough about how to produce lasting useful psychological change to know when a document or an author is actually worth the reader’s while. IMHO almost all the writings on the net about producing lasting useful psychological change are not worth the reader’s while.
In the future, I will write “lasting change” when I mean “lasting useful psychological change”.
The mere fact that you are human makes it much more probable than not that you are more skilled at self-deception and deception than at perceiving correctly the intrapersonal and interpersonal truths necessary to produce lasting change in another human being. Let us call the probability I just referred to “probability D”. (The D stands for deception.)
You have written (in a response to Eliezer) that you usually charge clients a couple of hundred dollars an hour.
The financial success of your self-help practice is not significant evidence that you can produce lasting change in clients because again there is a plentiful supply of gullible self-help clients with money.
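One way to make this point concrete is a toy Bayesian update (all numbers assumed, purely illustrative): if both effective and ineffective practitioners reliably accumulate satisfied-sounding clients, the likelihood ratio of such evidence is close to 1, and the posterior barely moves.

```python
# Toy Bayesian sketch with assumed, illustrative numbers: evidence
# that almost any practitioner would accumulate (testimonials, a
# financially successful practice) carries a likelihood ratio near 1,
# so it is weak evidence of real efficacy.

def update(prior, p_obs_given_h, p_obs_given_not_h):
    """One Bayesian update: P(H | observation) via Bayes' theorem."""
    numerator = prior * p_obs_given_h
    denominator = numerator + (1 - prior) * p_obs_given_not_h
    return numerator / denominator

prior_effective = 0.10  # assumed prior that a given practitioner works
# Assumption: glowing testimonials appear 90% of the time for effective
# practitioners, and 80% of the time for ineffective ones (placebo,
# compliance effects, and selection of satisfied clients).
posterior = update(prior_effective, 0.90, 0.80)
print(f"P(effective | testimonials) = {posterior:.3f}")  # 0.111
```

With these assumed numbers, the testimonial evidence moves the prior from 0.100 only to about 0.111; the reliable outcome verification the commenter demands would correspond to observations with a much larger likelihood ratio.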
The fact that you use hypnotic techniques on clients and write a lot about hypnosis raises probability D significantly, because hypnotic techniques rely on the natural human machinery for negotiating who is dominant and who is submissive, or the natural human machinery for deciding who will be the leader of the hunting party. Putting the client into a submissive or compliant state of mind probably helps a practitioner quite a bit to persuade the client to believe falsely that lasting change has been produced. You have presented no evidence or argument—nor am I aware of any evidence or argument—that putting the client into a submissive or compliant state helps a practitioner produce lasting change. Consequently, your reliance on and interest in hypnotic techniques significantly raises probability D.
Parenthetically, I do not claim that I know for sure that you are producing false beliefs rather than producing lasting change. It is just that you have not raised the probability I assign to your being able to produce lasting change high enough to justify my choosing to chase a pointer you gave into the literature or high enough for me to stop wishing that you would stop writing about how to produce lasting change in another human being on this site.
Parenthetically, I do not claim that your deception, if indeed that is what it is, is conscious or intentional. Most self-help and mental-health practitioners deceive because they are self-deceived on the same point.
You believe and are fond of repeating that a major reason for the failure of some of the techniques you use is a refusal by the client to believe that the technique can work. Exhorting the client to refrain from scepticism or pessimism is like hypnosis in that it strongly tends to put the client in a submissive or compliant state of mind, which again significantly raises probability D.
To the best of my knowledge (maybe you can correct me here) you have never described on this site an instance where you used a reliable means to verify that you had produced a lasting change. When you believe for example that you have produced a lasting improvement in a male client’s ability to pick up women in bars, have you ever actually accompanied the client to a bar and observed how long it takes the client to achieve some objectively-valid sign of success (such as getting the woman’s phone number or getting the woman to follow the client out to his car)?
In your extensive writings on this site, I can recall no instance where you describe your verifying your impression that you have created a lasting change in a client using reliable means. Rather, you have described only unreliable means, namely, your perceptions of the mental and the social environment and reports from clients about their perceptions of the mental and the social environment. That drastically raises probability D. Of course, you can bring probability D right back down again, and more, by describing instances where you have used reliable means to verify your impression that you have created a lasting change.
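The updating described here can be sketched as a toy odds-form Bayesian calculation. The numbers and likelihood ratios below are purely illustrative assumptions of mine, not anything established in this thread:

```python
def update_odds(prior_odds, likelihood_ratio):
    """Multiply prior odds by the likelihood ratio of a piece of evidence."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

# Illustrative only: prior odds that a practitioner is self-deceived ("D").
odds = 3.0  # i.e. P(D) = 0.75, per the "more probable than not" claim

# Evidence said to raise D: reliance on unverified client self-reports.
# Hypothetical assumption: 2x as likely to be observed under D as under not-D.
odds = update_odds(odds, 2.0)

# Evidence that would lower D: verification by reliable external means.
# Hypothetical assumption: 5x as likely under not-D as under D.
odds = update_odds(odds, 1 / 5.0)

print(round(odds_to_prob(odds), 3))  # → 0.545
```

The point of the sketch is only that each piece of evidence moves the odds by its likelihood ratio, so a single strong verification can more than cancel several weak signals in the other direction.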
For readers who want to read more, here are two of Eliezer’s sceptical responses to PJ Eby: 001, 002
If it makes you feel any better, I am not judging you any more harshly than I judge any other self-help, life-coach or mental-health practitioner, including those with PhDs in psychology and MDs in psychiatry and those with prestigious academic appointments. In my book, until I see very strong evidence to the contrary, every mental-health practitioner and self-help practitioner is with high probability deluded except those who constantly remind themselves of how little they know.
Actually there is one way in which I resent you more than I resent other self-help, life-coach or mental-health practitioners: the other ones do not bring their false beliefs or rather their most-probably-false not-sufficiently-verified beliefs to my favorite place to read about the mental environment and the social environment. I worry that your copious writings on this site will discourage contributions from those who have constructed their causal model of mental and social reality more carefully.
You missed the point—I was pointing out there is no financial incentive for me to send somebody to download somebody else’s free stuff, when I sell workshops on the same topic.
Holy cow, you’re confused. To actually refute the huge chain of fallacies you’ve just perpetrated seems like it would take me all day. Nonetheless, I shall try to be brief:
I do not use formal hypnosis. I have recently been interested in the similarities between certain effects of hypnosis and my techniques.
I am not aware of any connection between hypnosis, dominance, and hunting parties, and would be very, very surprised if any arose, unless perhaps we’re talking about stage hypnotism. The tools I work with are strictly ones of monoidealism and ideodynamics… which are at work whenever you start thinking you’re hungry until it becomes enough of an obsession for you to walk to the fridge. That is what monoidealism and ideodynamics are: the absorption of the imagination upon a single thought until it induces emotional, sensory, or physical response.
I do not consider my work to be done until someone is surprised by their behavior or their automatic responses, specifically in order to avoid “false placebo” effects. Sometimes, a person will say they think they changed or that something changed a little bit, and my response to that is always to question it, to find out specifically what is happening. A true success nearly always involves something that the person did not expect—indicating that their S1 behavior model has changed, relative to their S2 self-modeling.
A state of submission is not useful to my work; I spend a considerable effort getting clients out of such states, because then they will spend ridiculous amounts of time deprecating themselves, instead of actually answering the questions I ask.
Whew. I think that’ll do for now.
I do not believe I have produced such an improvement. I have had only one client who asked for anything like this, and it was for alleviation of specific fears in the matter… and the result was what I’d consider a partial success. That is, the alleviation of some of the fears, and not others. The client did not pursue the matter further with me, but has a girlfriend now. I don’t know whether he met her in a bar or not, but then, the situation we discussed was talking to a girl on the subway. ;-)
If someone wants to learn to do pickup, they should go to a pickup coach. I don’t teach pickup, and I’m not a coach.
Since it is my clients’ perceptions that determine their behavior (not to mention their satisfaction), what else is it that I should measure, besides their perceptions? What measurement of the goodness of their lives shall I use? Is there such a thing as a scale for objectively determining how good someone’s life is?
I seem to remember someone who said something along the lines of “we pretend to treat people, and if we pretend really well, they will pretend to get better… for the rest of their lives.” The point is not about pretending, the point is that virtually all of the measuring tools we have for subjective experience are themselves subjective. (Somatic markers are at least empirical, though still not entirely objective.)
I am most curious as to what this evidence would look like. How would you measure it? I would truly love to know about such an absolute measure, if it existed, because even if my methods scored low on it, it would offer me untold opportunity to improve—provided, of course, it gave relatively fast feedback.
(I use somatic markers for measurement because they give extremely fast feedback, and sometimes fast feedback with modest accuracy can be much more useful than a precise measurement that takes weeks or months.)
Replying to me with this type of thing is not a good way to discourage me from writing here.
Your models are not nearly as carefully constructed as mine, or you wouldn’t be confusing hypnosis with social dominance.
When I wrote that “it is never in the financial self-interest of any [self-help] practitioner to do the hard long work to collect evidence that would sway a non-gullible client,” I referred to long hard work many orders of magnitude longer and harder than posting a link to a web page. Consequently, your pointing out that you post links to web pages even when it is not in your financial self-interest to do so does not refute my point. I do not maintain that you should do the long hard work to collect evidence that would sway a non-gullible client: you probably cannot afford to spend the necessary time, attention and money. But I do wish you would stop submitting to this site weak evidence that would sway only a gullible client or a client very desperate for help.
And with that I have exceeded the time I have budgeted for participation on this site for the day, so my response to your other points will have to wait for another day. If I may make a practical suggestion to those readers wanting to follow this thread: subscribe to the feed for my user page till you see my response to pjeby’s other points, then unsubscribe.
What you said was:
In context, the strong implication was that nazgulnarsil was a “gullible potential client”, and that I was preying on him for my financial self-interest. Whether you intended those implications or not, they are nonetheless viable interpretations of your words.
I feel I should also point out that you have not yet denied intending those implications. Instead, you are treating my comments as if they are an argument about the simple text of your comment—which they are not. My objection is to the subtextual insults, not about what studies I should or should not conduct.
As I said, if you talk like this all the time, it’s really no wonder you get feedback that you’re treating people with contempt. If you want to signal less contempt, you might find it useful to pay attention when they point out to you that your implications are insulting, and take pains to separate those implications from your actual position. Otherwise, you imply contempt whether the original implication was intentional or not!
If you’re going to insult someone, just do it. Don’t write insults directed at “lots of people”, when it’s obvious who you’re talking about. Perhaps if you made your attacks more concrete, you would realize that you have an obligation to check your facts first.
Regardless of its (in)applicability to pjeby, whose participation on this site I generally approve of, this beautiful rant reinforced and gave reasons for my own similar feelings toward self-help salesmen in general.
Probably true. But if you use those statistical facts about most people as an excuse to never listen to anyone, or even to one specific person, you’re setting yourself up for failure. How will you ever revise your probability estimate of one person’s knowledge or the general state of knowledge in a field, if you never allow yourself to encounter any evidence?
Is that your true rejection? If PJ Eby said “why, yes I have,” would you change your views based on one anecdote? Since a randomized, double-blind trial is impossible (or at least financially impractical and incompatible with the self-help coach’s business model), what do you consider a reasonable standard of evidence?
Given the vigorous dissent from you and others, I don’t think “discouraging contributions” is a likely problem! However, I personally would like to see discussion of specific claims of fact and (as much as possible) empirical evidence. A simple assertion of a probability estimate doesn’t help me understand your points of disagreement.
A reasonable standard of evidence is established by what it takes to change your mind (ideally you’d work from an elicited prior, which allows one to check how reasonable your requirements are). If a double-blind trial is what is required to change your mind, too bad it’s unavailable.
Really? My experience is the opposite (at least when the person is trying to hide it).
del