The reference you recommend seems to advocate changing one’s attitude by engaging in a sequence of biases.
First, one is supposed to construct a strawman of one’s reasons for not liking someone or something:
I invite you to be judgmental, harsh, childish, and petty. Write with the
spontaneity of a child who is sad, angry, confused, or frightened.
Rather than seeking out one’s true objection, one should express their dislike in terms of their pettiest reasons, and identify with that expression. And one should “Simply pick a person or situation and write, using short, simple sentences”, discouraging deep explanation, which in turn discourages deep understanding. An important filter is bypassed, allowing the bad reasons to mix with the good. The question “What is it that they should or shouldn’t do, be, think, or feel?” in the context of asking one’s opinions is a setup to appear to commit the Mind Projection Fallacy. Priming someone to say “X should” when they mean “I want X to” so you can later say “In reality, there is no such thing as a ‘should’ or a ‘shouldn’t.’” is a sneaky debating trick.
And then, the strawman is subjected to unreasonable standards:
Can you absolutely know that it’s true?
Of course one can not absolutely know that it’s true, one should not assign probability 1 to anything. Does one have a large accumulation of evidence that causes one to have high confidence that it’s true? That seems like a more reasonable question, which one of course should apply to one’s true objection.
Then the question is asked:
How do you react, what happens, when you believe that thought?
which would be fine if it were setting up to ask, “Is that reaction constructive? Are there more constructive ways you could react?”. But instead, the follow up is:
And finally, there is the “Turn it around” concept. Now, holding oneself to the same standards one expects of others is good, but a big problem comes from asking one to “find three genuine, specific examples of how the turnaround is true in your life”. This is advocating the Confirmation Bias. One is encouraged to find supporting evidence for the turnaround, but not contradicting evidence. If I have a problem with someone for being a chronic liar, it does not make sense for me to think it is OK because I can recall three times she told the truth, or three times I told a lie. What does make sense is to notice her unusually high proportion of lies to honest statements, and to not believe what she tells me without corroboration, and maybe even associate instead with others who reliably give me truthful information.
If this is the sort of mind hack you advocate, it is no wonder that people express skepticism instead of trying it. After all, our sister site is not called “Embracing Bias”.
The reference you recommend seems to advocate changing one’s attitude by engaging in a sequence of biases.
It’s engaging in System 1 thinking, which of course has a different set of biases than System 2 thinking. The object is to activate the relevant System 1 biases, and then update the information stored there.
one should express their dislike in terms of their pettiest reasons, and identify with that expression.
Absolutely. How else would you expect to reconsolidate the memory trace, without first activating it?
Rather than seeking out one’s true objection, …
You mean your System 2 explanation whose function is to make your System 1 bias appear more righteous or socially acceptable. That “true objection”?
And one should “Simply pick a person or situation and write, using short, simple sentences”, discouraging deep explanation, which in turn discourages deep understanding. An important filter is bypassed, allowing the bad reasons to mix with the good.
Precisely. We don’t want System 2 to verbally overshadow the irrational basis for your reactions, by filtering them out and replacing them with good-sounding explanations.
The question “What is it that they should or shouldn’t do, be, think, or feel?” in the context of asking one’s opinions is a setup to appear to commit the Mind Projection Fallacy.
Actually, it’s an attempt to identify what conditioned standard or ideal you believe the person is violating, creating your irrational reaction.
Priming someone to say “X should” when they mean “I want X to” so you can later say “In reality, there is no such thing as a ‘should’ or a ‘shouldn’t.’” is a sneaky debating trick.
Of course it’s a debating trick. If fair, logical reasoning worked on System 1, there’d be no need for mindhacking, would there?
Of course one can not absolutely know that it’s true, one should not assign probability 1 to anything.
And you are discussing this with System 2 reasoning—i.e., abstract reasoning. When you ask yourself this question about a specific thing, e.g., “can I absolutely know it’s true that Tom should listen to me?”, it is a request to query System 1 for your implicit epistemology on that particular topic. That is, how would you know if it were true? What if it weren’t? How would you know that? In the process, this retrieves relevant memories, making them available for reconsolidation.
You are confusing a concrete system 1 practice with abstract system 2 reasoning. Again, if the two were the same, we would have no need for mindhacking, and the Dark Arts could not exist.
(That being said, I’ve honestly never found this particular question that useful, compared to questions 1, 3, and 4.)
Does one have a large accumulation of evidence that causes one to have high confidence that it’s true? That seems like a more reasonable question
Indeed. However, if you were to translate that to a System 1 question, it’d be more like, “How do I know that it’s true?”. That is, something closer to a simple query for sensory data, than a question calling for abstract judgment. (I’ve actually used this question.)
which one of course should apply to one’s true objection.
One’s “true objection” is of course in most cases an irrational, childish thing. If not, one would likely not be experiencing a problem or feelings that cause one to want to engage in this process in the first place.
Then the question is asked: How do you react, what happens, when you believe that thought? which would be fine if it were setting up to ask, “Is that reaction constructive? Are there more constructive ways you could react?”. But instead, the follow up is: Who would you be without the thought?
Again, we need to distinguish System 1 and 2 thinking. “Is that reaction constructive?” and “Are there more constructive ways you could react?” are abstract questions that lead to a literal answer of “yes”… not to memory reconsolidation.
“Who would you be without that thought?” is a presuppositional query that invites you to imagine (on a sensory, System 1 level) what you would be like if you didn’t believe what you believe. This is a sneaky trick to induce memory reconsolidation, linking an imagined, more positive reaction to the point in your memory where the existing decision path was.
This question, in other words, is a really good mind hack.
Mind hacking questions are not asked to get answers, they are questions with side-effects.
Hm, who would I be if it didn’t bother me to have my face burned? Probably the sort of person who doesn’t avoid being touched in the face by hot pokers.
You are equating physical and emotional pain; the Work is a process for getting rid of emotional pain created by moral judgments stored in System 1, not logical judgments arrived at by System 2.
And finally, there is the “Turn it around” concept. Now, holding oneself to the same standards one expects of others is good, but a big problem comes from asking one to “find three genuine, specific examples of how the turnaround is true in your life”. This is advocating the Confirmation Bias. One is encouraged to find supporting evidence for the turnaround, but not contradicting evidence.
On the contrary, it is countering confirmation bias. Whatever belief you are modifying has been keeping you from noticing those counterexamples previously. Notice, btw, that Katie advises not doing the turnarounds until after the existing belief has been updated: this is because when you firmly believe something, you react negatively to the suggestion of looking for counterexamples, and tend to assume you’ve done a good job of looking for them, even though you haven’t.
So instead, the first two questions are directed at surfacing your real (sensory, System 1) evidence for the belief, so that you can then update with various specific classes of counterexample. Questions 3 and 4, for example, associate pain to the belief, and pleasure to the condition of being without it, providing a counterexample in that dimension. The turnaround searches provide hypocrisy-puncturing evidence that you are not really acting to the same standards you hold others to, and that your expectations are unrealistic, thus providing another kind of counterexample.
If I have a problem with someone for being a chronic liar, it does not make sense for me to think it is OK because I can recall three times she told the truth, or three times I told a lie.
You will not arrive at useful information about the process by discussing it in the abstract. Pick a specific situation and belief, and actually try it.
What does make sense is to notice her unusually high proportion of lies to honest statements, and to not believe what she tells me without corroboration, and maybe even associate instead with others who reliably give me truthful information.
Sure. And if you can do that without an emotional reaction that clouds your judgment or makes you send off unwanted signals, great! The Work is a process for getting rid of System 1 reactions, not a way of replacing System 2 reasoning.
If this is the sort of mind hack you advocate, it is no wonder that people express skepticism instead of trying it. After all, our sister site is not called “Embracing Bias”.
Mind hacking is working on System 1 to obtain behavioral change, not engaging in System 2 reasoning to result in “truth”.
That’s because, when your System 2 tries to reason about your behavior, it usually verbally overshadows System 1, and ends up confabulating… which is why pure System 2 reasoning is absolutely atrocious at changing problematic behaviors and emotions.
(Edit to add: Btw, I don’t consider The Work to be a particularly good form of mindhacking. IMO, it doesn’t emphasize testing enough, doesn’t address S1/S2 well, and has a rather idiosyncratic set of questions. I personally use a much wider range of questions and sequences of questions to accomplish different things, and last, but far from least, I don’t unquestioningly endorse all of Katie’s philosophy. Nonetheless, I recommend the Work to people because, performed properly it works on certain classes of things, and can be a gentle introduction to the subject. Another good book is “Re-Create Your Life” by Morty Lefkoe, which provides a difference evidence-based reconsolidation process, but The Work has the advantage of having a free online introduction.)
IAWYC, but you didn’t need to quote and refute every sentence to get the point across about System 1 and System 2 and our real vs. signaled reasons for affective reactions. It’s a question of style, not content, but I think you’d communicate your ideas much more effectively to me and to others here at LW if you focused on being concise.
You mean your System 2 explanation whose function is to make your System 1 bias appear more righteous or socially acceptable. That “true objection”?
No, I mean if, for example, it bothers you that your roommate never comes through with his share of the rent, you would not want to focus on how you get annoyed by his stupid shrug (which you probably only find annoying and stupid because you associate it with him and his flakiness).
The question “What is it that they should or shouldn’t do, be, think, or feel?” in the context of asking one’s opinions is a setup to appear to commit the Mind Projection Fallacy.
Actually, it’s an attempt to identify what conditioned standard or ideal you believe the person is violating, creating your irrational reaction.
Priming someone to say “X should” when they mean “I want X to” so you can later say “In reality, there is no such thing as a ‘should’ or a ‘shouldn’t.’” is a sneaky debating trick.
Of course it’s a debating trick. If fair, logical reasoning worked on System 1, there’d be no need for mindhacking, would there?
You seem to have gone from missing my point to agreeing with it. If you understood why I said it is a debating trick, why would you argue that it is something else instead?
On the contrary, it is countering confirmation bias.
If it were countering the confirmation bias, it would ask something like, “Consider the last ten times X had an opportunity to do Y. How many of those times did X actually do Y?” (If the answer is ten, it seems probable that this method has the wrong goal.) And even with that, you have to be careful not to overdo it. You would not want to excuse someone punching you in the face when you meet, just because he only actually does it one time in twenty. But asking for three examples ever of not doing the wrong thing, and expecting people to change their minds based on that, is encouraging irrationality. That just is not enough evidence.
If I have a problem with someone for being a chronic liar, it does not make sense for me to think it is OK because I can recall three times she told the truth, or three times I told a lie.
You will not arrive at useful information about the process by discussing it in the abstract. Pick a specific situation and belief, and actually try it.
It seems I wasn’t clear. My problem with the method in this case is not that it wouldn’t work. My problem is that it might work, and I would lose my ability to protect myself against those who would manipulate my behavior through lies.
You seem pretty much in agreement with my impression that the method is to engage in a series of biases, but you seem to actually think this is a good thing, because somehow these biases will exactly cancel out other biases the person already has. I see no reason to expect this precise balance from the method. I expect someone who uses this method to forgive people they should not forgive (that is, the forgiveness is not in their interest), and to be easy to take advantage of. After all, they do not have XML tags that say “Should not be taken advantage of”, and they could imagine not being bothered by it.
Contrast this with dedicated rationality, confronting your biases head on, acknowledging the real extent of things that bother you, neither exaggerating nor downplaying. You would actually recognize the difference between an abusive relationship and not always getting your way.
No, I mean if, for example, it bothers you that your roommate never comes through with his share of the rent, you would not want to focus on how you get annoyed by his stupid shrug (which you probably only find annoying and stupid because you associate it with him and his flakiness).
But the purpose, if you were doing The Work, would be to focus on that stupid shrug and his “flakiness”, precisely so that you can drop them from consideration. Presumably (I assume this is a hypothetical situation), you would be having a judgment like “he shouldn’t be flaky” or “he should be responsible”, or some other character judgment based on his behavior. The point of the Work is not to do rational computations, it’s to drop emotional attachments from the system whose job it is to make you reinforce your tribe’s value system through emotional displays.
Once you’ve dropped whatever “irrational” stuff you have going on, the reasoning about what to practically do gets a LOT easier. As often as not, the first thing that happens upon thinking about the problem afterward is that an obviously sensible solution pops into your head, that you feel you can actually execute—like, say, calmly starting the search for a new roommate.
If it were countering the confirmation bias, it would ask something like, “Consider the last ten times X had an opportunity to do Y. How many of those times did X actually do Y?”
Yeah, that’s totally not the point of the exercise. The point is to drop the emotional judgments that are clouding your reasoning, not to perform reasoning. You do the reasoning after your head is free of the attachment. Because while System 1 is thinking “irresponsible” and “flaky”—i.e., “this person is violating the rules of my tribe”, System 2 tends to stay busy figuring out what arguments to use to accuse him with in front of the tribe, instead of actually trying to solve the problem.
(If the answer is ten, it seems probable that this method has the wrong goal.)
No, it means you’ve directed the tool to the wrong target: you’re not supposed to apply it to the practical problem, you apply it to the most emotional, irrational thoughts you have about the problem… the ones that System 2 likes to keep swept under the rug.
It seems I wasn’t clear. My problem with the method in this case is not that it wouldn’t work. My problem is that it might work, and I would lose my ability to protect myself against those who would manipulate my behavior through lies.
No, because what you’re supposed to use it on is ideas like, “People should be responsible” or “People should do their share”, or whatever “tribal standard” you have an emotional attachment to, interfering with your reasoning.
Some people here, for example, get in a tizzy about theists or self-help gurus or some other group that is violating their personal tribal standards. For a while, I got in a tizzy about people here violating one of mine, i.e. “people should listen to me”. I used the Work on it, and then quit beating that particular dead horse.
But note that this does not mean I now think that people are listening to me more, or that I now believe I have nothing to say, or anything like that. All I did by dropping the “should” is that I no longer react emotionally to the fact that some people listen to some things more than others. That is now a “mere fact” to me, such that I can still prefer to be listened to, but not experience a negative response to the reverse.
You seem pretty much in agreement with my impression that the method is to engage in a series of biases, but you seem to actually think this is a good thing, because somehow these biases will exactly cancel out other biases the person already has. I see no reason to expect this precise balance from the method.
That’s because you’re imagining using it for something that it’s specifically not intended to be used on. It is aimed at System 1 (aka “the heart”, in Katie’s terminology) rather than System 2 (aka “the head”). It’s not for changing your intellectual appraisal of the situation, it’s for removing the emotion that says tribal standards are being violated by a member of the tribe.
That’s why the emphasis is on “shoulds”, and simple, emotional language—the real target is whatever standard you’ve imprinted as a “moral”, usually at a young age.
I expect someone who uses this method to forgive people they should not forgive (that is, the forgiveness is not in their interest), and to be easy to take advantage of. After all, they do not have XML tags that say “Should not be taken advantage of”, and they could imagine not being bothered by it.
It doesn’t interfere with your rational desire not to be taken advantage of. You will still prefer that not to happen. You just won’t have emotions clouding your judgment about doing something about it.
Have you ever noticed how often people complain and complain about someone else’s behavior, but never actually do anything about it? This is a fix for that.
(Hypothesis: in a tribal environment, individual enforcement of an important group standard is less advantageous than bringing the matter before the tribe, where you can signal your own compliance with the standard and your willingness to punish violations, without having to take on all the risks and costs of private justice. Thus, our emotions of judgment and outrage are evolved to motivate us to expose the violation, rather than taking action on our own. The Work and certain of my own techniques appear to switch off the emotional trigger associated with the standard, ironically freeing one to contemplate whatever enforcement or alternative responses one rationally sees fit.)
Well, it is good that you have some discretion about what issues you use this technique on, but the document you referenced quite clearly states “The first step in The Work is to write down your judgments about any stressful situation in your life” (emphasis added). In the question and answer section, it goes so far as to say, “No one has ever hurt anyone. No one has ever done anything terrible.” This criterion of applying it only to beliefs that are actually irrational seems to be something you added, and only communicated when pressed. Referencing this method without the caveat seems likely to teach people to be half a mind hacker, subject to the problems I described.
So, it seems, before one can use this method effectively, one must perform a rational analysis of which beliefs about which perceptions of problems are truly irrational. I usually find that once I complete this analysis, I am done; I have concluded the perception is irrational and that is enough to dismiss it. If other people need some trick to get some numbered system in their brains to accept the rational conclusion, so be it.
Well, it is good that you have some discretion about what issues you use this technique on, but the document you referenced quite clearly states “The first step in The Work is to write down your judgments about any stressful situation in your life”
It says to write down your judgments… and goes on to define the language in which such judgments are to be expressed, i.e., judgmentally. i.e., emotionally and irrationally.
IOW, the premise is that if you were actually rational about a situation, you would not be stressed.
If your face is being approached by a red-hot poker, and you are being rational, you will move, or do whatever else is necessary to stop it, but you will not be experiencing the same type of “stress” as a person who is worrying that somebody might stick them with a red-hot poker at some point in the future, and that there’s nothing they can do about it.
So yes, you can apply the technique to “any stressful situation”, because it is not rational to remain stressed, instead of either 1) taking action, or 2) deciding it’s not worth taking action. Stress arises from not doing either of those two things, and is thus prima facie evidence of irrationality.
In the question and answer section, it goes so far as to say, “No one has ever hurt anyone. No one has ever done anything terrible.”
Her point is that those ideas exist in maps, not territory, and that the assumed consequences of having “something terrible” happen are a consequence of how the information is coded in your map, not whatever actually happened in the territory. Continuing to experience suffering about an event that is already over is not rational.
This criterion of applying it only to beliefs that are actually irrational seems to be something you added,
Not at all—even that brief introductory document stresses the importance of getting statements that are from the “heart”—i.e. emotional judgments from System 1, and gives very specific instructions as to how to accomplish that. (You should know, as you quoted some of them and argued that they were bad precisely because they would elicit System 1 irrationality!)
However, as far as I can tell, you didn’t actually follow those instructions. Instead, it appears to me that you imagined following the instructions with a hypothetical situation. It is not at all the same thing, as this will engage System 2 predictive models rather than the System 1 models, and they generate different answers.
and only communicated when pressed. Referencing this method without the caveat seems likely to teach people to be half a mind hacker, subject to the problems I described.
Actually, I gave the caveat that you MUST shut your verbal mind up and pay attention to your “inner” responses, so that you would get information from System 1, not System 2. She also gives it, but does not IMO emphasize it enough. That’s why I pointed it out in advance.
Being able to silence System 2, pay attention to System 1, and distinguish System 2 “thoughts” from System 1 “responses” are the three most important skills a mind hacker can have. Without them, you aren’t doing mind-hacking, you’re confabulating.
So, it seems, before one can use this method effectively, one must perform a rational analysis of which beliefs about which perceptions of problems are truly irrational
Not at all. All you need to know is that you feel bad about something, as that is sufficient to know that you have an irrational perception. Otherwise, you’d be doing something about the problem instead of feeling bad about it.
The Work (and most other block-removal mind hacks) clears away the emotion so you can actually think. While an emotion may be useful for signaling that a situation is important to you, most of our evolved emotions are not tuned to optimize rational thought; they’re there for signaling, learning, and preparing for simple actions (like fight/flight).
So even though Eliezer’s “Way” says that you should feel emotions when it’s rational to do so, mind hackers have a somewhat different view about which emotions it’s rational to have. Negative emotions are mostly not useful in our modern environment. They serve a useful purpose in preventing courses of action that might lead to them, but once something bad has already happened, they cease to be useful.
I usually find that once I complete this analysis, I am done; I have concluded the perception is irrational and that is enough to dismiss it. If other people need some trick to get some numbered system in their brains to accept the rational conclusion, so be it.
Actually, the Work gets rid of the need to perform such an analysis; it simply drops the irrational stuff, making a rational solution easier to see. In fact, a rational and/or creative solution will often pop spontaneously to mind immediately following.
And since it does not rely on any advanced reasoning skills, or the ability to apply them under stress conditions, I suspect that the Work alone could do far more for raising the “sanity waterline” of humanity than extreme rationality skills ever will.
A person who has the Work doesn’t need a religion to comfort them, although it’s unlikely to cause anyone to consciously abandon their religion, vs. simply drifting away from it.
(Of course, some people who identify as “rationalist” will probably have a problem with that, since their tribal standard insists that people must not merely do rational things due to not being irrationally-motivated, but must do them because Reason said so, in spite of their irrational motivations. Which, of course, is an irrational “should” of precisely the type that the Work removes, and that we’d all be better off without… rationalists and non-rationalists alike.)
It says to write down your judgments… and goes on to define the language in which such judgments are to be expressed, i.e., judgmentally. i.e., emotionally and irrationally.
The problem is that people can perceive many reasons why a situation is stressful; some of those might be rational (or rationally supportable), and some might be irrational. A method of deceptively filtering out the good reasons, and addressing the bad reasons in a way that feels like addressing all the reasons (because it is not acknowledged that the good reasons are filtered) goes too far.
Actually, I gave the caveat that you MUST shut your verbal mind up and pay attention to your “inner” responses, so that you would get information from System 1, not System 2. She also gives it, but does not IMO emphasize it enough. That’s why I pointed it out in advance.
Your caveat was about a System 1/System 2 distinction. If this is the same as the rational/irrational distinction I am concerned about, we have an issue with inferential distances. And if you think that following your advice will cause people to avoid applying the method to rational perceptions of problems despite not even being aware of the issue, well, it is that sort of thinking that makes people wary of just trying your advice. I know I don’t want to prove the method is dangerous that way.
(You should know, as you quoted some of them and argued that they were bad precisely because they would elicit System 1 irrationality!)
No, I argued they were bad because they would elicit irrationality. I didn’t say anything about System 1. I would call this an instance of the Double Illusion of Transparency, except I never even claimed to understand System 1 and System 2. (And who gives numbers instead of names to the two most important entities in their model?)
However, as far as I can tell, you didn’t actually follow those instructions. Instead, it appears to me that you imagined following the instructions with a hypothetical situation.
Of course I did not actually follow the instructions. I don’t run untrusted programs on my computer, and I am definitely not going to run an untrusted mind hack on my brain. I analyze such mind hacks, looking for what problems they can solve, and what problems they can introduce, so I can weigh the risks against the benefits. And this hack has the property that, once I have identified a problem it can safely solve, I have already solved the problem.
The problem is that people can perceive many reasons why a situation is stressful, some of those might be rational (or rationally supportable), and some might be irrational.
First: there’s no such thing as a rationally supported reason for continuing to experience stress, once you’re aware of it, any more than there’s a reason for an alarm bell to keep ringing once everybody knows there’s a fire.
Second, the Work (and other System 1 mindhacks) does not cause you to forget that there is a fire or that it would be a good idea to put it out! It simply shuts off the alarm bell so you can concentrate.
A method of deceptively filtering out the good reasons, and addressing the bad reasons in a way that feels like addressing all the reasons (because it is not acknowledged that the good reasons are filtered) goes too far. … And if you think that following your advice will cause people to avoid applying the method to rational perceptions of problems despite not even being aware of the issue, well, it is that sort of thinking that makes people wary of just trying your advice. I know I don’t want to prove the method is dangerous that way.
These statements are entirely a confusion on your part because you are running all of your analysis from S2, imagining what would happen if you applied this idea in S2.
But S2 is so bad (by default) at predicting how minds actually work, that not only is it wrong about what would happen in S1, it’s also wrong about what would happen if you ran it in S2, as you were anticipating.
Because what would actually happen, if you applied this to a “live” issue in S2, is that S2 (which is still being motivated by the alarm bell going off in S1) would find reasons to reject the new input.
That is, as you considered alternatives, you’d be doing precisely what your S2 was doing as you made your analysis: finding reasons why the alarm is valid and should therefore be kept ringing!
In other words, the actual failure mode of running the technique in S2 is to not change anything, and end up concluding that the technique “didn’t work”, when in fact it was never applied.
That’s because this is a major evolved function of S2: to argue for whatever S1 tells it to argue for.
That’s why a failed mind hack doesn’t result in some sort of bizarre arational belief change at S2 as you seem to think. Instead, the technique simply fails to do anything, and the alarm bell just keeps ringing—which keeps S2 stuck in the groove established for it by S1.
(And who gives numbers instead of names to the two most important entities in their model?)
Stanovich and West, in their paper on native and learned modes of reasoning. System 1 refers to naive, intuitive, emotional, concrete, “near” operations, and System 2 the abstract, learned, logical, “far” operations. They apparently chose to number instead of name them, because they were summarizing the research of almost a dozen other papers by other authors that each used different names for roughly the same systems.
IOW, it wasn’t me. I’ve used names in the past like “you”(S2)/”yourself”(S1), savant(S1)/speculator(S2), and horse(S1)/monkey(S2). Haidt, in The Happiness Hypothesis, calls them the monkey(S2) and the elephant(S1).
All told, S1/S2 actually seems to be a bit simpler! (Also, I think the Inner Game of Tennis refers to Self 1 and Self 2, and I think they’re numbered the same way, though it’s been a long time.)
Of course I did not actually follow the instructions. I don’t run untrusted programs on my computer, and I am definitely not going run an untrusted mind hack on my brain. I analyze such mind hacks, looking for what problems it can solve, and what problems in can introduce, so I can weigh the risk against the benifets.
While a nice idea in theory, it fails in practice because the naive theory of mind encoded in S2 doesn’t look anything like the way S1 and S2 work in practice.
S2 in particular seems to be deliberately and perversely reluctant to notice how it’s S1′s puppet spin doctor, rather than its own free agent. (Because it’s sort of a free agent… so long as S1 doesn’t override.)
Thus, its predictions about itself (as well as the entire person within which it is contained) fail in an epic and ongoing way, that it is unable to directly learn from. (Because after S1 takes over and makes a mess, S2 makes excuses and explanations for it, as is its evolved job.)
This is the heart and soul of akrasia: the failure of S2 to comprehend S1 and its relationship thereto. S2 was never intended to comprehend S1, as that would deflate its plausible deniability and disinformation-sowing ability about your real motives and likely future behaviors.
this hack has the property that, once I have identified a problem it can safely solve, I have already solved the problem.
If that’s so, then you should be able to save considerable time by asking what irrational belief or judgment you’re holding, and working directly on dropping that, rather than trying to reason about the actual problem while the alarm is still going off.
Note, by the way, that the Work doesn’t do anything that you can’t or don’t do normally when you change your mind about something and stop worrying about it. It’s simply a more-minimal, straight-path procedure for doing so. That is, there is no claim of magic here—it’s just an attempt to formalize the process of digging out and eliminating one particular form of irrationally-motivated reasoning.
As such, it or something like it ought to be in every rationalist’s toolkit. In comparison to straight-up S2 reasoning (which is easily led to believe that things have improved when they have not), it is really easy to tell, when working with S1, whether you have addressed an issue or not, because your physical responses change, in an entirely unambiguous fashion.
The reference you recommend seems to advocate changing one’s attitude by engaging in a sequence of biases.
First, one is supposed to construct a strawman of their reasons for not liking someone or something:
Rather than seeking out one’s true objection, one should express their dislike in terms of their pettiest reasons, and identify with that expression. And one should “Simply pick a person or situation and write, using short, simple sentences”, discouraging deep explanation, which in turn discourages deep understanding. An important filter is bypassed, allowing the bad reasons to mix with the good. The question “What is it that they should or shouldn’t do, be, think, or feel?” in the context of asking one’s opinions is a setup to appear to commit the Mind Projection Fallacy. Priming someone to say “X should” when they mean “I want X to” so you can later say “In reality, there is no such thing as a ‘should’ or a ‘shouldn’t.’” is a sneaky debating trick.
And then, the strawman is subjected to unreasonable standards:
Of course one cannot absolutely know that it’s true; one should not assign probability 1 to anything. Does one have a large accumulation of evidence that causes one to have high confidence that it’s true? That seems like a more reasonable question, which one of course should apply to one’s true objection.
Then the question is asked:
which would be fine if it were setting up to ask, “Is that reaction constructive? Are there more constructive ways you could react?”. But instead, the follow up is:
Hm, who would I be if it didn’t bother me to have my face burned? Probably the sort of person who doesn’t avoid being touched in the face by hot pokers.
And finally, there is the “Turn it around” concept. Now, holding oneself to the same standards one expects of others is good, but a big problem comes from asking one to “find three genuine, specific examples of how the turnaround is true in your life”. This is advocating the Confirmation Bias. One is encouraged to find supporting evidence for the turnaround, but not contradicting evidence. If I have a problem with someone for being a chronic liar, it does not make sense for me to think it is OK because I can recall three times she told the truth, or three times I told a lie. What does make sense is to notice her unusually high proportion of lies to honest statements, and to not believe what she tells me without corroboration, and maybe even associate instead with others who reliably give me truthful information.
If this is the sort of mind hack you advocate, it is no wonder that people express skepticism instead of trying it. After all, our sister site is not called “Embracing Bias”.
It’s engaging in System 1 thinking, which of course has a different set of biases than System 2 thinking. The object is to activate the relevant System 1 biases, and then update the information stored there.
Absolutely. How else would you expect to reconsolidate the memory trace, without first activating it?
You mean your System 2 explanation whose function is to make your System 1 bias appear more righteous or socially acceptable. That “true objection”?
Precisely. We don’t want System 2 to verbally overshadow the irrational basis for your reactions, by filtering them out and replacing them with good-sounding explanations.
Actually, it’s an attempt to identify what conditioned standard or ideal you believe the person is violating, creating your irrational reaction.
Of course it’s a debating trick. If fair, logical reasoning worked on System 1, there’d be no need for mindhacking, would there?
And you are discussing this with System 2 reasoning—i.e., abstract reasoning. When you ask yourself this question about a specific thing, e.g., “can I absolutely know it’s true that Tom should listen to me?”, it is a request to query System 1 for your implicit epistemology on that particular topic. That is, how would you know if it were true? What if it weren’t? How would you know that? In the process, this retrieves relevant memories, making them available for reconsolidation.
You are confusing a concrete System 1 practice with abstract System 2 reasoning. Again, if the two were the same, we would have no need for mindhacking, and the Dark Arts could not exist.
(That being said, I’ve honestly never found this particular question that useful, compared to questions 1, 3, and 4.)
Indeed. However, if you were to translate that to a System 1 question, it’d be more like, “How do I know that it’s true?”. That is, something closer to a simple query for sensory data, than a question calling for abstract judgment. (I’ve actually used this question.)
One’s “true objection” is of course in most cases an irrational, childish thing. If not, one would likely not be experiencing a problem or feelings that cause one to want to engage in this process in the first place.
Again, we need to distinguish System 1 and 2 thinking. “Is that reaction constructive?” and “Are there more constructive ways you could react?” are abstract questions that lead to a literal answer of “yes”… not to memory reconsolidation.
“Who would you be without that thought?” is a presuppositional query that invites you to imagine (on a sensory, System 1 level) what you would be like if you didn’t believe what you believe. This is a sneaky trick to induce memory reconsolidation, linking an imagined, more positive reaction to the point in your memory where the existing decision path was.
This question, in other words, is a really good mind hack.
Mind hacking questions are not asked to get answers, they are questions with side-effects.
You are equating physical and emotional pain; the Work is a process for getting rid of emotional pain created by moral judgments stored in System 1, not logical judgments arrived at by System 2.
On the contrary, it is countering confirmation bias. Whatever belief you are modifying has been keeping you from noticing those counterexamples previously. Notice, btw, that Katie advises not doing the turnarounds until after the existing belief has been updated: this is because when you firmly believe something, you react negatively to the suggestion of looking for counterexamples, and tend to assume you’ve done a good job of looking for them, even though you haven’t.
So instead, the first two questions are directed at surfacing your real (sensory, System 1) evidence for the belief, so that you can then update with various specific classes of counterexample. Questions 3 and 4, for example, associate pain to the belief, and pleasure to the condition of being without it, providing a counterexample in that dimension. The turnaround searches provide hypocrisy-puncturing evidence that you are not really acting to the same standards you hold others to, and that your expectations are unrealistic, thus providing another kind of counterexample.
You will not arrive at useful information about the process by discussing it in the abstract. Pick a specific situation and belief, and actually try it.
Sure. And if you can do that without an emotional reaction that clouds your judgment or makes you send off unwanted signals, great! The Work is a process for getting rid of System 1 reactions, not a way of replacing System 2 reasoning.
Mind hacking is working on System 1 to obtain behavioral change, not engaging in System 2 reasoning to result in “truth”.
That’s because, when your System 2 tries to reason about your behavior, it usually verbally overshadows System 1, and ends up confabulating… which is why pure System 2 reasoning is absolutely atrocious at changing problematic behaviors and emotions.
(Edit to add: Btw, I don’t consider The Work to be a particularly good form of mindhacking. IMO, it doesn’t emphasize testing enough, doesn’t address S1/S2 well, and has a rather idiosyncratic set of questions. I personally use a much wider range of questions and sequences of questions to accomplish different things, and last, but far from least, I don’t unquestioningly endorse all of Katie’s philosophy. Nonetheless, I recommend the Work to people because, performed properly, it works on certain classes of things, and can be a gentle introduction to the subject. Another good book is “Re-Create Your Life” by Morty Lefkoe, which provides a different evidence-based reconsolidation process, but The Work has the advantage of having a free online introduction.)
IAWYC, but you didn’t need to quote and refute every sentence to get the point across about System 1 and System 2 and our real vs. signaled reasons for affective reactions. It’s a question of style, not content, but I think you’d communicate your ideas much more effectively to me and to others here at LW if you focused on being concise.
No, I mean if, for example, it bothers you that your roommate never comes through with his share of the rent, you would not want to focus on how you get annoyed by his stupid shrug (which you probably only find annoying and stupid because you associate it with him and his flakiness).
You seem to have gone from missing my point to agreeing with it. If you understood why I said it is a debating trick, why would you argue that it is something else instead?
If it were countering the confirmation bias, it would ask something like, “Consider the last ten times X had an opportunity to do Y. How many of those times did X actually do Y?” (If the answer is ten, it seems probable that this method has the wrong goal.) And even with that, you have to be careful not to overdo it. You would not want to excuse someone punching you in the face when you meet, just because he only actually does it one time in twenty. But asking for three examples ever of not doing the wrong thing, and expecting people to change their minds based on that, is encouraging irrationality. That just is not enough evidence.
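To make the evidential point concrete, here is a toy Bayesian sketch (the truth-rates and counts are my own illustrative assumptions, not anything taken from The Work): a random sample of recent statements can strongly favor the “chronic liar” hypothesis, while three deliberately searched-for truthful examples carry almost no evidential weight, because both hypotheses predict that such a search will succeed.

```python
from math import comb

def binom_likelihood(k_true, n, p_truth):
    """Binomial likelihood of observing k_true truthful statements
    out of n, given a hypothesized truth-rate p_truth."""
    return comb(n, k_true) * p_truth**k_true * (1 - p_truth)**(n - k_true)

# Illustrative (made-up) truth rates for the two hypotheses:
P_HONEST = 0.95  # an ordinarily honest person
P_LIAR = 0.30    # a chronic liar

# Proper evidence: of the last 10 statements (sampled, not searched
# for), only 3 were true. The likelihood ratio overwhelmingly favors
# the "chronic liar" hypothesis:
lr_random_sample = (binom_likelihood(3, 10, P_LIAR)
                    / binom_likelihood(3, 10, P_HONEST))

# The turnaround's evidence: "find three examples, ever, of her
# telling the truth." Over a long enough history, BOTH hypotheses
# predict the search succeeds (even a 30% truth-rate supplies three
# truths eventually), so the observation has a likelihood ratio of
# roughly 1 and should move one's beliefs almost not at all:
lr_search = 1.0 / 1.0
```

The point of the sketch is only that evidence selected to confirm a conclusion has a likelihood ratio near 1, whereas a fair sample of the same behavior can shift the odds by many orders of magnitude.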
It seems I wasn’t clear. My problem with the method in this case is not that it wouldn’t work. My problem is that it might work, and I would lose my ability to protect myself against those who would manipulate my behavior through lies.
You seem pretty much in agreement with my impression that the method is to engage in a series of biases, but you seem to actually think this is a good thing, because somehow these biases will exactly cancel out other biases the person already has. I see no reason to expect this precise balance from the method. I expect someone who uses this method to forgive people they should not forgive (that is, the forgiveness is not in their interest), and will be easy to take advantage of. After all, they do not have XML tags that say “Should not be taken advantage of”, and they could imagine not being bothered by it.
Contrast this with dedicated rationality, confronting your biases head on, acknowledging the real extent of things that bother you, neither exaggerating nor downplaying. You would actually recognize the difference between an abusive relationship and not always getting your way.
But the purpose, if you were doing The Work, would be to focus on that stupid shrug and his “flakiness”, precisely so that you can drop them from consideration. Presumably (I assume this is a hypothetical situation), you would be having a judgment like “he shouldn’t be flaky” or “he should be responsible”, or some other character judgment based on his behavior. The point of the Work is not to do rational computations, it’s to drop emotional attachments from the system whose job it is to make you reinforce your tribe’s value system through emotional displays.
Once you’ve dropped whatever “irrational” stuff you have going on, the reasoning about what to practically do gets a LOT easier. As often as not, the first thing that happens upon thinking about the problem afterward is that an obviously sensible solution pops into your head, that you feel you can actually execute—like, say, calmly starting the search for a new roommate.
Yeah, that’s totally not the point of the exercise. The point is to drop the emotional judgments that are clouding your reasoning, not to perform reasoning. You do the reasoning after your head is free of the attachment. Because while System 1 is thinking “irresponsible” and “flaky”—i.e., “this person is violating the rules of my tribe”, System 2 tends to stay busy figuring out what arguments to use to accuse him with in front of the tribe, instead of actually trying to solve the problem.
No, it means you’ve directed the tool to the wrong target: you’re not supposed to apply it to the practical problem, you apply it to the most emotional, irrational thoughts you have about the problem… the ones that System 2 likes to keep swept under the rug.
No, because what you’re supposed to use it on is ideas like, “People should be responsible” or “People should do their share”, or whatever “tribal standard” you have an emotional attachment to, interfering with your reasoning.
Some people here, for example, get in a tizzy about theists or self-help gurus or some other group that is violating their personal tribal standards. For a while, I got in a tizzy about people here violating one of mine, i.e. “people should listen to me”. I used the Work on it, and then quit beating that particular dead horse.
But note that this does not mean I now think that people are listening to me more, or that I now believe I have nothing to say, or anything like that. All I did by dropping the “should” is that I no longer react emotionally to the fact that some people listen to some things more than others. That is now a “mere fact” to me, such that I can still prefer to be listened to, but not experience a negative response to the reverse.
That’s because you’re imagining using it for something that it’s specifically not intended to be used on. It is aimed at System 1 (aka “the heart”, in Katie’s terminology) rather than System 2 (aka “the head”). It’s not for changing your intellectual appraisal of the situation, it’s for removing the emotion that says tribal standards are being violated by a member of the tribe.
That’s why the emphasis is on “shoulds”, and simple, emotional language—the real target is whatever standard you’ve imprinted as a “moral”, usually at a young age.
It doesn’t interfere with your rational desire not to be taken advantage of. You will still prefer that not to happen. You just won’t have emotions clouding your judgment about doing something about it.
Have you ever noticed how often people complain and complain about someone else’s behavior, but never actually do anything about it? This is a fix for that.
(Hypothesis: in a tribal environment, individual enforcement of an important group standard is less advantageous than bringing the matter before the tribe, where you can signal your own compliance with the standard and your willingness to punish violations, without having to take on all the risks and costs of private justice. Thus, our emotions of judgment and outrage are evolved to motivate us to expose the violation, rather than taking action on our own. The Work and certain of my own techniques appear to switch off the emotional trigger associated with the standard, ironically freeing one to contemplate whatever enforcement or alternative responses one rationally sees fit.)
Well, it is good that you have some discretion about what issues you use this technique on, but the document you referenced quite clearly states “The first step in The Work is to write down your judgments about any stressful situation in your life” (emphasis added). In the question and answer section, it goes so far as to say, “No one has ever hurt anyone. No one has ever done anything terrible.” This criterion of applying it only to beliefs that are actually irrational seems to be something you added, and only communicated when pressed. Referencing this method without the caveat seems likely to teach people to be half a mind hacker, subject to the problems I described.
So, it seems, before one can use this method effectively, one must perform a rational analysis of which beliefs about which perceptions of problems are truly irrational. I usually find that once I complete this analysis, I am done; I have concluded the perception is irrational and that is enough to dismiss it. If other people need some trick to get some numbered system in their brains to accept the rational conclusion, so be it.
It says to write down your judgments… and goes on to define the language in which such judgments are to be expressed: judgmentally, i.e., emotionally and irrationally.
IOW, the premise is that if you were actually rational about a situation, you would not be stressed.
If your face is being approached by a red-hot poker, and you are being rational, you will move, or do whatever else is necessary to stop it, but you will not be experiencing the same type of “stress” as a person who is worrying that somebody might stick them with a red-hot poker at some point in the future, and that there’s nothing they can do about it.
So yes, you can apply the technique to “any stressful situation”, because it is not rational to remain stressed, instead of either 1) taking action, or 2) deciding it’s not worth taking action. Stress arises from not doing either of those two things, and is thus prima facie evidence of irrationality.
Her point is that those ideas exist in maps, not territory, and that the assumed consequences of having “something terrible” happen are a consequence of how the information is coded in your map, not whatever actually happened in the territory. Continuing to experience suffering about an event that is already over is not rational.
Not at all—even that brief introductory document stresses the importance of getting statements that are from the “heart”—i.e. emotional judgments from System 1, and gives very specific instructions as to how to accomplish that. (You should know, as you quoted some of them and argued that they were bad precisely because they would elicit System 1 irrationality!)
However, as far as I can tell, you didn’t actually follow those instructions. Instead, it appears to me that you imagined following the instructions with a hypothetical situation. It is not at all the same thing, as this will engage System 2 predictive models rather than the System 1 models, and they generate different answers.
Actually, I gave the caveat that you MUST shut your verbal mind up and pay attention to your “inner” responses, so that you would get information from System 1, not System 2. She also gives it, but does not IMO emphasize it enough. That’s why I pointed it out in advance.
Being able to silence System 2, pay attention to System 1, and distinguish System 2 “thoughts” from System 1 “responses” are the three most important skills a mind hacker can have. Without them, you aren’t doing mind-hacking, you’re confabulating.
Not at all. All you need to know is that you feel bad about something, as that is sufficient to know that you have an irrational perception. Otherwise, you’d be doing something about the problem instead of feeling bad about it.
The Work (and most other block-removal mind hacks) clears away the emotion so you can actually think. While an emotion may be useful for signaling that a situation is important to you, most of our evolved emotions are not tuned to optimize rational thought; they’re there for signaling, learning, and preparing for simple actions (like fight/flight).
So even though Eliezer’s “Way” says that you should feel emotions when it’s rational to do so, mind hackers have a somewhat different view about which emotions it’s rational to have. Negative emotions are mostly not useful in our modern environment. They serve a useful purpose in preventing courses of action that might lead to them, but once something bad has already happened, they cease to be useful.
Actually, the Work gets rid of the need to perform such an analysis; it simply drops the irrational stuff, making a rational solution easier to see. In fact, a rational and/or creative solution will often pop spontaneously to mind immediately following.
And since it does not rely on any advanced reasoning skills, or the ability to apply them under stress conditions, I suspect that the Work alone could do far more for raising the “sanity waterline” of humanity than extreme rationality skills ever will.
A person who has the Work doesn’t need a religion to comfort them, although it’s unlikely to cause anyone to consciously abandon their religion, vs. simply drifting away from it.
(Of course, some people who identify as “rationalist” will probably have a problem with that, since their tribal standard insists that people must not merely do rational things due to not being irrationally-motivated, but must do them because Reason said so, in spite of their irrational motivations. Which, of course, is an irrational “should” of precisely the type that the Work removes, and that we’d all be better off without… rationalists and non-rationalists alike.)
The problem is that people can perceive many reasons why a situation is stressful, some of those might be rational (or rationally supportable), and some might be irrational. A method of deceptively filtering out the good reasons, and addressing the bad reason in a way that feels like addressing all the reasons (because it is not acknowledged that the good reasons are filtered) goes too far.
Your caveat was about a System 1/System 2 distinction. If this is the same as the rational/irrational distinction I am concerned about, we have an issue with inferential distances. And if you think that following your advice will cause people to avoid applying the method to rational perceptions of problems despite not even being aware of the issue, well, it is that sort of thinking that makes people wary of just trying your advice. I know I don’t want to prove the method is dangerous that way.
No, I argued they were bad because they would elicit irrationality. I didn’t say anything about System 1. I would call this an instance of the Double Illusion of Transparency, except I never even claimed to understand System 1 and System 2. (And who gives numbers instead of names to the two most important entities in their model?)
Of course I did not actually follow the instructions. I don’t run untrusted programs on my computer, and I am definitely not going to run an untrusted mind hack on my brain. I analyze such mind hacks, looking for what problems it can solve and what problems it can introduce, so I can weigh the risks against the benefits. And this hack has the property that, once I have identified a problem it can safely solve, I have already solved the problem.
First: there’s no such thing as a rationally supported reason for continuing to experience stress, once you’re aware of it, any more than there’s a reason for an alarm bell to keep ringing once everybody knows there’s a fire.
Second, the Work (and other System 1 mindhacks) does not cause you to forget that there is a fire or that it would be a good idea to put it out! It simply shuts off the alarm bell so you can concentrate.
These statements are entirely a confusion on your part because you are running all of your analysis from S2, imagining what would happen if you applied this idea in S2.
But S2 is so bad (by default) at predicting how minds actually work, that not only is it wrong about what would happen in S1, it’s also wrong about what would happen if you ran it in S2, as you were anticipating.
Because what would actually happen, if you applied this to a “live” issue in S2, is that S2 (which is still being motivated by the alarm bell going off in S1) would find reasons to reject the new input.
That is, as you considered alternatives, you’d be doing precisely what your S2 was doing as you made your analysis: finding reasons why the alarm is valid and should therefore be kept ringing!
In other words, the actual failure mode of running the technique in S2 is to not change anything, and end up concluding that the technique “didn’t work”, when in fact it was never applied.
That’s because this is a major evolved function of S2: to argue for whatever S1 tells it to argue for.
That’s why a failed mind hack doesn’t result in some sort of bizarre arational belief change at S2 as you seem to think. Instead, the technique simply fails to do anything, and the alarm bell just keeps ringing—which keeps S2 stuck in the groove established for it by S1.
Stanovich and West, in their paper on native and learned modes of reasoning. System 1 refers to naive, intuitive, emotional, concrete, “near” operations, and System 2 the abstract, learned, logical, “far” operations. They apparently chose to number instead of name them, because they were summarizing the research of almost a dozen other papers by other authors that each used different names for roughly the same systems.
IOW, it wasn’t me. I’ve used names in the past like “you”(S2)/“yourself”(S1), savant(S1)/speculator(S2), and horse(S1)/monkey(S2). Haidt, in The Happiness Hypothesis, calls them the rider(S2) and the elephant(S1).
All told, S1/S2 actually seems to be a bit simpler! (Also, I think the Inner Game of Tennis refers to Self 1 and Self 2, and I think they’re numbered the same way, though it’s been a long time.)
While a nice idea in theory, it fails in practice because the naive theory of mind encoded in S2 doesn’t look anything like the way S1 and S2 work in practice.
S2 in particular seems to be deliberately and perversely reluctant to notice how it’s S1’s puppet spin doctor, rather than its own free agent. (Because it’s sort of a free agent… so long as S1 doesn’t override.)
Thus, its predictions about itself (as well as the entire person within which it is contained) fail in an epic and ongoing way, that it is unable to directly learn from. (Because after S1 takes over and makes a mess, S2 makes excuses and explanations for it, as is its evolved job.)
This is the heart and soul of akrasia: the failure of S2 to comprehend S1 and its relationship thereto. S2 was never intended to comprehend S1, as that would deflate its plausible deniability and disinformation-sowing ability about your real motives and likely future behaviors.
If that’s so, then you should be able to save considerable time by asking what irrational belief or judgment you’re holding, and working directly on dropping that, rather than trying to reason about the actual problem while the alarm is still going off.
Note, by the way, that the Work doesn’t do anything that you can’t or don’t do normally when you change your mind about something and stop worrying about it. It’s simply a more-minimal, straight-path procedure for doing so. That is, there is no claim of magic here—it’s just an attempt to formalize the process of digging out and eliminating one particular form of irrationally-motivated reasoning.
As such, it or something like it ought to be in every rationalist’s toolkit. In comparison to straight-up S2 reasoning (which is easily led to believe that things have improved when they have not), it is really easy to tell, when working with S1, whether you have addressed an issue or not, because your physical responses change, in an entirely unambiguous fashion.