Internalizing Internal Double Crux
In sciences such as psychology and sociology, internalization involves the integration of attitudes, values, standards and the opinions of others into one’s own identity or sense of self.
Internal Double Crux is one of the most important skills I’ve ever learned. In the last two weeks, I’ve solved some serious, long-standing problems with IDC (permanently, as far as I can tell, and often in less than 5 minutes), a small sample of which includes:
Belief that I have intrinsically less worth than others
Belief that others are intrinsically less likely to want to talk to me
Belief that attendance at events I host is directly tied to my worth
Disproportionately negative reaction to being stood up
Long-standing phobia of bees and flies
I feel great, and I love it. Actually, most of the time I don’t feel amazingly confident—I just feel not bad in lots of situations. Apparently this level of success with IDC across such a wide range of problems is unusual. Some advice, and then an example.
The emotional texture of the dialogue is of paramount importance. There should be a warm feeling between the two sides, as if they were two best friends who are upset with each other, but also secretly appreciate each other and want to make things right.
Each response should start with a sincere and emotional validation of some aspect of the other side’s concern. In my experience, this feels like emotional ping pong.
For me, resolution of the issue is accompanied by a warm feeling that rises to my throat in a bubble-ish way. My heart also feels full. This is similar to (but distinct from) the ‘aww’ feeling you may experience when you see cute animals.
Focusing is an important (and probably necessary) sub-skill.
Don’t interrupt or otherwise obstruct one of your voices because it’s “stupid” or has “talked long enough”—be respectful. The outcome should not feel pre-ordained—two of your sub-agents / identities should share their emotional and mental models and come to a fixed point of harmonious agreement.
Some beliefs aren’t explicitly advocated by any part of you, and are instead propped up by certain memories. You can use Focusing to home in on those memories, and then employ IDC to resolve your ongoing reactions to them.
Most importantly, the arguments being made should be emotionally salient and not just detached, “empty” words. In my experience, if I’m totally “in my head”, any modification of my System 1 feelings is impossible.
Note: this entire exchange took place internally over the course of 2 minutes, via a 50-50 mix of words and emotions. Unpacking it took significantly longer.
I may write more of these if this is helpful for people.
Dialogue
If I don’t get this CHAI internship, I’m going to feel terrible, because that means I don’t have much promise as an AI safety researcher.
Realist: Not getting the internship suggests you’re miscalibrated on your potential. Someone promising enough to eventually become a MIRI researcher would be able to snag this, no problem. I feel worried that we’re poorly calibrated and setting ourselves up for disappointment when we fall short.
Fire: I agree that not getting the internship would be evidence that there are others who are more promising right now. I think, however, that you’re missing a few key points here:
We’ve made important connections at CHAI / MIRI.
Your main point is a total buckets error. There is no ontologically-basic and immutable “promising-individual” property. Granted, there are biological and environmental factors outside our control here, but I think we score high enough on these metrics to be able to succeed through effort, passion, and increased mastery of instrumental rationality.
We’ve been studying AI safety for just a few months (in our free time, no less); most of the studying has been dedicated towards building up foundational skills (and not reviewing the literature itself). The applicants who are chosen may have a year or more of familiarity with the literature / relevant math on us (or perhaps not), and this should be included in the model.
One of the main sticking points raised during my final interview has since been fixed, but I couldn’t signal that afterwards without seeming overbearing.
I guess the main thrust here is that although that would be a data point against our being able to have a tectonic impact right now, we simply don’t have enough evidence to responsibly generalize. I’m worried that you’re overly pessimistic, and it’s pulling down our chances of actually being able to do something.
Realist: I definitely hear you that we’ve made lots of great progress, but is it enough? I’m so nervous about timelines, and the universe isn’t magically calibrated to what we can do now.* We either succeed, or we don’t—and pay the price. Do we really have time to tolerate almost being extraordinary? How is that going to do the impossible? I’m scared.
Fire: Yup. I’m definitely scared too (in a sense), but also excited. This is a great chance to learn, grow, have fun, and work with people we really admire and appreciate! Let’s detach the grim-o-meter, since that’s better than being worried and insecure about whether we’re doing enough.
Realist: I agree that detaching the grim-o-meter is the right thing to do, but… it makes me feel guilty.* I guess there’s a part of me that believes that feeling bad when things could go really wrong is important.
Concern: Hey, that’s me! Yeah, I’m really worried that if we detach that grim-o-meter, we’ll become callous and flippant and carefree. I don’t know if that’s a reasonable concern, but the prospect makes me feel really queasy. Shouldn’t we be really worried?
Realist: Actually, I don’t know. Fire made a good point—the world will probably end up slightly better if we don’t care about the grim-o-meter…
Fire: Hell yeah it will! What are we optimizing for here—an arbitrary deontological rule about feeling bad, or the actual world? We aren’t discarding morality – we’re discarding the idea that we should worry when the world is in a probably precarious position. We’ll still fight just as hard.
* Notice how related cruxes can (and should) be resolved in the same session. Resolution cannot happen if any part of you isn’t fully on board with whatever agreement you’ve come to—this feels like a small emptiness in the pit of my stomach, in my experience.
ETA 2020: retouched word choice in some places.
I’ve had a similar experience in that confidence doesn’t feel like anything in particular.
I had actually read that and was trying to remember the name so I could link, thanks!
I really liked this post :)
I haven’t attended the CFAR unit myself, but I have talked with multiple people who did. One thing I wasn’t sure of was how to go about naming the parts.
From NLP Six Step Reframing, I’m used to giving internal parts simple one-word names. On the other hand, I heard from people who attended the workshop that it was recommended to give the parts names that contain more information, and that short descriptions work well as part names.
Do you think it matters whether the part is named “Realist” or “Not getting the internship means no promise” (or something similar)?
That’s a good point! I abbreviated the names for ease of reading. Also, “realist” can seem to imply “the other side isn’t being reasonable”. The emotional texture was different, so it worked out fine, but that’s a failure mode to avoid (names should be affirming the position they represent, without demeaning the other side).
Perhaps
Want to avoid disappointment
We’re awesome and can do important things if we set ourselves up properly
Preserving natural parts of human experience
I don’t know how helpful these names are. When Duncan taught it, the example he gave was (paraphrased) “I want to be healthy” and “I deserve rest.”
I think finding good names is important, but in my experience, I just get names that are loosely representative of the sides’ affirmative positions and then go from there. Again, the emotional texture is important, and I think one of the reasons the names are emphasized is that it’s a good way of making sure the texture is conducive to resolution. If you can get a respectful emotional texture and clear communication of models between the sides, maybe it works out even without super-defensible names.
Thanks for mentioning Duncan’s “I want to be healthy” and “I deserve rest.” That one resonated with me, so I immediately did it with “Hardcore Comet King” and “I’m a human too who deserves comfort.” Situation: taking a cold shower to be more focused when meditating.
===============================================
Comfort: *inner scream* cold showers suck!! I don’t like it at all.
Comet King: Yes, it sucks, but it’s only a temporary discomfort until AI takeoff, and then you can have all the comfort you could want.
Comfort: …
Me: Comfort could you summarize what was said?
Comfort: Cold showers suck, but once AI takeoff happens, I’ll have a lot of comfort.
[Then I remembered death]
Comfort: *inner scream*
Comet King: If you’ll allow these smaller discomforts, we’ll have a greater chance at avoiding the greater discomforts.
===============================================
And then I took a cold shower.
(I don’t feel like I fully captured the conversation; I think there was some more dialogue.)
I’m not sure how to mesh this idea (IDC / fusion) with meditation, specifically noticing intentions. I can either notice the “aversion to taking a cold shower” and focus on it until it fades and goes away, or do IDC/fusion so that those aversions/thoughts won’t show up in the first place.
I would say the second one is better, but I’m a novice at both, so I might be misrepresenting them. There might also be other relationships between these ideas that I’ve completely missed.
Glad to hear that that resonated!
I have separate thoughts on cold showers, but I do find doing IDC to permanently deal with aversions to be far superior.
I’m not sure that CFAR has been through enough iterations to know what works better.
Does that mean that the above names (Realist / Fire / Concern) are the ones you actually used?
If IDC works for you better than average and you use names like that, that would be some Bayesian evidence that those kinds of names are better.
From doing parts work in other paradigms, I have a prior that one-word names create more emotional valence, so I think this is worth exploring more deeply.
Now that I think about it, I realize that I usually don’t bother explicitly naming the subagents. Rather, I have each subagent iterate on how they feel—and about what—until it’s clear what’s going on and what the concerns are. This may or may not involve actual names for the agents.
I also do IDC exclusively in my head, substituting feelings and thoughts for explicit mental verbalizations when convenient.
I wouldn’t recommend this to people just starting out—having a format in which you name the sides and write out the positions seems helpful for structuring your process. As you become more familiar with what the right emotional textures feel like, try streamlining.
For me, naming the subagents is ~85% of the work. Once that’s happened, they usually iterate back and forth a handful of times (e.g., 5), and then the issue is resolved in a matter of minutes (i.e., <5).
Do you use one word names or more descriptive ones?
“Ally-building” vs. “Risk-neutrality” was a recent one, where the former thought that a low-probability, high-reward strategy was bad, so I felt bad when I failed. Once I realised this was the debate, it was easy to let Risk-neutrality ask the right questions and bring Ally-building around to the true position (and no longer feel bad).
It sounds like your naming process is actually Focusing. For me, the names don’t matter as much, and I just have a conversation involving Focusing to figure out what the parts want.