I’m actually kind of surprised that IFS seems so popular in rationalist-space, as I would’ve thought rationalists more likely to bite the bullet and accept the existence of their unendorsed desires as a simple matter of fact.
Some reasons for the popularity of IFS which seem true to me, and independent of whether you accept your desires:
It’s the main modality that rationalists happen to know which lets you do this kind of thing at all. The other popular one is Focusing, which isn’t always framed in terms of subagents, but in terms of the memory reconsolidation model it basically only does accessing; de- and reconsolidation will only happen to the extent that the accessing happens to trigger the brain’s spontaneous mismatch detection systems. (Also the Bio-Emotive Framework has gotten somewhat popular of late, but that’s a very recent development.)
Rationalists tend to really like reductionism, in the sense of breaking complex systems down into simpler parts that you can reason about. IFS is good at giving you various gears about how minds operate, e.g. turning previously incomprehensible emotional reactions into a completely sensible chain of parts triggering each other. (And this doesn’t feel substantially different from thinking in terms of e.g. schemas the way Coherence Therapy does; one is subagent-framed and the other isn’t, but the predictions seem to be essentially the same regardless of whether you think of schemas setting each other off or IFS-parts doing it.)
Many people have natural experiences of multiplicity, e.g. having the experience of an internal critic which communicates in internal speech; if your mind tends to natively represent things as subagents already, then it’s natural to be drawn to an approach which lets you use the existing interface. And even someone who doesn’t experience natural multiplicity is likely to have witnessed something like part-switching in others, especially if they’ve dealt with severely traumatized people.
IFS seems to offer some advantages that non-subagent approaches don’t; as an example, I noticed a bug earlier today and used Coherence Therapy’s “what does my brain predict would happen if I acted differently” technique to access a schema’s prediction… but then I noticed that I was getting impatient and trying to disprove that belief before I had established sufficient access to it, so I switched to treating the schema as a subagent that I could experience compassion and curiosity towards, and that helped deal with the sense of urgency. In general, the “internal compassion” frame seems to help with a lot of things, such as wanting to rush into solutions, or deciding that some particular bug isn’t so important to fix; and knowing about the qualities of Self, and having a procedure for getting there, is often helpful for setting those kinds of meta-problems aside.
That said, I do agree that sometimes simulating subagents seems to get in the way; I’ve had some IFS sessions where I did make progress, but it felt like the process wasn’t quite cutting reality at the joints, and I suspect that something like Coherence Therapy might have produced results quicker… and I also agree that
and that the kind of people drawn to rationalism might be extra-likely to want to disavow all their “irrational”-seeming desires!
is a thing. In my IFS training, it was said that “Self-like parts” (parts which pretend to be Self, and which mostly care about making the mind-system stable and bringing it under control) tend to be really strongly attracted towards any ideology or system which claims to offer a sense of control. I suspect that many of the people who are drawn to rationalism are indeed driven by a strong part/schema which strongly dislikes uncertainty, and likes the promise of e.g. objectively correct methods of thinking and reasoning that you can just adopt. This would go hand in hand with wanting to reject some of your desires entirely.
I don’t think IFS is good reductionism, though. That is, presupposing subagents in general is not a reduction in complexity from “you’re an agent”. That’s not actually reducing anything! It’s just multiplying entities contra Occam.
Now, if IFS said, “these are specific subagents that basically everyone has, which bias us towards learning specific types of evolutionarily advantageous behavior”, then that would actually be a reduction.
If IFS said, “brains have modules for these types of mental behavior”, (e.g. hiding, firefighting, etc.), then that would also be a reduction.
But dividing people into lots of mini-people isn’t a reduction.
The way I reduce the same landscape of things is to group functional categories of mental behavior as standard modules, and treat the specific things people are reacting to and the actual behaviors as data those modules operate on. This model doesn’t require any sort of agency, because it’s just rules and triggers. (And things like “critical voices” are just triggered mental behavior or memories, not actual agents, which is why they can often be disrupted by changing the way they sound—e.g. making them seductive or squeaky—while keeping the content the same. If there were an “agent” desiring to criticize, this technique wouldn’t make any sense.)
As for compassion, the equivalent in what I’m doing would be the connect stage in collect/connect/correct:
“Collect” is getting information about the problem, determining a specific trigger and automatic emotional response (the thing that we will test post-reconsolidation to ensure we got it)
“Connect” is surfacing the inner experience and memory or belief that drives the response, either as a prediction or learned evaluation
“Correct” is the reconsolidation part: establishing contradiction and generating new predictions, before checking if the automatic response from “Collect” changed
All of these require one to be able to objectively observe and communicate inner experience without any meta-level processing (e.g. judging, objecting, explaining, justifying, etc.), but compassion towards a “part” is not really necessary for that, just that one suppress commentary. (There are some specific techniques that involve compassion in the “Correct” phase, but that’s really part of creating a contradiction, not part of eliciting information.)
With respect to trauma and DID, I will only say that again, the subagent model is not reduction, because it doesn’t break things down into simpler elements. (In contrast, the concept of state-specific memory is a simpler element that can be used in modeling trauma and DID.)
dividing people into lots of mini-people isn’t a reduction.
And like, the post you’re responding to just spent several thousand words building up a version of IFS which explicitly doesn’t have “mini-people” and where the subagents are much closer to something like reinforcement learning agents which just try to prevent/achieve something by sending different objects to consciousness, and learn based on their success in doing so...
And like, the post you’re responding to just spent several thousand words building up a version of IFS
The presented model of Exiles, Managers, Firefighters, etc. all describes “parts” doing things, but the same ideas can be expressed without using the idea of “parts”, which makes that idea redundant.
For example, here is a simpler description of the same categories of behavior:
Everyone experiences things that are so painful, we never want to experience them again, even as a possibility. Since we’re not willing to experience them, behaviors that allow us to keep those experiences from consciousness are negatively reinforced. What gets reinforced varies depending on our previous experience, but typically we will learn to deny, deflect, rationalize, distract, or come up with long term goals (e.g. “I will be so perfect that nobody will ever reject me again”) in order to avoid the painful experience being even a theoretical possibility.
Voila! The same three things (Exile, Firefighter, Manager), described in less text and without the need for a concept of “parts”. I’m not saying this model is right and the IFS model is wrong, just that IFS isn’t very good at reductionism and fails Occam’s razor because it literally multiplies entities beyond necessity.
From this discussion and the one on reconsolidation, I would hazard a guess that to the extent IFS is more useful than some non-parts-based (non-partisan?) approach, it is because one’s treatment of the “parts” (e.g. with compassion) can potentially trigger a contradiction and therefore reconsolidation. (I would hypothesize, though, that in most cases this is a considerably less efficient way to do it than directly going after the actual reconsolidation.)
Also, as I mentioned earlier, there are times when the UTE (thing we’re Unwilling To Experience) is better kept conceptually dissociated rather than brought into the open, and in such a case the model of “parts” is a useful therapeutic metaphor.
But “therapeutic metaphor” and “reductionist model” are not the same thing. IFS has a useful metaphor—in some contexts—but AFAICT it is not a very good model of behavior, in the reductionist sense of modeling.
a version of IFS which explicitly doesn’t have “mini-people” and where the subagents are much closer to something like reinforcement learning agents which just try to prevent/achieve something by sending different objects to consciousness, and learn based on their success in doing so...
If I try to steelman this argument, I have to taboo “agent”, since otherwise the definition of subagent is recursive and non-reductionistic. I can taboo it to “thing”, in which case I get “things which just try to prevent/achieve something”, and now I have to figure out how to reduce “try”… do they try iteratively? When do they try? How do they know what to try?
As far as I can tell, the answers to all the important questions for actual understanding are pure handwavium here. And the numerical argument still stands, since new “things” are proposed for each group of things to prevent or achieve, rather than (say) a single “thing” whose purpose is to “prevent” other things, and one whose purpose is to “achieve” them.
Voila! The same three things (Exile, Firefighter, Manager), described in less text and without the need for a concept of “parts”.
If it was just that brief description, then sure, the parts metaphor would be unnecessary. But the IFS model contains all kinds of additional predictions and applications which make further use of those concepts.
For example, firefighters are called that because “they are willing to let the house burn down to contain the fire”; that is, when they are triggered, they typically act to make the pain stop, without any regard for consequences (such as loss of social standing). At the same time, managers tend to be terrified of exactly the kind of lack of control that’s involved with a typical firefighter response. This makes firefighters and managers typically polarized—mutually opposed—with each other.
Now, it’s true that you don’t need to use the “part” expression for explaining this. But if we only talked about various behaviors getting reinforced, we wouldn’t predict that the system simultaneously considers a loss of social standing to be a bad thing, and that it also keeps reinforcing behaviors which cause exactly that thing. Now, obviously it can still be explained in a more sophisticated reinforcement model, in which you talk about e.g. differing prioritizations in different situations, and some behavioral routines kicking in under different circumstances...
...but if at the end, this comes down to there being two distinct kinds of responses depending on whether you are trying to avoid a situation or are already in it, then you need names for those two categories anyway. So why not go with “manager” and “firefighter” while you’re at it?
And sure, you could call it, say, “a response pattern” instead of “part”—but the response pattern is still physically instantiated in some collection of neurons, so it’s not like “part” would be any less correct, or worse at reductionism. Either way, you still get a useful model of how those patterns interact to cause different kinds of behavior.
From this discussion and the one on reconsolidation, I would hazard a guess that to the extent IFS is more useful than some non-parts-based (non-partisan?) approach, it is because one’s treatment of the “parts” (e.g. with compassion) can potentially trigger a contradiction and therefore reconsolidation. [...] But “therapeutic metaphor” and “reductionist model” are not the same thing. IFS has a useful metaphor—in some contexts—but AFAICT it is not a very good model of behavior, in the reductionist sense of modeling.
I agree that the practical usefulness of IFS is distinct from the question of whether it’s a good model of behavior.
That said, if we are also discussing the benefits of IFS as a therapeutic method, then what you said is one aspect of what I think makes it powerful. Another is its conception of Self and unblending from parts.
I have had situations where, for instance, several conflicting thoughts are going around in my head, and identifying with all of them at the same time feels like being torn in several different directions. But I have then been able to unblend from each part, go into Self, and experience myself as listening to the parts’ concerns while remaining in Self; in some situations, I have been able to facilitate a dialogue between them and then feel fine.
IFS also has the general thing of “fostering Self-Leadership”, where parts are gradually convinced to remain slightly on the side as advisors, while keeping Self in control of things at all times. The narrative is something like, this can only happen if the Self is willing to take the concerns of _all_ the parts into account. The system learns to increasingly give the Self leadership, not because they would agree that the Self’s values would be better than theirs, but because they come to trust the Self as a leader which does its best to fulfill the values of all the parts. And this trust is only possible because the Self is the only part of the system which doesn’t have its own agenda, except for making sure that every part gets what it wants.
This is further facilitated by there being distinctive qualities of being in Self, and IFS users developing a “parts detector” which lets them notice when parts have been triggered, helping them unblend and return back to Self.
I’m not saying that you couldn’t express unblending in a non-partisan way. But I’m not sure how you would use it if you didn’t take the frame of parts and unblending from them. To be more explicit, by “use it” here I mean “be able to notice when you have been emotionally triggered, and then get some distance from that emotional reaction in the very moment when you are triggered, being able to see the belief in the underlying schema but neither needing to buy into it nor needing to reject it”.
(But of course, as you said, this is a digression to whether IFS is a useful mindhacking tool, which is distinct from the question of whether it’s good reductionism.)
If I try to steelman this argument, I have to taboo “agent”, since otherwise the definition of subagent is recursive and non-reductionistic. I can taboo it to “thing”, in which case I get “things which just try to prevent/achieve something”, and now I have to figure out how to reduce “try”...
I said a few words about my initial definition of agent in the sequence introduction:
One particular family of models that I will be discussing, will be that of multi-agent theories of mind. Here the claim is not that we would literally have multiple personalities. Rather, my approach will be similar in spirit to the one in Subagents Are Not A Metaphor:
Here are the parts composing my technical definition of an agent:
1. Values
This could be anything from literally a utility function to highly framing-dependent. Degenerate case: embedded in lookup table from world model to actions.
2. World-Model
Degenerate case: stateless world model consisting of just sense inputs.
3. Search Process
Causal decision theory is a search process. “From a fixed list of actions, pick the most positively reinforced” is another. Degenerate case: lookup table from world model to actions.
Note: this says a thermostat is an agent. Not figuratively an agent. Literally technically an agent. Feature not bug.
This is a model that can be applied naturally to a wide range of entities, as seen from the fact that thermostats qualify. And the reason why we tend to automatically think of people—or thermostats—as agents, is that our brains have evolved to naturally model things in terms of this kind of an intentional stance; it’s a way of thought that comes natively to us.
Given that we want to learn to think about humans in a new way, we should look for ways to map the new way of thinking into a native mode of thought. One of my tactics will be to look for parts of the mind that look like they could literally be agents (as in the above technical definition of an agent), so that we can replace our intuitive one-agent model with intuitive multi-agent models without needing to make trade-offs between intuitiveness and truth. This will still be a leaky simplification, but hopefully it will be a more fine-grained leaky simplification, so that overall we’ll be more accurate.
I don’t think that the distinction between “agent” and “rule-based process” really cuts reality at joints; an agent is just any set of rules that we can meaningfully model by taking an intentional stance. A thermostat can be called a set of rules which adjusts the heating up when the temperature is below a certain value, and adjusts the heating down when the temperature is above a certain value; or it can be called an agent which tries to maintain a target temperature by adjusting the heating. Both make the same predictions, they’re just different ways of describing the same thing.
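To make that equivalence concrete, here is a minimal sketch (my own toy code, nothing from the quoted post) of a thermostat written as an “agent” in exactly the quoted sense, with the target temperature as its values, the sense input as its degenerate world model, and a two-rule lookup as its search process:

```python
class Thermostat:
    """A degenerate 'agent' in the quoted sense: values, world model, search process."""

    def __init__(self, target, tolerance=0.5):
        self.target = target          # Values: the temperature it "wants"
        self.tolerance = tolerance

    def act(self, sensed_temp):
        # World model: nothing but the current sense input.
        # Search process: a lookup table from world model to actions.
        if sensed_temp < self.target - self.tolerance:
            return "heat_on"
        if sensed_temp > self.target + self.tolerance:
            return "heat_off"
        return "do_nothing"

# Rule-based description: "turns heating on below 19.5, off above 20.5."
# Intentional-stance description: "it wants the room at 20 degrees and acts to get there."
# Both descriptions predict the same outputs for the same inputs:
t = Thermostat(target=20.0)
print(t.act(18.0), t.act(22.0), t.act(20.2))   # heat_on heat_off do_nothing
```

(The intentional description only misleads where the abstraction leaks, e.g. a broken sensor, which is the objection raised further down in this thread.)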
The frame that I’ve had so far is that of the brain being composed of different subagents with conflicting beliefs. On the other hand, one could argue that the subagent interpretation isn’t strictly necessary for many of the examples that I bring up in this post. One could just as well view my examples as talking about a single agent with conflicting beliefs.
The distinction between these two frames isn’t always entirely clear. In “Complex Behavior from Simple (Sub)Agents”, moridinamael presents a toy model where an agent has different goals. Moving to different locations will satisfy the different goals to a varying extent. The agent generates a list of possible moves and picks the one which will bring some goal the closest to being satisfied.
Is this a unified agent, or one made up of several subagents?
One could argue for either interpretation. On one hand, moridinamael’s post frames the goals as subagents, and they are in a sense competing with each other. On the other hand, the subagents arguably don’t make the final decision themselves: they just report expected outcomes, and then a central mechanism picks a move based on their reports.
This resembles the neuroscience model I discussed in my last post, where different subsystems in the brain submit various action “bids” to the basal ganglia. Various mechanisms then pick a winning bid based on various criteria—such as how relevant the subsystem’s concerns are for the current situation, and how accurate the different subsystems have historically been in their predictions.
Likewise, in extending the model from Consciousness and the Brain for my toy version of the Internal Family Systems model, I postulated a system where various subagents vote for different objects to become the content of consciousness. In that model, the winner was determined by a system which adjusted the vote weights of the different subagents based on various factors.
So, subagents, or just an agent with different goals?
Here I would draw an analogy to parliamentary decision-making. In a sense, a parliament as a whole is an agent. Various members of parliament cast their votes, with “the voting system” then “making the final choice” based on the votes that have been cast. That reflects the overall judgment of the parliament as a whole. On the other hand, for understanding and predicting how the parliament will actually vote in different situations, it is important to model how the individual MPs influence and broker deals with each other.
Likewise, the subagent frame seems most useful when a person’s goals interact in such a way that applying the intentional stance—thinking in terms of the beliefs and goals of the individual subagents—is useful for modeling the overall interactions of the subagents.
For example, in my toy Internal Family Systems model, I noted that reinforcement learning subagents might end up forming something like alliances. Suppose that a robot has a choice between making cookies, poking its finger at a hot stove, or daydreaming. It has three subagents: “cook” wants the robot to make cookies, “masochist” wants to poke the robot’s finger at the stove, and “safety” wants the robot to not poke its finger at the stove.
By default, “safety” is indifferent between “make cookies” and “daydream”, and might cast its votes at random. But when it votes for “make cookies”, then that tends to avert “poke at stove” more reliably than voting for “daydream” does, as “make cookies” is also being voted for by “cook”. Thus its tendency to vote for “make cookies” in this situation gets reinforced.
We can now apply the intentional stance to this situation, and say that “safety” has “formed an alliance” with “cook”, as it correctly “believes” that this will avert masochistic actions. If the subagents are also aware of each other and can predict each other’s actions, then the intentional stance gets even more useful.
Of course, we could just as well apply the purely mechanistic explanation and end up with the same predictions. But the intentional explanation often seems easier for humans to reason with, and helps highlight salient considerations.
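To make that concrete, here is a toy sketch in Python (my own implementation; the weight-update rule and numbers are assumptions, not the actual model from the post): each subagent casts a weighted vote, the heaviest-weighted action wins, and a subagent’s weight for the vote it just cast is reinforced according to how well the outcome satisfied its concern. Nothing in the code plans or negotiates, yet “safety” typically drifts into voting with “cook”:

```python
import random
from collections import defaultdict

ACTIONS = ["make_cookies", "poke_stove", "daydream"]

class Subagent:
    def __init__(self, name, reward_fn):
        self.name = name
        self.reward_fn = reward_fn                  # what this subagent "cares about"
        self.weights = {a: 1.0 for a in ACTIONS}    # learned voting tendencies

    def vote(self):
        # Sample a vote in proportion to the learned weights (keeps some exploration).
        return random.choices(ACTIONS, weights=[self.weights[a] for a in ACTIONS])[0]

    def learn(self, my_vote, chosen_action, lr=0.2):
        # Reinforce the vote just cast, according to how good the outcome was for me.
        new = self.weights[my_vote] + lr * self.reward_fn(chosen_action)
        self.weights[my_vote] = max(0.05, new)

subagents = [
    Subagent("cook",      lambda a: 1.0 if a == "make_cookies" else 0.0),
    Subagent("masochist", lambda a: 1.0 if a == "poke_stove" else 0.0),
    Subagent("safety",    lambda a: -1.0 if a == "poke_stove" else 1.0),
]

for _ in range(3000):
    votes = [(s, s.vote()) for s in subagents]
    tally = defaultdict(float)
    for s, v in votes:
        tally[v] += s.weights[v]            # heavier voters count for more
    chosen = max(tally, key=tally.get)      # the winning "bid" becomes the action
    for s, v in votes:
        s.learn(v, chosen)

safety = subagents[2]
print({a: round(w, 1) for a, w in safety.weights.items()})
# Typically make_cookies >> daydream for "safety": voting with "cook" averted the
# stove most reliably, so that habit got reinforced. That is the "alliance", with
# no planning or negotiation anywhere in the code.
```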
For example, firefighters are called that because “they are willing to let the house burn down to contain the fire”; that is, when they are triggered, they typically act to make the pain stop, without any regard for consequences (such as loss of social standing). At the same time, managers tend to be terrified of exactly the kind of lack of control that’s involved with a typical firefighter response. This makes firefighters and managers typically polarized—mutually opposed—with each other.
In my experience, this distinction merely looks like normal reinforcement: you can be short-term reinforced to do things that are against your interests in the long-term. This happens with virtually every addictive behavior; in fact, Dodes’ theory of addiction is that people feel better the moment they decide to drink, gamble, etc., and it is that decision that is immediately reinforced, while the downsides of the action are still distant. (Indeed, he notes that people often make that decision hours in advance of the actual behavior.)
If we only talked about various behaviors getting reinforced, we wouldn’t predict that the system simultaneously considers a loss of social standing to be a bad thing, and that it also keeps reinforcing behaviors which cause exactly that thing.
On the contrary, contradictions in reinforced behavior are quite normal and expected. Timing and certainty are quite powerful influencers of reinforcement. Also, imitation learning is a thing: we learn from caretakers what to monitor ourselves about and when to punish ourselves… but this has no bearing on what we’ve also been reinforced to actually do. (Think “belief in belief” vs “actual belief”. We profess beliefs verbally about what’s important that are distinct from what we actually, implicitly value or reward.)
So, you can easily get a person who keeps doing something they think is bad and punish themselves for, because they learned from their parents that punishing themselves was a good thing to do. Not because the punishment has any impact on their actual behavior, but because the act of self-punishing is reinforced, either because it reduces the frequency of outside punishment, or because we have hardware whose job it is to learn what our group punishes, so we can punish everyone else for it.
Anyway, it sounds like you think reinforcement has to have some kind of global coherence, but state-dependent memory and context-specific conditioning show that reinforcement learning doesn’t have any notion of global coherence. If you were designing a machine to act like a human, you might try to build in such coherence, but evolution isn’t required to be coherent.
(Indeed, reconsolidation theory shows that coherence testing can only happen with local contradiction, as there’s no global automatic system checking for contradictions!)
you need names for those two categories anyway. So why not go with “manager” and “firefighter” while you’re at it? And sure, you could call it, say, “a response pattern” instead of “part”—but the response pattern is still physically instantiated in some collection of neurons, so it’s not like “part” would be any less correct, or worse at reductionism.
Because the categories are not of two classes of things, but two classes of behavior. If we assume the brain has machinery for them, it is more parsimonious to assume that the brain has two modules or modes that bias behavior in a particular direction based on a specific class of stimuli, with the specific triggers being mediated through the general-purpose learning machinery of the cortex. To assume that there is dedicated neural machinery for each instance of these patterns is not consistent with the ability to wipe them out via reconsolidation.
That is, I’m pretty sure you can’t wipe out physical skills or procedural memory by trivial reconsolidation, but these other types of behavior pattern can be. That suggests that there is not individual hardwired machinery for each instance of a “part” in the IFS model, such that parts do not have physically dedicated storage or true parallel operation like motor skills have.
Compare to say, Satir’s parts model, where the parts were generic roles like Blamer, Placater, etc. We can easily imagine dedicated machinery evolved to perform these functions (and present in other species besides us), with the selection criteria and behavioral details being individually learned. In such a model, one only needs one “manager” module, one “firefighter” module, and so on, to the extent that the behaviors are actually an evolved pattern and not merely an emergent property of reinforced behavior.
I personally believe we have dedicated systems for punishing, protesting, idealistic virtue signalling, ego defense, and so on. These are not “parts” in the sense that they weren’t grown to cope with specific situations, but are more like mental muscles that sit there waiting to be trained as to when and how to act—much like the primate circuitry for learning whether to fear snakes, and which ones, and how to respond to their presence.
An important difference, then, between a model that treats parts as real, vs one that treats “parts” as triggers wired to pre-existing mental muscles, is that in the mental muscles model, you cannot universally prevent that pattern from occurring. There is always a part of you ready to punish or protest, to virtue signal or ego-defend. In addition, it is not possible in such a model for that muscle to ever learn to do something else. All you can do is learn to not use that muscle, and use another one instead!
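Here is a minimal sketch of that distinction in toy Python (the module names, cues, and rules are my own illustrative assumptions, not pjeby’s actual system): the “muscles” are fixed functions that only ever do their one preprogrammed thing, and the only learnable (and reconsolidatable) element is the trigger table that decides which muscle fires, on which cue, aimed at what.

```python
# Fixed, built-in "mental muscles": each one only ever does its one preprogrammed thing.
MUSCLES = {
    "distress_signal":   lambda target: f"signal distress about {target}",
    "self_punishment":   lambda target: f"criticize and punish {target}",
    "autonomous_action": lambda target: f"plan and act on {target}",
}

# Individually learned trigger rules: cue -> (which muscle, aimed at what).
# In this picture, this table is the only thing reconsolidation can rewrite.
trigger_rules = {
    "important deadline approaching": ("self_punishment", "myself, for not being stressed enough"),
    "made a mistake in public":       ("distress_signal", "my standing with the group"),
}

def respond(cue):
    muscle, target = trigger_rules[cue]
    return MUSCLES[muscle](target)

print(respond("important deadline approaching"))
# criticize and punish myself, for not being stressed enough

# Changing the learned rule changes which muscle gets used; it never changes what a muscle does:
trigger_rules["important deadline approaching"] = ("autonomous_action", "an hour of work per day")
print(respond("important deadline approaching"))
# plan and act on an hour of work per day
```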
This distinction alone is huge when you look at IFS’ Exiles. If you have an “exile” that is struggling to be capable of doing something, but only knows how to be in distress, it’s helpful to realize that it’s just the built-in mental muscle of “seeking care via distress”, and that it will never be capable of doing anything else. It’s not the distressed “part”’s job to do things or be capable of things, and never was. That’s the job of the “everyday self”—the set of mental muscles for actual autonomy and action. But as long as someone’s reinforced pattern is to activate the “distress” muscle, then they will feel horrible and helpless and not able to do anything about it.
Resolving this challenge doesn’t require that one “fix” or “heal” a specific “part”, and this is actually a situation where it’s therapeutically helpful to realize there is no such thing as a “part”, and therefore nothing to be healed or fixed! Signaling distress is just something brains do, and it’s not possible for the part of your brain that signals distress to do anything else. You have to use a different part of the brain to do anything else.
The same thing goes for inner criticism: thinking of it as originating from a “part” suggests the idea that perhaps one can somehow placate this part to make it stop criticizing, when it is in fact just triggering the mental muscle of social punishment, aimed at one’s self. The hardware for criticizing and put-downs will always be there, and can’t be gotten rid of. But one can reconsolidate the memories that tell it who’s an acceptable target! (And as a side effect, you’ll become less critical of people doing things similar to you, and less triggered by the behavior in others. Increased compassion comes about automatically, not as a practiced, “fake-it-till-you-make-it” process!)
I’m not saying that you couldn’t express unblending in a non-partisan way. But I’m not sure how you would use it if you didn’t take the frame of parts and unblending from them. To be more explicit, by “use it” here I mean “be able to notice when you have been emotionally triggered, and then get some distance from that emotional reaction in the very moment when you are triggered, being able to see the belief in the underlying schema but neither needing to buy into it nor needing to reject it”.
I think I’ve just presented such an expression. Unblending doesn’t require that you have an individual part for every possible occurrence of behavior, only that you realize that your brain has dedicated machinery for specific classes of behavior. Indeed, I think this is a cleaner way to unblend, since it does not lend itself to stereotyped thoughts of agent-like behavior, such as trying to make an exile feel better or convince a manager you have things under control. It’s validating to realize that as long as you are using the mental muscles of distress or self-punishment or self-promotion to try to accomplish something, it never would have worked, because those muscles do not do anything except the preprogrammed thing they do.
When you try to negotiate with parts, you’re playacting a complicated way to do something that’s much simpler, and hoping that you’ll hit the right combination of stimuli to accidentally accomplish a reconsolidation you could’ve targeted directly in a lot less time.
In IFS, you’re basically trying to provide new models of effective caretaker behavior, in the hope that the person’s brain figures out what rules this new behavior contradicts, and then reconsolidate. But if you directly reconsolidate the situationally-relevant memories of their actual caretaker’s behaviors, you can create an immediate change in how it feels to be one’s self, instead of painstakingly building up a set of rote behaviors and trying to make them feel natural.
I don’t think that the distinction between “agent” and “rule-based process” really cuts reality at joints; an agent is just any set of rules that we can meaningfully model by taking an intentional stance
Except that if you actually want to predict how a thermostat behaves, using the brain’s built-in model of “thing with intentional stance”, you’re making your model worse. If you model the thermostat as, “thing that ‘wants’ the house a certain temperature”, then you’ll be confused when somebody sticks an ice cube or teapot underneath it, or when the temperature sensor breaks.
That’s why the IFS model is bad reductionism: calling things agents brings in connotations that are detrimental to its use as an actual predictive model. To the extent that IFS works, it’s actually accidental side-effects of the therapeutic behavior, rather than directly targeting reconsolidation on the underlying rules.
For example, when you try to do “self-leadership”, what you’re doing is trying to model that behavior through practice while counter-reinforcement is still in place. It’s far more efficient to delete the rules that trigger conflicting behavior before you try to learn self-leadership, so that you aren’t fighting your reinforced behaviors to do so.
So, at least in my experience, the failure of IFS to carve reality at the actual joint of “preprogrammed modules + individually-learned triggers”, makes it more complex, more time-consuming, less effective, and more likely to have unintended side-effects than approaches that directly target the “individually-learned triggers”.
In my own approach, rather than nominalizing behavior into parts, I try to elicit rules—“when this, then that”—and then go after emotionally significant examples, to break down implicit predictions and evaluations in the memory.
For example, my mother yelling at me that I need to take things seriously when I was being too relaxed (for her standards) about something important I needed to do. Though not explicitly stated, the presuppositions of my mother in this memory are that “taking things seriously” requires being stressed about them, and that further, if you don’t do this, then you won’t accomplish your goal or be a responsible person, because obviously, if you cared at all, you would be freaked out.
To reconsolidate, I first establish that these things aren’t actually true, and then predict a mother who realizes these things aren’t true, and ask how she would have behaved if she didn’t believe those things. In my imagination, I realize that she probably would’ve told me she wanted me to work on the thing for an hour a day before dinner, and that she wanted me to show her what I did, so she can track my progress. Then I imagine how my life would’ve been different, growing up with that example.
Boom! Reconsolidation. My whole outlook on accomplishing long-term goals changes instantly from “be stressed until it’s done, or else you’re a bad person” to “work on it regularly and keep an eye on your progress”. I don’t have to practice “self-leadership”, because I now feel different when I think about long-term goals than I did before. Instead of triggering the less-useful muscles of self-punishment, the ones I need are triggered instead.
But if I had tried to model the above pattern as parts… I’m not sure how that would have gone. Probably would’ve made little progress trying to persuade a “manager” based on my mother to act differently if I couldn’t surface the assumptions involved, because any solution that didn’t involve me being stressed would mean I was a bad person.
Sure, in the case of IFS, we can assume that it’s the therapist’s job to be aware of these things and surface the assumptions. But that makes the process dependent on the experiences (and assumptions!) of the therapist… and presumably, a sufficiently-good therapist could use any modality and still get the result they’re after, eventually. So what is IFS adding in that case?
Further, when compared to reconsolidation targeting specific schemas, the IFS process is really indirect. You’re trying to get the brain to learn a new implicit pattern alongside a broken one, hoping the new example(s) won’t simply be filtered into meaninglessness or non-existence when processed through the existing schemas. In contrast, direct reconsolidation goes directly to the source of the issue, and replaces the old implicit pattern with a new one, rather than just giving examples and hoping the brain picks up on the pattern.
(Also notice that in practice, a lot of things IFS calls “parts” as if they were aspects of the client, are in fact mental models of other people, i.e. “what would mom or dad do in this situation?”, as a proxy for “what should I do in this situation?”. Changing the model of what the other people would do or should have done then immediately changes one’s sense of what “I” should do also.)
Anyway, the main part of IFS that I have found useful is merely knowing which behaviors are a good idea for caregivers to exemplify, as this is valuable in knowing what parts of one’s schemas are broken and what they should be changed to. But the actual process of changing them in IFS is really suboptimal compared to directly targeting those schemas… which is more evidence suggesting that IFS as a theory is incorrect, in spite of its successes.
The content of this and the other comment thread seems to be overlapping, so I’ll consolidate (pun intended) my responses to this one. Before we go on, let me check that I’ve correctly understood what I take to be your points.
Does the following seem like a fair summary of what you are saying?
Re: IFS as a reductionist model:
Good reductionism involves breaking down complex things into simpler parts. IFS “breaks down” behavior into mini-people inside our heads, each mini-person being as complex as a full psyche. This isn’t simplifying anything.
Talking about subagents/parts or using intentional language causes people to assign things properties that they actually don’t have. If you say that a thermostat “wants” the temperature to be something in particular, or that a part “wants” to keep you safe, then you will predict its behavior to be more flexible and strategic than it really is.
The real mechanisms behind emotional issues aren’t really doing anything agentic, such as strategically planning ahead for the purpose of achieving a goal. Rather they are relatively simple rules which are used to trigger built-in subsystems that have evolved to run particular kinds of action patterns (punishing, protesting, idealistic virtue signalling, etc.). The various rules in question are built up / selected for using different reinforcement learning mechanisms, and define when the subsystems should be activated (in response to what kind of a cue) and how (e.g. who should be the target of the punishing).
Reinforcement learning does not need to have global coherence. Seemingly contradictory behaviors can be explained by e.g. a particular action being externally reinforced or becoming self-reinforcing, all the while it causes globally negative consequences despite being locally positive.
On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
The assumption of dedicated hardware for each instance of an action pattern is multiplying entities beyond necessity. The kinds of reinforcement learning systems that have been described can generate the same kinds of behaviors with much less dedicated hardware. You just need the learning systems, which then learn rules for when and how to trigger a much smaller number of dedicated subsystems.
The assumption of dedicated hardware for each instance of an action pattern also contradicts the reconsolidation model, because if each part was a piece of built-in hardware, then you couldn’t just entirely change its behavior through changing earlier learning.
Everything in IFS could be described more simply in terms of if-then rules, reinforcement learning etc.; if you do this, you don’t need the metaphor of “parts”, and you also have a more correct model which does actual reduction to simpler components.
Re: the practical usefulness of IFS as a therapeutic approach:
Approaching things from an IFS framework can be useful when working with clients with severe trauma, or other cases when the client is not ready/willing to directly deal with some of their material. However, outside that context (and even within it), IFS has a number of issues which make it much less effective than a non-parts-based approach.
Thinking about experiences like “being in distress” or “inner criticism” as parts that can be changed suggests that one could somehow completely eliminate those. But while triggers to pre-existing brain systems can be eliminated or changed, those brain systems themselves cannot. This means that it’s useless to try to get rid of such experiences entirely. One should rather focus on the memories which shape the rules that activate such systems.
Knowing this also makes it easier to unblend, because you understand that what is activated is a more general subsystem, rather than a very specific part.
If you experience your actions and behaviors being caused by subagents with their own desires, you will feel less in control of your life and more at the mercy of your subagents. This is a nice crutch for people with denial issues who want to disclaim their own desires, but not a framework that would enable you to actually have more control over your life.
“Negotiating with parts” buys into the above denial, and has you do playacting inside your head without really getting into the memories which created the schemas in the first place. If you knew about reconsolidation, you could just target the memories directly, and bypass all of the extra hassle.
“Developing self-leadership” involves practicing a desired behavior so that it could override an old one; this is what Unlocking the Emotional Brain calls a counteractive strategy, and is fragile in all the ways that UtEB describes. It would be much more effective to just use a reconsolidation-based approach.
IFS makes it hard to surface the assumptions behind behavior, because one is stuck in the frame of negotiating with mini-people inside one’s head, rather than looking at the underlying memories and assumptions. Possibly an experienced IFS therapist can help look for those assumptions, but then one might as well use a non-parts-based framework.
Even when the therapist does know what to look for, the fact that IFS does not have a direct model of evidence and counterevidence makes it hard to find the interventions which will actually trigger reconsolidation. Rather one just acts out various behaviors which may trigger reconsolidation if they happen to hit the right pattern.
Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model, and thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this you need to target the problematic schema directly, which requires you to actually know about this kind of a thing and be able to use reconsolidation techniques directly.
Excellent summary! There are a couple of areas where you may have slightly over-stated my claims, though:
IFS “breaks down” behavior into mini-people inside our heads, each mini-person being as complex as a full psyche.
I wouldn’t say that IFS claims each mini-person is equally complex, only that the reduction here is just a separation of goals or concerns, and does not reduce the complexity of having agency. And this is particularly important because it is the elimination of the idea of smart or strategic agency that allows one to actually debug brains.
Compare to programming: when writing a program, one intends for it to behave in a certain way. Yet bugs exist, because the mapping of intention to actual rules for behavior is occasionally incomplete or incorrectly matched to the situation in which the program operates.
But, so long as the programmer thinks of the program as acting according to the programmer’s intention (as opposed to whatever the programmer actually wrote), it is hard for that programmer to actually debug the program. Debugging requires the programmer to discard any mental models of what the program is “supposed to” do, in order to observe what the program is actually doing… which might be quite wrong and/or stupid.
In the same way, I believe that ascribing “agency” to subsets of human behavior is a similar instance of being blinded by an abstraction that doesn’t match the actual thing. We’re made up of lots of code, and our problems can be considered bugs in the code… even if the behavior the code produces was “working as intended” when it was written. ;-)
On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
I don’t claim that IFS assumes dedicated per-instance hardware, but it seems kind of implied. My understanding is that IFS at least assumes that parts are agents that 1) do things, 2) can be conversed with as if they were sentient, and 3) can be reasoned or negotiated with. That’s more than enough to view it as not reducing “agency”.
But the article that we are having this discussion on does try to model a system with dedicated agents actually existing (whether in hardware or software), so at least that model is introducing dedicated entities beyond necessity. ;)
Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model, and thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this you need to target the problematic schema directly, which requires you to actually know about this kind of a thing and be able to use reconsolidation techniques directly.
Technically, it’s possible to change people without intentionally using reconsolidation or a technique that works by directly attempting it. It happens by accident all the time, after all!
And it’s quite possible for an IFS therapist to notice the filtering or distortions taking place, if they’re skilled and paying attention. Presumably, they would assign it to a part and then engage in negotiation or an attempt to “heal” said part, which then might or might not result in reconsolidation.
So I’m not claiming that IFS can’t work in such cases, only that to work, it requires an observant therapist. But such a good therapist could probably get results with any therapy model that gave them sufficient freedom to notice and address the issue, no matter what terminology was used to describe the issue, or the method of addressing it.
As the authors of UTEB put it:
Transformational change of the kind addressed here—the true disappearance of long-standing, distressing emotional learning—of course occurs at times in all sorts of psychotherapies that involve no design or intention to implement the transformation sequence by creating juxtaposition experiences.
After all, reconsolidation isn’t some super-secret special hack or unintended brain exploit, it’s how the brain normally updates its predictive models, and it’s supposed to happen automatically. It’s just that once a model pushes the prior probability of something high (or low) enough, your brain starts throwing out each instance of a conflicting event, even if considered collectively they would be reason to make a major update in the probability.
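A toy numeric illustration of that filtering effect (the numbers are mine, not from UTEB): with an extreme enough prior, each single conflicting observation leaves the posterior almost unchanged, which makes it cheap to dismiss one at a time, even though the same observations taken together would force a large update.

```python
import math

def posterior(prior, likelihood_ratio, n_observations):
    # Standard Bayesian update in log-odds: each observation adds log(likelihood_ratio).
    log_odds = math.log(prior / (1 - prior)) + n_observations * math.log(likelihood_ratio)
    return 1 / (1 + math.exp(-log_odds))

prior = 0.001   # the schema holds the contrary belief at 99.9% confidence
lr = 2.0        # each conflicting event is only twice as likely under the alternative

print(round(posterior(prior, lr, 1), 3))    # ~0.002: one counter-example moves almost nothing
print(round(posterior(prior, lr, 15), 3))   # ~0.97: fifteen of them, taken together, flip it
```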
This is a great comment, and I’m glad you wrote it. I’m rereading it several times over to try and get a handle on everything that you’re saying here.
In particular, I really like the “muscle” vs. “part” distinction. I’ve been pondering lately, when I should just squash an urge or desire, and when I should dialogue with it, and this distinction brings some things into focus.
I have some clarifying questions though:
For example, when you try to do “self-leadership”, what you’re doing is trying to model that behavior through practice while counter-reinforcement is still in place. It’s far more efficient to delete the rules that trigger conflicting behavior before you try to learn self-leadership, so that you aren’t fighting your reinforced behaviors to do so.
I don’t know what you mean by this at all. Can you give (or maybe point to) an example?
---
But if I had tried to model the above pattern as parts… I’m not sure how that would have gone. Probably would’ve made little progress trying to persuade a “manager” based on my mother to act differently if I couldn’t surface the assumptions involved, because any solution that didn’t involve me being stressed would mean I was a bad person.
Sure, in the case of IFS, we can assume that it’s the therapist’s job to be aware of these things and surface the assumptions. But that makes the process dependent on the experiences (and assumptions!) of the therapist… and presumably, a sufficiently-good therapist could use any modality and still get the result they’re after, eventually. So what is IFS adding in that case?
This is fascinating. When I read your stressing out example, my thought was basically “wow. It seems crazy-difficult to surface the core underlying assumptions”.
But you think that this is harder, in the IFS framework. That is amazing, and I want to know more.
In practice, how do you go about eliciting the rules and then emotionally significant instances?
Maybe in the context of this example, how do you get from “I seem to be overly stressed about stuff” to the memory of your mother yelling at you?
---
You’re trying to get the brain to learn a new implicit pattern alongside a broken one, hoping the new example(s) won’t simply be filtered into meaninglessness or non-existence when processed through the existing schemas. In contrast, direct reconsolidation goes directly to the source of the issue, and replaces the old implicit pattern with a new one, rather than just giving examples and hoping the brain picks up on the pattern.
I’m trying to visualize someone doing IFS or IDC, and connect it to what you’re saying here, but so far, I don’t get it.
What are the “examples”? Instances that are counter to the rule / schema of some part? (e.g. some part of me believes that if I ever change my mind about something important, then no one will love me, so I come up with an example of when this isn’t or wasn’t true?)
---
but state-dependent memory and context-specific conditioning show that reinforcement learning doesn’t have any notion of global coherence.
Given that, doesn’t it make sense to break an RL policy down into parts? If different parts of a policy are acting at cross purposes, it seems like it is useful to say “part 1 is doing X-action, and part 2 is doing Y-action.”
...But you would say that it is even better to say “this system, as a whole is doing both X-action, and Y-action”?
I don’t know what you mean by this at all. Can you give (or maybe point to) an example?
So, let’s take the example involving my mother and stressing over deadlines. Until I reconsolidated that belief structure (or hell, since UTEB seems to call it a “schema”, let’s just call it that), I had a schema that said I needed to be stressed out if the goal was serious. I wasn’t aware of that, though: it just seemed like “serious projects are super stressful and I never know what to do”, except wail and grind my teeth (figuratively speaking) until stuff gets done.
Now, I was aware I was stressed, and knew this wasn’t helpful, so I did all sorts of things to calm down. People (like my wife) would tell me everything was fine, I was doing great, go easier/don’t be so hard on yourself, etc. I would try practicing self-compassion, but it didn’t do anything, except maybe momentarily, because structurally, being not-stressed was incompatible with my schema.
In fact, a rather weird thing happened: the more I managed to let go of judgments I had about how well I was doing, and the better I got at being self-compassionate, the worse I felt. It wasn’t the same kind of stress, but it was actually worse, despite being differently flavored. It was like, “you’re not taking this seriously enough” (and implicitly, “you’re an awful person”).
As it happened, the reason I got better at self-compassion was not because I was practicing it as a mode of operation, but because I used my own mindhacking methods to remove the reasons I had for self-judgment. In truth, over the last decade or two I have tried a ridiculous number of self-help and/or therapist-designed exercises intended to send love or compassion to parts or past selves or inner children etc., and what they all had in common was that they almost never clicked for me… and the few times they did, I ended up developing alternative techniques to produce the same kind of result without trying to fake the love, compassion, or care that almost never felt real to me.
In retrospect, it’s easy to see that the reason those particular things clicked is that in trying to understand the perspective from which the exercise(s) were written, I stumbled on contradictions to my existing schema, and thus fixed another way in which I was judging myself (and thus unable to have self-compassion).
Anyway, my point is that most counteractive interventions (to use the term from UTEB) involve a therapist modeling (and coaching the client to enact) helpful carer behavior. If the client’s problem is merely that they aren’t familiar with that type of behavior, then this is merely adding a new skill to their repertoire, and might work nicely.
But, if the person comes from a background where they not only didn’t receive proper care, but were actively taught say, that they were not worth being cared for, that they were bad or selfish for having normal human needs, etc., then this type of training will be counterproductive, because it goes against the client’s schemas, where being good and safe means repressing needs, judging themselves, etc.
As a result, their schema creates either negative reinforcement or neutralizing strategies. They don’t do their assignments, they stop coming to therapy. Or they develop ways to neutralize the contradiction between the schema and the new experience, e.g. by defining it as “unreal”, “you’re being nice because that’s your job”, etc.
Or, there’s the neutralizing strategy I used for many years, which was to frame things in my head as, “okay, so I’m going to be nice to my weak self so that it can shape up and do what it’s supposed to now”. (This one has been popular with some of my clients, too, as it allows you to keep punishing and diminishing yourself in the way you’re used to, while technically still completing the exercises you’re supposed to!)
So these are things that traditional therapists call all sorts of things, like transference and resistance and so on. But these are basically ways to say in effect, “the therapy is working but the client isn’t”.
This is fascinating. When I read your stressing out example, my thought was basically “wow. It seems crazy-difficult to surface the core underlying assumptions”.
But you think that this is harder, in the IFS framework. That is amazing, and I want to know more.
In practice, how do you go about eliciting the rules and then emotionally significant instances?
Maybe in the context of this example, how do you get from “I seem to be overly stressed about stuff” to the memory of your mother yelling at you?
The overall framework I call “Collect, Connect, Correct”, and it’s surprisingly similar to the “ABC123V” framework described in UTEB. (Actually, I guess it shouldn’t be that surprising, since the results they describe from their framework are quite similar to the kind I get.)
In the first stage, I collect information about the when/where/how of the problem, and attempt to pin down a repeatable emotional response, i.e. think about X, get emotional reaction Y. If it’s not repeatable, it’s not testable, which makes things a lot harder.
In the case of being stressed, the way that I got there was that I was lying down one afternoon, trying to take a nap and not being able to relax. When I’d think of trying to let go and actually sleep, I kept thinking something along the lines of, “I should be doing something, not relaxing”.
A side note: my description of this isn’t going to be terribly reliable, due to the phenomenon I call “change amnesia” (which UTEB alludes to in case studies, but doesn’t give a name, at least in the chapters I’ve read so far). Change amnesia is something that happens when you alter a schema. The meanings that you used to ascribe to things stop making sense, and as a result it’s hard to get your mind into the same mindset you used to have, even if it was something you were thinking just minutes before making the change!
So, despite the fact I still remember lying there and trying to go to sleep (as the UTEB authors note, autobiographical memory of events isn’t affected, just the meanings associated with them), I am having trouble reconstructing the mindset I was in, because once I changed the underlying schema, that mindset became alien to me.
Anyway, what I do remember was that I had identified a surface level idea. It was probably something like, “I should be doing something”, but because those words don’t make me feel the sense of urgency they did before, it’s hard to know if I am correctly recalling the exact statement.
But I do remember that the statement was sufficiently well-formed to use The Work on. The Work is a simple process for actually performing reconsolidation, the “123” part of UTEB’s ABC123V framework, or the “Correct” in my Collect-Connect-Correct framework.
But when I got to question 4 of The Work, an objection was raised in my mind. I was imagining not thinking I should be doing something (or whatever the exact statement was), and got a bad feeling, or perhaps a feeling that it wasn’t realistic, something of that sort. A reservation or hesitation at this step of The Work corresponds to what UTEB describes as an objection from another schema, and as in their method, mine calls for switching to eliciting the newly discovered schema instead of continuing with the current one.
So at either that level, or the next level up of “attempt reconsolidation, spot objection, switch”, I had the image or idea come up of my mother being upset with me for not being stressed, and I switched from The Work to my SAMMSA model.
SAMMSA stands for “Surface, Attitude, Model, Mirror, Shadow, Assumptions”, and it is a tool I developed to identify and correct implicit beliefs encoded as part of an emotionally significant memory. It’s especially useful in matters relating to self-image and self-esteem, because AFAICT we learn these things almost entirely through our brain’s interpretation of other people’s behavior towards us.
In the specific instance, the “surface” is what my mother said and did. The Attitude was impatience and anger. The Model was, “when there is something important to be done, the right thing to do is be stressed”. The Mirror was, “if I don’t get you to do this, then you will never learn to take things seriously; you’ll grow up to be careless”. The Shadow (injected into my self-image) was the idea that “you’re irresponsible/uncaring”. And the Assumptions (of my mother) were ideas like “I’m helpless/can’t do anything to make this work”, “somebody needs to do something”, and “it’s a serious emergency for things to not be getting done, or for there to be any problems in the doing”.
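(If it helps to see the shape of the stack, here it is written out as a plain data structure, in Python. This is purely illustrative, not something I actually use in sessions; the class and field names just mirror the mnemonic, and the values paraphrase the example above.)

```python
# Purely illustrative: the SAMMSA "stack" from the example above, written out
# as a plain data structure. The point is just that each layer is a separate,
# individually testable belief, not that anything here is a real implementation.
from dataclasses import dataclass

@dataclass
class SAMMSA:
    surface: str       # what was actually said and done
    attitude: str      # the emotional tone it was delivered with
    model: str         # the other person's implied model of how the world works
    mirror: str        # what they believed about me and my future
    shadow: str        # what got injected into my self-image
    assumptions: list  # their background beliefs driving all of the above

stressed_memory = SAMMSA(
    surface="my mother yelling at me about the thing not getting done",
    attitude="impatience and anger",
    model="when something important needs doing, the right thing to do is be stressed",
    mirror="if I don't make you do this, you'll never learn; you'll grow up careless",
    shadow="you're irresponsible/uncaring",
    assumptions=[
        "I'm helpless / can't do anything to make this work",
        "somebody needs to do something",
        "anything not getting done, or any problem in the doing, is an emergency",
    ],
)
```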
The key with a stack like this is to fix the Shadow first, unless the Assumptions get in the way. Shadow beliefs are things that say what a person is not, and by implication never will be. They tend to lock into place all the linked beliefs, behaviors, and assumptions, like a linchpin for the schema that formed around them.
The contradiction, then, was first to remember and realize that I did care, as a matter of actual fact, and was not intentionally being irresponsible or “bad”. I wanted to get the thing done, and just didn’t know how to go about it. Then, I imagined “how would my mother have acted if she knew that for a fact?”, and from there imagined growing up with her acting that way… which I was surprised to realize could be as simple as her telling me to work on it daily and checking my progress. (I did initially have to work through an objection that she couldn’t just leave me to it, and couldn’t just tell me to work on it and not follow up, but these were pretty straightforward to work out.)
I think I also had some trouble during parts of this due to some of the Assumptions, so I had to deal with a couple of those via The Work. I may also be misremembering the order I did these bits in. (Order isn’t super important as long as you test things to make sure that none of the beliefs seem “real” any more, so you can clean up any that still do.)
Notice, here, the difference between how traditional therapy (IFS included) treats the idea of compassion or loving but firm caregivers, etc., vs the approach I took here. I do not try to act out being compassionate to my younger self or to my self now. I don’t try to construct in my mind some idealized parental figure. Instead, what I did was identify what was broken in (my mental model of) my mother’s beliefs and behavior, and correct that in my mental model of my mother, which is where my previous behavior came from.
This discovery was the result of studying a metric f**k-ton of books on developmental psychology, self-compassion, inner child stuff, shadow psychology, and even IFS. :) I had discovered that sometimes I could change things by reimagining parental behavior more in line with the concepts from those books, but not always. Trying to divine the difference, I finally noticed that the issue was that sometimes I simply could not, no matter how hard I tried, make a particular visualization of caring behavior feel real, and thus trigger a memory mismatch to induce reconsolidation.
What I discovered was that for such visualizations, my brain was subtly twisting the visualizations in such a way as to match a deeper schema—like the idea that I was incompetent or uncaring or unlovable! -- so that even though the imagined parent was superficially acting different, the underlying schema remained unchanged. It was like they were thinking, “well, I guess I’m supposed to be nice like this in order to be a good parent to this loser”. (I’m being flippant here since change amnesia has long since wiped most of the specifics of these scenarios from easy recollection.)
I dubbed this phenomenon “false belief change”, and found my clients did it, too. I initially had a more intuitive and less systematic way of figuring out how to get past it, but in order to teach other people to do it I gradually worked out the SAMMSA mnemonic and framework for pulling out all the relevant bits, and later still came to realize that there are only three fundamental failures of trust that define Shadows, which helps a lot in rapidly pinning them down.
That’s why this huge wall of text I’ve described, for changing how I feel about important, “serious” projects, is something that took maybe 20-30 minutes, including the sobbing and shaking afterward.
(Yeah, that’s a thing that happens, usually when I’m realizing all the sh** I’ve gone through in my life that was completely unnecessary. I assume it’s a sort of “accelerated grief” happening when you notice stuff like, “oh hey, I’ve spent months and years stressing out when I could’ve just worked on it each day and checked on my progress… so much pain and missed opportunities and damaged relationships and...” yeah. It can be intense to do something like that, if it’s been something that affected your life a lot.)
As I said above, I did also have to tackle some of the Assumptions, like not being able to do anything and needing somebody else to do it, that any problem equals an emergency, and so on. These didn’t take very long though, with the schema’s core anchor having been taken out. I think I did one assumption before the shadow, and the rest after, but it’s been a while. Most of the time, Assumptions don’t really show up until you’ve at least started work on fixing the shadow, either blocking it directly, or showing up when you try to imagine what it would’ve been like to grow up with the differently-thinking parent.
When I work with clients, the actual SAMMSA process and reconsolidation is similarly something that can be done in 20-30 minutes, but it may take a couple hours to get up to that point, as the earlier Collect and Connect phases can take a while, getting up to the point where you can surface a relevant memory. I was lucky with the “going to sleep” problem because it was something I had immediate access to: a problem that was actually manifesting in practice. In contrast, with clients it usually takes some time to even pin down the equivalent of “I was trying to get to sleep and kept thinking I should be doing something”, especially since most of the time the original presenting problem is something quite general and abstract.
I also find that individuals vary considerably in how easy it is for them to get to emotionally relevant memories; recently I’ve had a couple of LessWrong readers take up my free session offer, and have been quite surprised at how quickly they were able to surface things. (As it turned out, they both had prior experience with Focusing, which helps a whole heck of a lot!)
The UTEB book describes some things that sound similar to what I do to stimulate access to such memories, e.g. their phrase “symptom deprivation” describes something kind of similar in function and intent to some of my favorite “what if?” questions to ask. And I will admit that there is some degree of art and intuition to it that I have not put into a formal framework (at least yet). But since I tend to develop frameworks in response to trying to teach things, it hasn’t really come up. Example and osmosis has generally sufficed for getting people to get the hang of doing this kind of inward access, once their meta-issue with it (if any) gets pinned down.
What are the “examples”? Instances that are counter to the rule / schema of some part? (e.g. some part of me believes that if I ever change my mind about something important, then no one will love me, so I come up with an example of when this isn’t or wasn’t true?)
I think I’ve answered this above, but in case I haven’t: IFS has the therapist and/or client act out examples of caring behavior, compassion, “self-leadership”, etc. They do this by paying attention, taking parts’ needs seriously, and so on. My prediction is that for some people, some of the time, this would produce results similar to those produced by reconsolidation. Specifically, in the cases where someone doesn’t have a schema silently twisting everything into a “false belief change”, but the behavior they’re shown or taught does contradict one of their problematic schema.
But if the person is internally reframing everything to, “this is just the stupid stuff I have to do to take care of these stupid needy parts”, then no real belief change is taking place, and there will be almost no lasting benefit past the immediate reconciliation of the current conflict being worked on, if it’s even successfully resolved in the first place.
So, I understand that this isn’t what all IFS sources say they are doing. I’m just saying that, whatever you call the process of enacting these attitudes and behaviors in IFS, the only way I would expect it to ever produce any long-term effects is as the result of it being an example that triggers a contradiction in the client’s mental model, and therefore reconsolidation. (And thereby producing “transformative” change, as the UTEB authors call it, as opposed to “counteractive” change, where somebody has to intentionally maintain the counteracting behavior over time in order to sustain the effect.)
Given that, doesn’t it make sense to break an RL policy down into parts? If different parts of a policy are acting at cross purposes, it seems like it is useful to say “part 1 is doing X-action, and part 2 is doing Y-action.”
...But you would say that it is even better to say “this system, as a whole is doing both X-action, and Y-action”?
I don’t know what you mean by “parts” here. But I do focus on the smallest possible things, because it helps to keep an investigation empirically grounded. The only reason I can go from “not wanting to go to sleep” to “my mother thinks I’m irresponsible” with confidence I’m not moving randomly or making things up, is because each step is locally verifiable and reproducible.
It’s true that there are common cycles and patterns of these smaller elements, but I stick as much as possible to dealing in repeatable stimulus-response pairs, i.e., “think about X, get feeling or impression Y”. Or “adjust the phrasing of this idea until it reaches maximum emotional salience/best match with inner feeling”. All of these are empirical, locally-verifiable, and theory-free phenomena.
In contrast, “parts” are something I’ve struggled to work with in a way that allows that kind of definitiveness. In particular, I never found my “parts” to have repeatable behavior, let alone verifiable answers to questions. I could never tell if what I seemed to be getting was real, or was just me imagining/making stuff up. The modality of “state an idea or imagine an action, then notice how I feel”, on the other hand, was eminently repeatable and verifiable. I was able to quickly learn the difference between “having a reaction” and “wondering if I’m reacting”, and was then able to test different change techniques to see what they did. If something couldn’t change the way I automatically responded, I considered it a dud, because I wanted to change me on the inside, not just how I act on the outside. I wanted to feel differently, and once I settled on using this “test-driven” approach, I began to be able to, for the first time in my life.
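(For the programmers reading: here’s roughly what I mean by “test-driven”, sketched as if it were code. It’s only a loose analogy; the function names are made up, and the “measurement” step is a subjective felt report rather than anything a program can actually do for you.)

```python
# A loose analogy only: the "test-driven" loop written as if it were code.
# The stimulus is something you deliberately think or imagine; the "response"
# is the bare automatic feeling or impression that comes back.

def automatic_response(stimulus: str) -> str:
    """Bring the stimulus to mind and report the bare reaction
    ("tight chest", "urge to get up", "nothing"), with no interpretation."""
    return input(f"Think {stimulus!r} -- what happens? ").strip().lower()

def is_repeatable(stimulus: str, trials: int = 3) -> bool:
    """If the same thought doesn't reliably produce the same reaction,
    it isn't testable yet, so go collect more information first."""
    return len({automatic_response(stimulus) for _ in range(trials)}) == 1

def technique_worked(stimulus: str, apply_technique) -> bool:
    """A change technique only counts if the *automatic* response itself
    is different afterward -- not just how I act on the outside."""
    before = automatic_response(stimulus)
    apply_technique()
    return automatic_response(stimulus) != before
```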
So if psychology is alchemy, testing automatic emotional responses is my stab at atomic theory, and I’m working on sketches of parts of the periodic table. (With the caveat that given myself as the primary audience, and my client list being subject to major selection effects, it is entirely possible that the scope of applicability of my work is just smart-but-maybe-too-sensitive, systematically-thinking people with certain types of inferiority complexes. But that worry is considerably reduced by the stuff I’ve read so far in UTEB, whose authors’ work does not appear to have so limited an audience, and whose approach seems fairly congruent with my own.)
This distinction alone is huge when you look at IFS’ Exiles. If you have an “exile” that is struggling to be capable of doing something, but only knows how to be in distress, it’s helpful to realize that it’s just the built-in mental muscle of “seeking care via distress”, and that it will never be capable of doing anything else. It’s not the distressed “part”’s job to do things or be capable of things, and never was. That’s the job of the “everyday self”—the set of mental muscles for actual autonomy and action. But as long as someone’s reinforced pattern is to activate the “distress” muscle, then they will feel horrible and helpless and not able to do anything about it.
I wonder how much of this discussion comes down to a different extensional referent of the word “part”.
According to my view, I would call “the reinforced pattern to activate the ‘distress’ muscle [in some specific set of circumstances]” a part. That’s the thing that I would want to dialogue with.
In contrast, I would not call the “distress muscle” itself a part, because (as you say) the distress muscle doesn’t have anything like “beliefs” that could update.
According to my view, I would call “the reinforced pattern to activate the ‘distress’ muscle [in some specific set of circumstances]” a part. That’s the thing that I would want to dialogue with.
And I don’t understand how you could “dialogue” with such a thing, except in the metaphorical sense where debugging is a “dialogue” with the software or hardware in question. I don’t ask a stimulus-response pattern to explain itself; I dialogue with the client or with my inner experience by trying things or running queries, and the answers I get back are whatever the machine does in response.
I don’t pretend that the behavior pattern is a coherent entity with which I can have a conversation in English, as for me that approach has only ever resulted in confusion, or at best some occasionally good but largely irreproducible results.
And I specifically coach clients not to interpret those responses they get, but just to report the bare fact of what is seen or felt or heard, because the purpose is not to have a conversation but to conduct an investigation or troubleshooting process.
A stimulus-response pattern doesn’t have goals or fears; goals or fears are things we have, that we get from our SR rules as emergent properties. That’s why treating them as intentional agents makes no sense to me: they’re what our agency is made of, but they themselves are not a class of thing that could even comprehend such a thing as the notion of agency.
Schemas are mental models, not utilitarian agents… not even in a theoretical sense! Humans don’t weigh utility, we have an action planner system that queries our predictive model for “what looks like something good to do in this situation”, and whatever comes back fastest tends to win, with emotionally weighted stuff or stuff tagged by certain mental muscles getting wired into faster routes.
To put it another way, I think the thing you’re thinking you can dialogue with is actually a spandrel of sorts, and it’s a higher-level unit than what I work with. IFS, in ascribing intention, necessarily has to look at more complex elements than raw, minuscule, almost “atomic” stimulus-response patterns, because that’s what’s required if you want to make a coherent-sounding model of an entire cycle of symptoms.
In contrast, for me the top-down view of symptom cycles is merely a guide or suggestion to begin an empirical investigation of specific repeatable responses. The larger pattern, after all, is made of things: it doesn’t just exist on its own. It’s made of smaller, simpler things whose behaviors are much more predictable and repeatable. The larger behavior cycles inevitably involve countless minor variations, but the rules that generate the cycles are much more deterministic in nature, making them more amenable to direct hacking.
If IFS said, “brains have modules for these types of mental behavior”, (e.g. hiding, firefighting, etc.), then that would also be a reduction.
I’m not sure why IFS’s exile-manager-firefighter model doesn’t fit this description? E.g. modeling something like my past behavior of compulsive computer gaming as a loop of inner critic manager pointing out that I should be doing something → exile being triggered and getting anxious → gaming firefighter seeking to suppress the anxiety with a game → inner critic manager increasing the level of criticism and triggering the other parts further, has felt like a reduction to simpler components, rather than modeling it as “little people”. They’re basically just simple trigger-action rules too, like “if there is something that Kaj should be doing and he isn’t getting around to doing it, start ramping up an increasing level of reminders”.
There’s also Janina Fisher’s model of IFS parts being linked to various specific defense systems. The way I read the first quote in the linked comment, she does conceptualize IFS parts as something like state-dependent memory; for exiles, this seems like a particularly obvious interpretation even when looking at the standard IFS descriptions of them, which talk about them being stuck at particular ages and events.
but compassion towards a “part” is not really necessary for that, just that one suppress commentary.
Certainly one can get the effect without compassion too, but compassion seems like a particularly effective and easy way of doing it. Especially given that in IFS you just need to ask parts to step aside until you get to Self, and then the compassion is generated automatically.
I’m not sure why IFS’s exile-manager-firefighter model doesn’t fit this description? E.g. modeling something like my past behavior of compulsive computer gaming as a loop of inner critic manager pointing out that I should be doing something → exile being triggered and getting anxious → gaming firefighter seeking to suppress the anxiety with a game → inner critic manager increasing the level of criticism and triggering the other parts further, has felt like a reduction to simpler components, rather than modeling it as “little people”.
Because this description creates a new entity for each thing that happens, such that the total number of entities under discussion is “count(subject matter) times count(strategies)” instead of “count(subject matter) plus count(strategies)”. By simple math, a formulation which uses brain modules for strategies, plus rules they operate on, involves fewer entities than one entity for every rule+strategy combo.
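(To make the counting concrete, with toy numbers invented purely for illustration:)

```python
# Toy numbers only, to make the counting argument concrete.
subjects = ["work", "relationships", "health", "money"]   # kinds of subject matter
strategies = ["criticize", "distract", "hide"]            # kinds of mental behavior

# One "part" per subject-matter/strategy combination:
parts_model_entities = len(subjects) * len(strategies)    # 4 * 3 = 12

# One module per strategy, plus one learned rule per subject it operates on:
modules_plus_rules = len(strategies) + len(subjects)      # 3 + 4 = 7

print(parts_model_entities, modules_plus_rules)           # 12 vs 7
```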
And that’s not even looking at the brain as a whole. If you model “inner criticism” as merely reinforcement-trained internal verbal behavior, you don’t need even one dedicated brain module for inner criticism, let alone one for each kind of thing being criticized!
Similarly, you can model most types of self-distraction behaviors as simple negative reinforcement learning: i.e., they make pain go away, so they’re reinforced. So you get “firefighting” for free as a side-effect of the brain being able to learn from reinforcement, without needing to posit a firefighting agent for each kind of deflecting behavior.
And nowhere in these descriptions is there any implication of agency, which is critical to actually producing a reductionist model of human behavior. Turning a human from one agent into multiple agents doesn’t reduce anything.
Because this description creates a new entity for each thing that happens, such that the total number of entities under discussion is “count(subject matter) times count(strategies)” instead of “count(subject matter) plus count(strategies)”. By simple math, a formulation which uses brain modules for strategies, plus rules they operate on, involves fewer entities than one entity for every rule+strategy combo.
It seems to me that the emotional schemas that Unlocking the Emotional Brain talks about, are basically the same as what IFS calls parts. You didn’t seem to object to the description of schemas; does your objection also apply to them?
IFS in general is very vague about how exactly the parts are implemented on a neural level. It’s not entirely clear to me what kind of a model you are arguing against and what kind of a model you are arguing for instead, but I would think that IFS would be compatible with both.
Similarly, you can model most types of self-distraction behaviors as simple negative reinforcement learning: i.e., they make pain go away, so they’re reinforced. So you get “firefighting” for free as a side-effect of the brain being able to learn from reinforcement
I agree that reinforcement learning definitely plays a role in which parts/behaviors get activated, and discussed that in some of my later posts [12]; but there need to be some innate hardwired behaviors which trigger when the organism is in sufficient pain. An infant which needs help cries; it doesn’t just try out different behaviors until it hits upon one which gets it help and which then gets reinforced.
And e.g. my own compulsive behaviors tend to have very specific signatures which do not fit together with your description; e.g. a desire to keep playing a game can get “stuck on” way past the time when it has stopped being beneficial. Such as when I’ve slept in between and I just feel a need to continue the game as the first thing in the morning, and there isn’t any pain to distract myself from anymore, but the compulsion will produce pain. This is not consistent with a simple “behaviors get reinforced” model, but it is more consistent with a “parts can get stuck on after they have been activated” model.
And nowhere in these descriptions is there any implication of agency, which is critical to actually producing a reductionist model of human behavior.
It seems to me that the emotional schemas that Unlocking the Emotional Brain talks about, are basically the same as what IFS calls parts. You didn’t seem to object to the description of schemas; does your objection also apply to them?
AFAICT, there’s a huge difference between UTEB’s “schema” (a “mental model of how the world functions”, in their words) and IFS’ notion of “agent” or “part”. A “model” is passive: it merely outputs predictions or evaluations, which are then acted on by other parts of the brain. It doesn’t have any goals, it just blindly maps situations to “things that might be good to do or avoid”. An “agent” is implicitly active and goal-seeking, whereas a model is not. “Model” implies a thing that one might change, whereas an “agent” might be required to change itself, if a change is to happen.
UTEB also describes the schema as “wordlessly [defining] how the world is”—which is quite coherent (no pun intended) with my own models of mindhacking. I’m actually looking forward to reading UTEB in full, as the introduction makes it sound like the models I’ve developed of how this stuff works, are quite similar to theirs.
(Indeed, my own approach is specifically targeted at changing implicit mental models of “how things are” or “how the world is”, because that changes lots of behaviors at once, and especially how one feels or relates to the world. So I’m curious to know if they’ve found anything else I might find useful.)
IFS in general is very vague about how exactly the parts are implemented on a neural level. It’s not entirely clear to me what kind of a model you are arguing against and what kind of a model you are arguing for instead, but I would think that IFS would be compatible with both.
What I’m arguing against is a model where patterns of behavior (verbs) are nominalized as nouns. It’s bad enough to think that one has, say, procrastination or akrasia, as if it were a disease rather than a pattern of behavior. But to further nominalize it as an agent trying to accomplish something is going all the way to needless anthropomorphism.
To put it another way, if there are “agents” (things with intention) that cause your behavior, then you are necessarily less at cause and in control of your life. But if you instead have mental models that predict certain behaviors would be a good idea, and so you feel drawn or pushed towards them, then that is a model that still validates your experience, but doesn’t require you to fight or negotiate or whatever. Reconsolidation allows you to be more you, by gaining more choices.
But that’s a values argument. You’re asking what I’m against, and I’m not “against” IFS per se. What I am saying, and have been saying, is that nominalizing behavior patterns as “parts” or “agents” is bad reductionism, independent of its value as a therapeutic metaphor.
Over the course of this conversation, I’ve actually become slightly more open to the use of parts as a metaphor in casual conversation, if only as a stepping stone to discarding it in favor of learned rules and mental muscles.
But, the reason I’m slightly more open to it is exactly the same reason I oppose it!
Specifically, using terms like “part” or “agent” encourages automatic, implicit, anthropomorphic projection of human-like intention and behavior.
This is both bad reductionism and good metaphor. (Well, in the short term, anyway.) As a metaphor, it has certain immediate effects, including retaining disidentification with the problem (and therefore validation of one’s felt lack of agency in the problem area).
But as reductionism, it fails for the very same reason, by not actually reducing the complexity of what is being modeled, due to sneaking in those very same connotations.
Unfortunately, even as a metaphor, I think it’s short-term good, but long-term bad. I have found that people love to make things into parts, precisely because of the good feelings of validation and disidentification, and they have to be weaned off of this in order to make any progress at direct reconsolidation.
In contrast, describing learned rules and mental muscles seems to me to help people with unblending, because of the realization that there’s nothing there—no “agent”, not even themselves(!), who is actually “deciding” or pursuing “goals”. There’s nothing there to be blended with, if it’s all just a collection of rules!
But that’s a discussion about a different topic, really, because as I said from the outset, my issue with IFS is that it’s bad reductionism. And I think this article’s attempt at building IFS’s model from the bottom up fails at reductionism because it’s specifically trying to justify “parts”, rather than looking at what is the minimal machinery needed to produce the observations of IFS, independent of its model. (The article also pushes a viewpoint from design, rather than evolution, further weakening its argument.)
For example, I read Healing The Fragmented Selves Of Trauma Survivors a little over a year ago, and found in it a useful refinement: Fisher described five “roles” that parts play, and one of them was something I’d not accounted for in my rough list of “mental muscles”. But the very fact that you can exhaustively enumerate the roles that parts “play”, strongly suggests that the so-called roles are in fact the thing represented in our hardware, not the “parts”!
In other words, IFS has it precisely backwards: parts don’t “play roles”, mental modules play parts. When viewed from an evolutionary perspective, going the other way makes no sense, especially given that the described functions (fight/vigilance, flight/escape, freeze/fear, submit/shame, attach/needy), are things that are pretty darn universal in mammals.
And e.g. my own compulsive behaviors tend to have very specific signatures which do not fit together with your description; e.g. a desire to keep playing a game can get “stuck on” way past the time when it has stopped being beneficial. Such as when I’ve slept in between and I just feel a need to continue the game as the first thing in the morning, and there isn’t any pain to distract myself from anymore, but the compulsion will produce pain. This is not consistent with a simple “behaviors get reinforced” model, but it is more consistent with a “parts can get stuck on after they have been activated” model.
I think you are confusing reinforcement and logic. Reinforcement learning doesn’t work on logic, it works on discounted rewards. The gaming behavior can easily become intrinsically motivating, due to it having been reinforced by previously reducing pain. (We can learn to like something “for its own sake” precisely because it has helped us avoid pain in the past, and if it produces pleasure, too, all the better!)
However, your anticipation that “continuing to play will cause me pain”, will at best be a discounted future event without the same level of reinforcement power… assuming that that’s really you thinking that at all, and not simply an internal verbal behavior being internally reinforced by a mental model of such worrying being what a “good” or “responsible” person would do! (i.e., internal virtue-signalling)
It is quite possible in my experience to put one’s self through all sorts of mental pain… and still have it feel virtuous, because then at least I care about the right things and am trying to be a responsible person… which then excuses my prior failure while also maintaining hope I can succeed in the future.
And despite these virtue-signaling behaviors seeming to be about the thing you’re doing or not doing, in my experience they don’t really include thinking about the actual problem, and so have even less impact on the outward behavior than one would expect from listening to the supposed subject matter of the inner verbalization(s).
So yeah, reinforcement learning is 100% consistent with the failure modes you describe, once you include:
negative reinforcement (that which gets us away from pain is reinforced)
secondary reinforcement (that which is reinforced, becomes “inherently” rewarding)
discounted reinforcement (that which is near in time and space has more impact than that which is far)
social reinforcement (that which signals virtue may be more reinforcing than actual virtue, due to its lower cost)
verbal behavior (what we say to ourselves or others is subject to reinforcement, independent of any actual meaning ascribed to the content of those verbalizations!)
imitative reinforcement (that which we see others do is reinforced, unless our existing learning tells us the behavior is bad, in which case it is punished instead)
All of these, I believe, are pretty well-documented properties of reinforcement learning, and more than suffice to explain the kinds of failure modes you’ve brought up. Given that they already exist, with all but verbal behavior being near-universal in the animal kingdom, a parsimonious model of human behavior needs to start from these, rather than designing a system from the ground up to account for a specific theory of psychotherapy.
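(If it helps, here’s a deliberately crude toy model of the “stuck-on gaming” case, just to show that discounting plus negative and secondary reinforcement is enough to produce that shape of failure. The numbers and the update rule are invented for illustration; this isn’t a claim about how the brain actually computes anything.)

```python
# Deliberately crude toy model; all numbers and the update rule are invented.
# The point: the immediate relief reinforces the *decision to start playing*,
# while the later costs arrive heavily discounted, so the learned value of
# that decision can stay positive even on mornings when there is no pain
# left to escape from (secondary / "inherent" reward).

GAMMA = 0.5   # steep discounting of delayed consequences
ALPHA = 0.1   # learning rate
value_of_starting_to_play = 0.0

def update(value, immediate, delayed):
    target = immediate + GAMMA * delayed
    return value + ALPHA * (target - value)

# Phase 1: playing makes the bad feeling stop right now (negative
# reinforcement); the costs (lost evening, guilt) only land much later.
for _ in range(100):
    value_of_starting_to_play = update(value_of_starting_to_play,
                                       immediate=+1.0, delayed=-0.5)

# Phase 2: the original pain is gone and playing now even hurts a bit,
# but the learned value only decays gradually -- the urge persists.
for morning in range(5):
    value_of_starting_to_play = update(value_of_starting_to_play,
                                       immediate=-0.2, delayed=-0.5)
    print(morning, round(value_of_starting_to_play, 3))  # still positive
```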
What I’m arguing against is a model where patterns of behavior (verbs) are nominalized as nouns.
Cool. That makes sense.
It’s bad enough to think that one has, say, procrastination or akrasia, as if it were a disease rather than a pattern of behavior. But to further nominalize it as an agent trying to accomplish something is going all the way to needless anthropomorphism.
Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of “akrasia” and they’ll conceptualize it, more or less, as “my system 1 is stupid and doesn’t understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing.”
And then I might suggest that they try on the frame where “the akrasia part”, is actually an intelligent “agent” trying to optimize for their own goals (instead of a foreign, stupid entity, that they have to subdue). If the akrasia was actually right, why would that be?
And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.
[I’m obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]
That is, in practice, the “part” or “subagent” framing helps at least some people to own their desires more, not less.
[I do want to note that you explicitly said, “What I am saying, and have been saying, is that nominalizing behavior patterns as “parts” or “agents” is bad reductionism, independent of its value as a therapeutic metaphor.”]
---
To put it another way, if there are “agents” (things with intention) that cause your behavior, then you are necessarily less at cause and in control of your life.
This doesn’t seem right in my personal experience, because the “agents” are all me. I’m conceptualizing the parts of myself as separate from each other, because it’s easier to think about that way, but I’m not disowning or disassociating from any of them. It’s all me.
Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of “akrasia” and they’ll conceptualize it, more or less, as “my system 1 is stupid and doesn’t understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing.”
So my response to that is to say, “ok, let’s get empirical about that. When does this happen, exactly? If you think about working harder right now, what happens?” Or, “What happens if you don’t work harder at your job?”
In other words, I immediately try to drop to a stimulus-response level, and reject all higher-level interpretive frameworks, except insofar as they give me ideas of where to drop my depth charges, so to speak. :)
And then I might suggest that they try on the frame where “the akrasia part”, is actually an intelligent “agent” trying to optimize for their own goals (instead of a foreign, stupid entity, that they have to subdue). If the akrasia was actually right, why would that be?
I usually don’t bring that kind of thing up until a point has been reached where the client can see that empirically. For example, if I’ve asked them to imagine what happens if they get their wish and are now working harder at their job… and they notice that they feel awful or whatever. And then I don’t need to address the intentionality at all.
And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.
And sometimes, the real problem has nothing to do with the work and everything to do with a belief that they aren’t a good person unless they work more, so it doesn’t matter how terrible it is… but also, the very fact that they’re guilty about not working more may be precisely the thing they’re avoiding by not working!
In other words, sometimes an intentional model fails because brains are actually pretty stupid, and have design flaws such that trying to view them as having sensible or coherent goals simply doesn’t work.
For example, our action planning subsystem is really bad at prioritizing between things we feel good about doing vs. things we feel bad about not doing. It wants to avoid the things we feel bad about not doing, because when we think about them, we feel bad. That part of our brains doesn’t understand things like “logical negation” or “implicative reasoning”, it just processes things based on their emotional tags. (i.e., “bad = run away”)
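(A caricature, with made-up numbers, just to show the shape of the failure I mean:)

```python
# A caricature only: an "action planner" that ranks options purely by the
# emotional tag attached to *thinking about* them. It has no notion of
# logical negation, so "feels bad" always just means "move away from it",
# even when the bad feeling is about *not* doing the thing.

felt_tags = {
    "work on the overdue report": -0.8,      # thinking of it feels bad (guilt)
    "tidy the desk": +0.2,
    "check the forum one more time": +0.5,
}

def pick_action(tags):
    # Whatever feels best when briefly imagined wins; the guilt about the
    # report makes the planner avoid the report, not do it.
    return max(tags, key=tags.get)

print(pick_action(felt_tags))   # -> "check the forum one more time"
```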
[I’m obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]
And I’m also not saying I never do anything that’s a modeling of intention. But I get there bottom-up, not top-down, and it only comes up in a few places.
Also, most of the intentional models I use are for things that pass through the brain’s intention-modeling system: i.e., our mental models of what other people think/thought about us!
For example, the SAMMSA pattern is all about pulling that stuff out, as is the MTF pattern (“meant to feel/made to feel”—a subset of SAMMSA dealing with learnings of how others intend for us to feel in certain circumstances).
The only other place I use quasi-intentional frames is in describing the evolutionary function or “intent” of our brain modules. For example, distress behavior is “intended” to generate caring responses from parents. But this isn’t about what the person intends, it’s about what their brain is built to do. When you were a crying baby, “you” didn’t even have anything that qualifies as intention yet, so how could we say you had a part with that intention?
And even then, I’m treating it as, “in this context, this behavior pattern would produce this result” (producing reinforcement or gene propagation), not “this thing is trying to produce this result, so it has this behavior pattern in this context.” Given the fact that my intention is always to reduce to the actual “wires” or “lines of code” producing a problem, intention modeling is going in the wrong direction most of the time.
My analogy about confusing a thermostat with something hot or cold underneath speaks to why: unlike IFS, I don’t assume that parts have positive, functional intentions, even if they arose out of the positive “design intentions” of the system as a whole. After all, the plan for achieving that original “intention” may no longer be valid! (insofar as there even was one to begin with.)
That’s why I don’t think of the thermostat as being something that “wants” temperature, because it would distract me from actually looking at the wall and the wiring and the sensors, which is the only way I can be certain that I’m always getting closer to a solution rather than guessing or going in circles. (That is, by always working with things I can test, like a programmer debugging a program. Rerunning it and inspecting, putting in different data values and seeing how the behavior changes, and so on.)
I don’t think IFS is good reductionism, though. That is, presupposing subagents in general is not a reduction in complexity from “you’re an agent”. That’s not actually reducing anything! It’s just multiplying entities contra Occam.
Now, if IFS said, “these are specific subagents that basically everyone has, that bias towards learning specific types of evolutionarily advantaged behavior”, then that would actually be a reduction.
If IFS said, “brains have modules for these types of mental behavior”, (e.g. hiding, firefighting, etc.), then that would also be a reduction.
But dividing people into lots of mini-people isn’t a reduction.
The way I reduce the same landscape of things is to group functional categories of mental behavior as standard modules, and treat the specific things people are reacting to and the actual behaviors as data those modules operate on. This model doesn’t require any sort of agency, because it’s just rules and triggers. (And things like “critical voices” are just triggered mental behavior or memories, not actual agents, which is why they can often be disrupted by changing the way they sound—e.g. making them seductive or squeaky—while keeping the content the same. If there were an “agent” desiring to criticize, this technique wouldn’t make any sense.)
As for compassion, the equivalent in what I’m doing would be the connect stage in collect/connect/correct:
“Collect” is getting information about the problem, determining a specific trigger and automatic emotional response (the thing that we will test post-reconsolidation to ensure we got it)
“Connect” is surfacing the inner experience and memory or belief that drives the response, either as a prediction or learned evaluation
“Correct” is the reconsolidation part: establishing contradiction and generating new predictions, before checking if the automatic response from “Collect” changed
All of these require one to be able to objectively observe and communicate inner experience without any meta-level processing (e.g. judging, objecting, explaining, justifying, etc.), but compassion towards a “part” is not really necessary for that, just that one suppress commentary. (There are some specific techniques that involve compassion in the “Correct” phase, but that’s really part of creating a contradiction, not part of eliciting information.)
With respect to trauma and DID, I will only say that again, the subagent model is not reduction, because it doesn’t break things down into simpler elements. (In contrast, the concept of state-specific memory is a simpler element that can be used in modeling trauma and DID.)
(adding to my other comment)
And like, the post you’re responding to just spent several thousand words building up a version of IFS which explicitly doesn’t have “mini-people” and where the subagents are much closer to something like reinforcement learning agents which just try to prevent/achieve something by sending different objects to consciousness, and learn based on their success in doing so...
The presented model of Exiles, Managers, Firefighters, etc. all describes “parts” doing things, but the same ideas can be expressed without using the idea of “parts”, which makes that idea redundant.
For example, here is a simpler description of the same categories of behavior:
Voila! The same three things (Exile, Firefighter, Manager), described in less text and without the need for a concept of “parts”. I’m not saying this model is right and the IFS model is wrong, just that IFS isn’t very good at reductionism and fails Occam’s razor because it literally multiplies entities beyond necessity.
From this discussion and the one on reconsolidation, I would hazard a guess that to the extent IFS is more useful than some non-parts-based (non-partisan?) approach, it is because one’s treatment of the “parts” (e.g. with compassion) can potentially trigger a contradiction and therefore reconsolidation. (I would hypothesize, though, that in most cases this is a considerably less efficient way to do it than directly going after the actual reconsolidation.)
Also, as I mentioned earlier, there are times when the UTE (thing we’re Unwilling To Experience) is better kept conceptually dissociated rather than brought into the open, and in such a case the model of “parts” is a useful therapeutic metaphor.
But “therapeutic metaphor” and “reductionist model” are not the same thing. IFS has a useful metaphor—in some contexts—but AFAICT it is not a very good model of behavior, in the reductionist sense of modeling.
If I try to steelman this argument, I have to taboo “agent”, since otherwise the definition of subagent is recursive and non-reductionistic. I can taboo it to “thing”, in which case I get “things which just try to prevent/achieve something”, and now I have to figure out how to reduce “try”… do they try iteratively? When do they try? How do they know what to try?
As far as I can tell, the answers to all the important questions for actual understanding are pure handwavium here. And the numerical argument still stands, since new “things” are proposed for each group of things to prevent or achieve, rather than (say) a single “thing” whose purpose is to “prevent” other things, and one whose purpose is to “achieve” them.
If it was just that brief description, then sure, the parts metaphor would be unnecessary. But the IFS model contains all kinds of additional predictions and applications which make further use of those concepts.
For example, firefighters are called that because “they are willing to let the house burn down to contain the fire”; that is, when they are triggered, they typically act to make the pain stop, without any regard for consequences (such as loss of social standing). At the same time, managers tend to be terrified of exactly the kind of lack of control that’s involved with a typical firefighter response. This makes firefighters and managers typically polarized—mutually opposed—with each other.
Now, it’s true that you don’t need to use the “part” expression for explaining this. But if we only talked about various behaviors getting reinforced, we wouldn’t predict that the system simultaneously considers a loss of social standing to be a bad thing, and that it also keeps reinforcing behaviors which cause exactly that thing. Now, obviously it can still be explained in a more sophisticated reinforcement model, in which you talk about e.g. differing prioritizations in different situations, and some behavioral routines kicking in at different situations...
...but if at the end, this comes down to there being two distinct kinds of responses depending on whether you are trying to avoid a situation or are already in it, then you need names for those two categories anyway. So why not go with “manager” and “firefighter” while you’re at it?
And sure, you could call it, say, “a response pattern” instead of “part”—but the response pattern is still physically instantiated in some collection of neurons, so it’s not like “part” would be any less correct, or worse at reductionism. Either way, you still get a useful model of how those patterns interact to cause different kinds of behavior.
I agree that the practical usefulness of IFS is distinct from the question of whether it’s a good model of behavior.
That said, if we are also discussing the benefits of IFS as a therapeutic method, then what you said is one aspect of what I think makes it powerful. Another is its conception of Self and unblending from parts.
I have had situations where for instance, several conflicting thoughts are going around my head, and identifying with all of them at the same time feels like I’m being torn into several different directions. However, then I have been able to unblend from each part, go into Self, and experience myself as listening to the concerns of the parts while being personally in Self; in some situations, I have been able to facilitate a dialogue and then feel fine.
IFS also has the general thing of “fostering Self-Leadership”, where parts are gradually convinced to remain slightly on the side as advisors, while keeping Self in control of things at all times. The narrative is something like, this can only happen if the Self is willing to take the concerns of _all_ the parts into account. The system learns to increasingly give the Self leadership, not because they would agree that the Self’s values would be better than theirs, but because they come to trust the Self as a leader which does its best to fulfill the values of all the parts. And this trust is only possible because the Self is the only part of the system which doesn’t have its own agenda, except for making sure that every part gets what it wants.
This is further facilitated by there being distinctive qualities of being in Self, and IFS users developing a “parts detector” which lets them notice when parts have been triggered, helping them unblend and return back to Self.
I’m not saying that you couldn’t express unblending in a non-partisan way. But I’m not sure how you would use it if you didn’t take the frame of parts and unblending from them. To be more explicit, by “use it” here I mean “be able to notice when you have been emotionally triggered, and then get some distance from that emotional reaction in the very moment when you are triggered, being able to see the belief in the underlying schema but neither needing to buy into it nor needing to reject it”.
(But of course, as you said, this is a digression to whether IFS is a useful mindhacking tool, which is distinct from the question of whether it’s good reductionism.)
I said a few words about my initial definition of agent in the sequence introduction:
I don’t think that the distinction between “agent” and “rule-based process” really cuts reality at joints; an agent is just any set of rules that we can meaningfully model by taking an intentional stance. A thermostat can be called a set of rules which adjusts the heating up when the temperature is below a certain value, and adjusts the heating down when the temperature is above a certain value; or it can be called an agent which tries to maintain a target temperature by adjusting the heating. Both make the same predictions, they’re just different ways of describing the same thing.
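(To make the thermostat example concrete, here is that same rule written out as a trivial sketch, with arbitrary numbers; the mechanical and the intentional description both predict exactly what this code does.)

```python
# A trivial sketch with arbitrary numbers. Described mechanically, this is
# "a rule that turns the heating on below the target and off above it";
# described intentionally, it is "an agent trying to keep the room at 21
# degrees". Both descriptions predict exactly the same behavior.

TARGET = 21.0
DEADBAND = 0.5

def thermostat_step(current_temp: float, heating_on: bool) -> bool:
    if current_temp < TARGET - DEADBAND:
        return True       # "it wants to warm the room up"
    if current_temp > TARGET + DEADBAND:
        return False      # "it wants to cool back down"
    return heating_on     # close enough: leave things as they are

print(thermostat_step(19.0, heating_on=False))  # True
print(thermostat_step(23.0, heating_on=True))   # False
```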
Or as I discussed in “Integrating disagreeing subagents”:
In my experience, this distinction merely looks like normal reinforcement: you can be short-term reinforced to do things that are against your interests in the long-term. This happens with virtually every addictive behavior; in fact, Dodes’ theory of addiction is that people feel better the moment they decide to drink, gamble, etc., and it is that decision that is immediately reinforced, while the downsides of the action are still distant. (Indeed, he notes that people often make that decision hours in advance of the actual behavior.)
On the contrary, contradictions in reinforced behavior are quite normal and expected. Timing and certainty are quite powerful influencers of reinforcement. Also, imitation learning is a thing: we learn from caretakers what to monitor ourselves about and when to punish ourselves… but this has no bearing on what we’ve also been reinforced to actually do. (Think “belief in belief” vs “actual belief”. We profess beliefs verbally about what’s important that are distinct from what we actually, implicitly value or reward.)
So, you can easily get a person who keeps doing something they think is bad and punish themselves for, because they learned from their parents that punishing themselves was a good thing to do. Not because the punishment has any impact on their actual behavior, but because the act of self-punishing is reinforced, either because it reduces the frequency of outside punishment, or because we have hardware whose job it is to learn what our group punishes, so we can punish everyone else for it.
Anyway, it sounds like you think reinforcement has to have some kind of global coherence, but state-dependent memory and context-specific conditioning show that reinforcement learning doesn’t have any notion of global coherence. If you were designing a machine to act like a human, you might try to build in such coherence, but evolution isn’t required to be coherent. (Indeed, reconsolidation theory shows that coherence testing can only happen with local contradiction, as there’s no global automatic system checking for contradictions!)
Because the categories are not two classes of things, but two classes of behavior. If we assume the brain has machinery for them, it is more parsimonious to assume that the brain has two modules or modes that bias behavior in a particular direction based on a specific class of stimuli, with the specific triggers being mediated through the general-purpose learning machinery of the cortex. To assume that there is dedicated neural machinery for each instance of these patterns is not consistent with the ability to wipe them out via reconsolidation.
That is, I’m pretty sure you can’t wipe out physical skills or procedural memory by trivial reconsolidation, but these other types of behavior pattern can be. That suggests that there is not individual hardwired machinery for each instance of a “part” in the IFS model, such that parts do not have physically dedicated storage or true parallel operation like motor skills have.
Compare to, say, Satir’s parts model, where the parts were generic roles like Blamer, Placater, etc. We can easily imagine dedicated machinery evolved to perform these functions (and present in other species besides us), with the selection criteria and behavioral details being individually learned. In such a model, one only needs one “manager” module, one “firefighter” module, and so on, to the extent that the behaviors are actually an evolved pattern and not merely an emergent property of reinforced behavior.
I personally believe we have dedicated systems for punishing, protesting, idealistic virtue signalling, ego defense, and so on. These are not “parts” in the sense that they weren’t grown to cope with specific situations, but are more like mental muscles that sit there waiting to be trained as to when and how to act—much like the primate circuitry for learning whether to fear snakes, and which ones, and how to respond to their presence.
An important difference, then, between a model that treats parts as real, vs one that treats “parts” as triggers wired to pre-existing mental muscles, is that in the mental muscles model, you cannot universally prevent that pattern from occurring. There is always a part of you ready to punish or protest, to virtue signal or ego-defend. In addition, it is not possible in such a model for that muscle to ever learn to do something else. All you can do is learn to not use that muscle, and use another one instead!
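Here is a rough sketch of that picture in code (purely illustrative: the module names, trigger rules, and the lookup-table framing are made up for the example, not a claim about neural implementation):

```python
# A few built-in "mental muscles", many individually learned trigger rules.
MODULES = {
    "punish":   lambda target: f"criticize {target}",
    "distress": lambda target: f"signal distress about {target}",
    "promote":  lambda target: f"signal virtue about {target}",
}

# Learned rules map situations to (module, target) pairs. Reconsolidation can
# rewrite or delete these rules, but the modules themselves never go away.
learned_rules = [
    {"when": "missed a deadline",   "module": "punish",   "target": "myself"},
    {"when": "big project looming", "module": "distress", "target": "the project"},
]

def react(situation: str):
    return [MODULES[r["module"]](r["target"])
            for r in learned_rules if r["when"] == situation]

print(react("missed a deadline"))  # ['criticize myself']
```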
This distinction alone is huge when you look at IFS’ Exiles. If you have an “exile” that is struggling to be capable of doing something, but only knows how to be in distress, it’s helpful to realize that it’s just the built-in mental muscle of “seeking care via distress”, and that it will never be capable of doing anything else. It’s not the distressed “part”’s job to do things or be capable of things, and never was. That’s the job of the “everyday self”—the set of mental muscles for actual autonomy and action. But as long as someone’s reinforced pattern is to activate the “distress” muscle, then they will feel horrible and helpless and not able to do anything about it.
Resolving this challenge doesn’t require that one “fix” or “heal” a specific “part”, and this is actually a situation where it’s therapeutically helpful to realize there is no such thing as a “part”, and therefore nothing to be healed or fixed! Signaling distress is just something brains do, and it’s not possible for the part of your brain that signals distress to do anything else. You have to use a different part of the brain to do anything else.
The same thing goes for inner criticism: thinking of it as originating from a “part” suggests the idea that perhaps one can somehow placate this part to make it stop criticizing, when it is in fact just triggering the mental muscle of social punishment, aimed at one’s self. The hardware for criticizing and put-downs will always be there, and can’t be gotten rid of. But one can reconsolidate the memories that tell it who’s an acceptable target! (And as a side effect, you’ll become less critical of people doing things similar to you, and less triggered by the behavior in others. Increased compassion comes about automatically, not as a practiced, “fake-it-till-you-make-it” process!)
I think I’ve just presented such an expression. Unblending doesn’t require that you have an individual part for every possible occurrence of behavior, only that you realize that your brain has dedicated machinery for specific classes of behavior. Indeed, I think this is a cleaner way to unblend, since it does not lend itself to stereotyped thoughts of agent-like behavior, such as trying to make an exile feel better or convince a manager you have things under control. It’s validating to realize that as long as you are using the mental muscles of distress or self-punishment or self-promotion to try to accomplish something, it never would have worked, because those muscles do not do anything except the preprogrammed thing they do.
When you try to negotiate with parts, you’re playacting a complicated way to do something that’s much simpler, and hoping that you’ll hit the right combination of stimuli to accidentally accomplish a reconsolidation you could’ve targeted directly in a lot less time.
In IFS, you’re basically trying to provide new models of effective caretaker behavior, in the hope that the person’s brain figures out what rules this new behavior contradicts, and then reconsolidate. But if you directly reconsolidate the situationally-relevant memories of their actual caretaker’s behaviors, you can create an immediate change in how it feels to be one’s self, instead of painstakingly building up a set of rote behaviors and trying to make them feel natural.
Except that if you actually want to predict how a thermostat behaves, using the brain’s built-in model of “thing with intentional stance”, you’re making your model worse. If you model the thermostat as, “thing that ‘wants’ the house a certain temperature”, then you’ll be confused when somebody sticks an ice cube or teapot underneath it, or when the temperature sensor breaks.
That’s why the IFS model is bad reductionism: calling things agents brings in connotations that are detrimental to its use as an actual predictive model. To the extent that IFS works, it’s actually accidental side-effects of the therapeutic behavior, rather than directly targeting reconsolidation on the underlying rules.
For example, when you try to do “self-leadership”, what you’re doing is trying to model that behavior through practice while counter-reinforcement is still in place. It’s far more efficient to delete the rules that trigger conflicting behavior before you try to learn self-leadership, so that you aren’t fighting your reinforced behaviors to do so.
So, at least in my experience, the failure of IFS to carve reality at the actual joint of “preprogrammed modules + individually-learned triggers”, makes it more complex, more time-consuming, less effective, and more likely to have unintended side-effects than approaches that directly target the “individually-learned triggers”.
In my own approach, rather than nominalizing behavior into parts, I try to elicit rules—“when this, then that”—and then go after emotionally significant examples, to break down implicit predictions and evaluations in the memory.
For example, my mother yelling at me that I need to take things seriously when I was being too relaxed (for her standards) about something important I needed to do. Though not explicitly stated, the presuppositions of my mother in this memory are that “taking things seriously” requires being stressed about them, and that further, if you don’t do this, then you won’t accomplish your goal or be a responsible person, because obviously, if you cared at all, you would be freaked out.
To reconsolidate, I first establish that these things aren’t actually true, and then predict a mother who realizes these things aren’t true, and ask how she would have behaved if she didn’t believe those things. In my imagination, I realize that she probably would’ve told me she wanted me to work on the thing for an hour a day before dinner, and that she wanted me to show her what I did, so she can track my progress. Then I imagine how my life would’ve been different, growing up with that example.
Boom! Reconsolidation. My whole outlook on accomplishing long-term goals changes instantly from “be stressed until it’s done, or else you’re a bad person” to “work on it regularly and keep an eye on your progress”. I don’t have to practice “self-leadership”, because I now feel different when I think about long-term goals than I did before. Instead of triggering the less-useful muscles of self-punishment, the ones I need are triggered instead.
But if I had tried to model the above pattern as parts… I’m not sure how that would have gone. Probably would’ve made little progress trying to persuade a “manager” based on my mother to act differently if I couldn’t surface the assumptions involved, because any solution that didn’t involve me being stressed would mean I was a bad person.
Sure, in the case of IFS, we can assume that it’s the therapist’s job to be aware of these things and surface the assumptions. But that makes the process dependent on the experiences (and assumptions!) of the therapist… and presumably, a sufficiently-good therapist could use any modality and still get the result they’re after, eventually. So what is IFS adding in that case?
Further, when compared to reconsolidation targeting specific schemas, the IFS process is really indirect. You’re trying to get the brain to learn a new implicit pattern alongside a broken one, hoping the new example(s) won’t simply be filtered into meaninglessness or non-existence when processed through the existing schemas. In contrast, direct reconsolidation goes directly to the source of the issue, and replaces the old implicit pattern with a new one, rather than just giving examples and hoping the brain picks up on the pattern.
(Also notice that in practice, a lot of things IFS calls “parts” as if they were aspects of the client, are in fact mental models of other people, i.e. “what would mom or dad do in this situation?”, as a proxy for “what should I do in this situation?”. Changing the model of what the other people would do or should have done then immediately changes one’s sense of what “I” should do also.)
Anyway, the main part of IFS that I have found useful is merely knowing which behaviors are a good idea for caregivers to exemplify, as this is valuable in knowing what parts of one’s schemas are broken and what they should be changed to. But the actual process of changing them in IFS is really suboptimal compared to directly targeting those schemas… which is more evidence suggesting that IFS as a theory is incorrect, in spite of its successes.
The content of this and the other comment thread seems to be overlapping, so I’ll consolidate (pun intended) my responses to this one. Before we go on, let me check that I’ve correctly understood what I take to be your points.
Does the following seem like a fair summary of what you are saying?
Re: IFS as a reductionist model:
Good reductionism involves breaking down complex things into simpler parts. IFS “breaks down” behavior into mini-people inside our heads, each mini-person being just as complex as a full psyche. This isn’t simplifying anything.
Talking about subagents/parts or using intentional language causes people to assign things properties that they actually don’t have. If you say that a thermostat “wants” the temperature to be something in particular, or that a part “wants” to keep you safe, then you will predict its behavior to be more flexible and strategic than it really is.
The real mechanisms behind emotional issues aren’t really doing anything agentic, such as strategically planning ahead for the purpose of achieving a goal. Rather they are relatively simple rules which are used to trigger built-in subsystems that have evolved to run particular kinds of action patterns (punishing, protesting, idealistic virtue signalling, etc.). The various rules in question are built up / selected for using different reinforcement learning mechanisms, and define when the subsystems should be activated (in response to what kind of a cue) and how (e.g. who should be the target of the punishing).
Reinforcement learning does not need to have global coherence. Seemingly contradictory behaviors can be explained by e.g. a particular action being externally reinforced or becoming self-reinforcing, all the while it causes globally negative consequences despite being locally positive.
On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
The assumption of dedicated hardware for each instance of an action pattern is multiplying entities beyond necessity. The kinds of reinforcement learning systems that have been described can generate the same kinds of behaviors with much less dedicated hardware. You just need the learning systems, which then learn rules for when and how to trigger a much smaller number of dedicated subsystems.
The assumption of dedicated hardware for each instance of an action pattern also contradicts the reconsolidation model, because if each part was a piece of built-in hardware, then you couldn’t just entirely change its behavior through changing earlier learning.
Everything in IFS could be described more simply in terms of if-then rules, reinforcement learning etc.; if you do this, you don’t need the metaphor of “parts”, and you also have a more correct model which does actual reduction to simpler components.
Re: the practical usefulness of IFS as a therapeutic approach:
Approaching things from an IFS framework can be useful when working with clients with severe trauma, or other cases when the client is not ready/willing to directly deal with some of their material. However, outside that context (and even within it), IFS has a number of issues which make it much less effective than a non-parts-based approach.
Thinking about experiences like “being in distress” or “inner criticism” as parts that can be changed suggests that one could somehow completely eliminate those. But while triggers to pre-existing brain systems can be eliminated or changed, those brain systems themselves cannot. This means that it’s useless to try to get rid of such experiences entirely. One should rather focus on the memories which shape the rules that activate such systems.
Knowing this also makes it easier to unblend, because you understand that what is activated is a more general subsystem, rather than a very specific part.
If you experience your actions and behaviors being caused by subagents with their own desires, you will feel less in control of your life and more at the mercy of your subagents. This is a nice crutch for people with denial issues who want to disclaim their own desires, but not a framework that would enable you to actually have more control over your life.
“Negotiating with parts” buys into the above denial, and has you do playacting inside your head without really getting into the memories which created the schemas in the first place. If you knew about reconsolidation, you could just target the memories directly, and bypass all of the extra hassle.
“Developing self-leadership” involves practicing a desired behavior so that it could override an old one; this is what Unlocking the Emotional Brain calls a counteractive strategy, and is fragile in all the ways that UtEB describes. It would be much more effective to just use a reconsolidation-based approach.
IFS makes it hard to surface the assumptions behind behavior, because one is stuck in the frame of negotiating with mini-people inside one’s head, rather than looking at the underlying memories and assumptions. Possibly an experienced IFS therapist can help look for those assumptions, but then one might as well use a non-parts-based framework.
Even when the therapist does know what to look for, the fact that IFS does not have a direct model of evidence and counterevidence makes it hard to find the interventions which will actually trigger reconsolidation. Rather one just acts out various behaviors which may trigger reconsolidation if they happen to hit the right pattern.
Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in the light of its existing model, and thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this you need to target the problematic schema directly, which requires you to actually know about this kind of a thing and be able to use reconsolidation techniques directly.
Excellent summary! There are a couple of areas where you may have slightly over-stated my claims, though:
I wouldn’t say that IFS claims each mini-person is equally complex, only that the reduction here is just a separation of goals or concerns, and does not reduce the complexity of having agency. And this is particularly important because it is the elimination of the idea of smart or strategic agency that allows one to actually debug brains.
Compare to programming: when writing a program, one intends for it to behave in a certain way. Yet bugs exist, because the mapping of intention to actual rules for behavior is occasionally incomplete or incorrectly matched to the situation in which the program operates.
But, so long as the programmer thinks of the program as acting according to the programmer’s intention (as opposed to whatever the programmer actually wrote), it is hard for that programmer to actually debug the program. Debugging requires the programmer to discard any mental models of what the program is “supposed to” do, in order to observe what the program is actually doing… which might be quite wrong and/or stupid.
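A toy example of that gap between intention and code (entirely hypothetical, just to illustrate the debugging point):

```python
# Intention: "sum only the positive numbers in the list".
def sum_positives(xs):
    total = 0
    for x in xs:
        if x > 0:
            total += x
        else:
            total -= x  # bug: the author meant to skip negatives, not subtract them
    return total

# Reading the code as "it does what I intended" hides the bug;
# observing what it actually does exposes it immediately.
print(sum_positives([1, -2, 3]))  # intended 4, actual 6
```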
In the same way, I believe that ascribing “agency” to subsets of human behavior is a similar instance of being blinded by an abstraction that doesn’t match the actual thing. We’re made up of lots of code, and our problems can be considered bugs in the code… even if the behavior the code produces was “working as intended” when it was written. ;-)
I don’t claim that IFS assumes dedicated per-instance hardware; but it seems kind of implied. My understanding is that IFS at least assumes that parts are agents that 1) do things, 2) can be conversed with as if they were sentient, and 3) can be reasoned or negotiated with. That’s more than enough to view it as not reducing “agency”.
But the article that we are having this discussion on does try to model a system with dedicated agents actually existing (whether in hardware or software), so at least that model is introducing dedicated entities beyond necessity. ;)
Technically, it’s possible to change people without intentionally using reconsolidation or a technique that works by directly attempting it. It happens by accident all the time, after all!
And it’s quite possible for an IFS therapist to notice the filtering or distortions taking place, if they’re skilled and paying attention. Presumably, they would assign it to a part and then engage in negotiation or an attempt to “heal” said part, which then might or might not result in reconsolidation.
So I’m not claiming that IFS can’t work in such cases, only that to work, it requires an observant therapist. But such a good therapist could probably get results with any therapy model that gave them sufficient freedom to notice and address the issue, no matter what terminology was used to describe the issue, or the method of addressing it.
As the authors of UTEB put it:
After all, reconsolidation isn’t some super-secret special hack or unintended brain exploit, it’s how the brain normally updates its predictive models, and it’s supposed to happen automatically. It’s just that once a model pushes the prior probability of something high (or low) enough, your brain starts throwing out each instance of a conflicting event, even if considered collectively they would be reason to make a major update in the probability.
Here’s my reply! Got article-length, so I posted it separately.
Thanks for the clarifications! I’ll get back to you with my responses soon-ish.
This is a great comment, and I’m glad you wrote it. I’m rereading it several times over to try and get a handle on everything that you’re saying here.
In particular, I really like the “muscle” vs. “part” distinction. I’ve been pondering lately, when I should just squash an urge or desire, and when I should dialogue with it, and this distinction brings some things into focus.
I have some clarifying questions though:
I don’t know what you mean by this at all. Can you give (or maybe point to) an example?
---
This is fascinating. When I read your stressing out example, my thought was basically “wow. It seems crazy-difficult to surface the core underlying assumptions”.
But you think that this is harder, in the IFS framework. That is amazing, and I want to know more.
In practice, how do you go about eliciting the rules and then emotionally significant instances?
Maybe in the context of this example, how do you get from “I seem to be overly stressed about stuff” to the memory of your mother yelling at you?
---
I’m trying to visualize someone doing IFS or IDC, and connect it to what you’re saying here, but so far, I don’t get it.
What are the “examples”? Instances that are counter to the rule / schema of some part? (e.g. some part of me believes that if I ever change my mind about something important, then no one will love me, so I come up with an example of when this isn’t or wasn’t true?)
---
Given that, doesn’t it make sense to break down the different parts of a RL policy into parts? If different parts of a policy are acting at cross purposes, it seems like it is useful to say “part 1 is doing X-action, and part 2 is doing Y-action.”
...But you would say that it is even better to say “this system, as a whole is doing both X-action, and Y-action”?
So, let’s take the example of my mother stressing over deadlines. Until I reconsolidated that belief structure… or hell, since UTEB seems to call it a “schema”, let’s just call it that. I had a schema that said I needed to be stressed out if the goal was serious. I wasn’t aware of that, though: it just seemed like “serious projects are super stressful and I never know what to do”, except wail and grind my teeth (figuratively speaking) until stuff gets done.
Now, I was aware I was stressed, and knew this wasn’t helpful, so I did all sorts of things to calm down. People (like my wife) would tell me everything was fine, I was doing great, go easier/don’t be so hard on yourself, etc. I would try practicing self-compassion, but it didn’t do anything, except maybe momentarily, because structurally, being not-stressed was incompatible with my schema.
In fact, a rather weird thing happened: the more I managed to let go of judgments I had about how well I was doing, and the better I got at being self-compassionate, the worse I felt. It wasn’t the same kind of stress, but it was actually worse, despite being differently flavored. It was like, “you’re not taking this seriously enough” (and implicitly, “you’re an awful person”).
As it happened, the reason I got better at self-compassion was not because I was practicing it as a mode of operation, but because I used my own mindhacking methods to remove the reasons I had for self-judgment. In truth, over the last decade or two I have tried a ridiculous number of self-help and/or therapist-designed exercises intended to send love or compassion to parts or past selves or inner children etc., and what they all had in common was that they almost never clicked for me… and the few times they did, I ended up developing alternative techniques to produce the same kind of result without trying to fake the love, compassion, or care that almost never felt real to me.
In retrospect, it’s easy to see that the reason those particular things clicked is that in trying to understand the perspective from which the exercise(s) were written, I stumbled on contradictions to my existing schema, and thus fixed another way in which I was judging myself (and thus unable to have self-compassion).
Anyway, my point is that most counteractive interventions (to use the term from UTEB) involve a therapist modeling (and coaching the client to enact) helpful carer behavior. If the client’s problem is merely that they aren’t familiar with that type of behavior, then this is merely adding a new skill to their repertoire, and might work nicely.
But, if the person comes from a background where they not only didn’t receive proper care, but were actively taught say, that they were not worth being cared for, that they were bad or selfish for having normal human needs, etc., then this type of training will be counterproductive, because it goes against the client’s schemas, where being good and safe means repressing needs, judging themselves, etc.
As a result, their schema creates either negative reinforcement or neutralizing strategies. They don’t do their assignments, they stop coming to therapy. Or they develop ways to neutralize the contradiction between the schema and the new experience, e.g. by defining it as “unreal”, “you’re being nice because that’s your job”, etc.
Or, there’s the neutralizing strategy I used for many years, which was to frame things in my head as, “okay, so I’m going to be nice to my weak self so that it can shape up and do what it’s supposed to now”. (This one has been popular with some of my clients, too, as it allows you to keep punishing and diminishing yourself in the way you’re used to, while technically still completing the exercises you’re supposed to!)
So these are things that traditional therapists call all sorts of things, like transference and resistance and so on. But these are basically ways to say in effect, “the therapy is working but the client isn’t”.
The overall framework I call “Collect, Connect, Correct”, and it’s surprisingly similar to the “ABC123V” framework described in UTEB. (Actually, I guess it shouldn’t be that surprising, since the results they describe from their framework are quite similar to the kind I get.)
In the first stage, I collect information about the when/where/how of the problem, and attempt to pin down a repeatable emotional response, i.e. think about X, get emotional reaction Y. If it’s not repeatable, it’s not testable, which makes things a lot harder.
In the case of being stressed, the way that I got there was that I was lying down one afternoon, trying to take a nap and not being able to relax. When I’d think of trying to let go and actually sleep, I kept thinking something along the lines of, “I should be doing something, not relaxing”.
A side note: my description of this isn’t going to be terribly reliable, due to the phenomenon I call “change amnesia” (which UTEB alludes to in case studies, but doesn’t give a name, at least in the chapters I’ve read so far). Change amnesia is something that happens when you alter a schema. The meanings that you used to ascribe to things stop making sense, and as a result it’s hard to get your mind into the same mindset you used to have, even if it was something you were thinking just minutes before making the change!
So, despite the fact I still remember lying there and trying to go to sleep (as the UTEB authors note, autobiographical memory of events isn’t affected, just the meanings associated with them), I am having trouble reconstructing the mindset I was in, because once I changed the underlying schema, that mindset became alien to me.
Anyway, what I do remember was that I had identified a surface level idea. It was probably something like, “I should be doing something”, but because those words don’t make me feel the sense of urgency they did before, it’s hard to know if I am correctly recalling the exact statement.
But I do remember that the statement was sufficiently well-formed to use The Work on. The Work is a simple process for actually performing reconsolidation, the “123” part of UTEB’s ABC123V framework, or the “Correct” in my Collect-Connect-Correct framework.
But when I got to question 4 of the Work, there was an objection raised in my mind. I was imagining not thinking I should be doing something (or whatever the exact statement was), and got a bad feeling or perhaps a feeling that it wasn’t realistic, something of that sort. A reservation or hesitation in this step of the work corresponds to what UTEB describes as an objection from another schema, and as with their method, so too does mine call for switching to eliciting the newly-discovered schema, instead of continuing with the current one.
So at either that level, or the next level up of “attempt reconsolidation, spot objection, switch”, I had the image or idea come up of my mother being upset with me for not being stressed, and I switched from The Work to my SAMMSA model.
SAMMSA stands for “Surface, Attitude, Model, Mirror, Shadow, Assumptions”, and it is a tool I developed to identify and correct implicit beliefs encoded as part of an emotionally significant memory. It’s especially useful in matters relating to self-image and self-esteem, because AFAICT we learn these things almost entirely through our brain’s interpretation of other people’s behavior towards us.
In the specific instance, the “surface” is what my mother said and did. The Attitude was impatience and anger. The Model was, “when there is something important to be done, the right thing to do is be stressed”. The Mirror was, “if I don’t get you to do this, then you will never learn to take things seriously; you’ll grow up to be careless”. The Shadow (injected into my self-image) was the idea that: “you’re irresponsible/uncaring”. And the Assumptions (of my mother) were ideas like “I’m helpless/can’t do anything to make this work”, “somebody needs to do something”, and “it’s a serious emergency for things to not be getting done, or for there to be any problems in the doing”.
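Just to make the shape of that breakdown explicit, here is the same example written out as a plain record (an illustrative restatement only; SAMMSA itself is a mnemonic and a process, not code):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SAMMSA:
    surface: str              # what was actually said and done
    attitude: str             # the emotional tone it was delivered with
    model: str                # the implicit rule about how the world works
    mirror: str               # what the other person believed about me
    shadow: str               # the identity-level idea injected into my self-image
    assumptions: List[str] = field(default_factory=list)  # the other person's background beliefs

example = SAMMSA(
    surface="mother yelling at me to take the task seriously",
    attitude="impatience and anger",
    model="when something important must be done, the right thing to do is be stressed",
    mirror="if I don't force this, he'll never learn to take things seriously",
    shadow="you're irresponsible/uncaring",
    assumptions=["I'm helpless to make this work",
                 "somebody needs to do something",
                 "any problem in the doing is a serious emergency"],
)
```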
The key with a stack like this is to fix the Shadow first, unless the Assumptions get in the way. Shadow beliefs are things that say what a person is not, and by implication never will be. They tend to lock into place all the linked beliefs, behaviors, and assumptions, like a lynchpin to the schema that formed around them.
The contradiction, then, is to first remember and realize that I did care, as a matter of actual fact, and was not intentionally being irresponsible or “bad”. I wanted to get the thing done, and just didn’t know how to go about it. Then, I imagined “how would my mother have acted if she knew that for a fact?” At which point I then imagined growing up with her acting that way… which I was surprised to realize could be as simple as telling me to work on it daily and checking my progress. (I did initially have to work through an objection that she couldn’t just leave me to it, and couldn’t just tell me to work on it and not follow up, but these were pretty straightforward to work out.)
I think I also had some trouble during parts of this due to some of the Assumptions, so I had to deal with a couple of those via The Work. I may also be misremembering the order I did these bits in. (Order isn’t super important as long as you test things to make sure that none of the beliefs seem “real” any more, so you can clean up any that still do.)
Notice, here, the difference between how traditional therapy (IFS included) treats the idea of compassion or loving but firm caregivers, etc., vs the approach I took here. I do not try to act out being compassionate to my younger self or to my self now. I don’t try to construct in my mind some idealized parental figure. Instead, what I did was identify what was broken in (my mental model of) my mother’s beliefs and behavior, and correct that in my mental model of my mother, which is where my previous behavior came from.
This discovery was the result of studying a metric f**k-ton of books on developmental psychology, self-compassion, inner child stuff, shadow psychology, and even IFS. :) I had discovered that sometimes I could change things by reimagining parental behavior more in line with the concepts from those books, but not always. Trying to divine the difference, I finally noticed that the issue was that sometimes I simply could not, no matter how hard I tried, make a particular visualization of caring behavior feel real, and thus trigger a memory mismatch to induce reconsolidation.
What I discovered was that for such visualizations, my brain was subtly twisting the visualizations in such a way as to match a deeper schema—like the idea that I was incompetent or uncaring or unlovable! -- so that even though the imagined parent was superficially acting different, the underlying schema remained unchanged. It was like they were thinking, “well, I guess I’m supposed to be nice like this in order to be a good parent to this loser”. (I’m being flippant here since change amnesia has long since wiped most of the specifics of these scenarios from easy recollection.)
I dubbed this phenomenon “false belief change”, and found my clients did it, too. I initially had a more intuitive and less systematic way of figuring out how to get past it, but in order to teach other people to do it I gradually worked out the SAMMSA mnemonic and framework for pulling out all the relevant bits, and later still came to realize that there are only three fundamental failures of trust that define Shadows, which helps a lot in rapidly pinning them down.
That’s why this huge wall of text I’ve described for changing how I feel about important, “serious” projects is something that took maybe 20-30 minutes, including the sobbing and shaking afterward.
(Yeah, that’s a thing that happens, usually when I’m realizing all the sh** I’ve gone through in my life that was completely unnecessary. I assume it’s a sort of “accelerated grief” happening when you notice stuff like, “oh hey, I’ve spent months and years stressing out when I could’ve just worked on it each day and checked on my progress… so much pain and missed opportunities and damaged relationships and...” yeah. It can be intense to do something like that, if it’s been something that affected your life a lot.)
As I said above, I did also have to tackle some of the Assumptions, like not being able to do anything and needing somebody else to do it, that any problem equals an emergency, and so on. These didn’t take very long though, with the schema’s core anchor having been taken out. I think I did one assumption before the shadow, and the rest after, but it’s been a while. Most of the time, Assumptions don’t really show up until you’ve at least started work on fixing the shadow, either blocking it directly, or showing up when you try to imagine what it would’ve been like to grow up with the differently-thinking parent.
When I work with clients, the actual SAMMSA process and reconsolidation is similarly something that can be done in 20-30 minutes, but it may take a couple hours to get up to that point, as the earlier Collect and Connect phases can take a while, getting up to the point where you can surface a relevant memory. I was lucky with the “going to sleep” problem because it was something I had immediate access to: a problem that was actually manifesting in practice. In contrast, with clients it usually takes some time to even pin down the equivalent of “I was trying to get to sleep and kept thinking I should be doing something”, especially since most of the time the original presenting problem is something quite general and abstract.
I also find that individuals vary considerably in how easy it is for them to get to emotionally relevant memories; recently I’ve had a couple of LessWrong readers take up my free session offer, and have been quite surprised at how quickly they were able to surface things. (As it turned out, they both had prior experience with Focusing, which helps a whole heck of a lot!)
The UTEB book describes some things that sound similar to what I do to stimulate access to such memories, e.g. their phrase of “symptom deprivation” describes something kind of similar in function and intent to some of my favorite “what if?” questions to ask. And I will admit that there is some degree of art and intuition to it that I have not put into a formal framework (at least yet). But since I tend to develop frameworks in response to trying to teach things, it hasn’t really come up. Example and osmosis has generally sufficed for getting people to get the hang of doing this kind of inward access, once their meta-issue with it (if any) gets pinned down.
I think I’ve answered this above, but in case I haven’t: IFS has the therapist and/or client act out examples of caring behavior, compassion, “self-leadership”, etc. They do this by paying attention, taking parts’ needs seriously, and so on. My prediction is that for some people, some of the time, this would produce results similar to those produced by reconsolidation. Specifically, in the cases where someone doesn’t have a schema silently twisting everything into a “false belief change”, but the behavior they’re shown or taught does contradict one of their problematic schema.
But if the person is internally reframing everything to, “this is just the stupid stuff I have to do to take care of these stupid needy parts”, then no real belief change is taking place, and there will be almost no lasting benefit past the immediate reconciliation of the current conflict being worked on, if it’s even successfully resolved in the first place.
So, I understand that this isn’t what all IFS sources say they are doing. I’m just saying that, whatever you call the process of enacting these attitudes and behaviors in IFS, the only way I would expect it to ever produce any long-term effects is as the result of it being an example that triggers a contradiction in the client’s mental model, and therefore reconsolidation. (And thereby producing “transformative” change, as the UTEB authors call it, as opposed to “counteractive” change, where somebody has to intentionally maintain the counteracting behavior over time in order to sustain the effect.)
I don’t know what you mean by “parts” here. But I do focus on the smallest possible things, because it helps to keep an investigation empirically grounded. The only reason I can go from “not wanting to go to sleep” to “my mother thinks I’m irresponsible” with confidence I’m not moving randomly or making things up, is because each step is locally verifiable and reproducible.
It’s true that there are common cycles and patterns of these smaller elements, but I stick as much as possible to dealing in repeatable stimulus-response pairs, i.e., “think about X, get feeling or impression Y”. Or “adjust the phrasing of this idea until it reaches maximum emotional salience/best match with inner feeling”. All of these are empirical, locally-verifiable, and theory-free phenomena.
In contrast, “parts” are something I’ve struggled to work with in a way that allows that kind of definitiveness. In particular, I never found my “parts” to have repeatable behavior, let alone verifiable answers to questions. I could never tell if what I seemed to be getting was real, or was just me imagining/making stuff up. By comparison, the modality of “state an idea or imagine an action, then notice how I feel” was eminently repeatable and verifiable. I was able to quickly learn the difference between “having a reaction” and “wondering if I’m reacting”, and was then able to test different change techniques to see what they did. If something couldn’t change the way I automatically responded, I considered it a dud, because I wanted to change me on the inside, not just how I act on the outside. I wanted to feel differently, and once I settled on using this “test-driven” approach, I began to be able to, for the first time in my life.
So if psychology is alchemy, testing automatic emotional responses is my stab at atomic theory, and I’m working on sketches of parts of the periodic table. (With the caveat that given myself as the primary audience, and my client list being subject to major selection effects, it is entirely possible that the scope of applicability of my work is just smart-but-maybe-too-sensitive, systematically-thinking people with certain types of inferiority complexes. But that worry is considerably reduced by the stuff I’ve read so far in UTEB, whose authors’ audience does not appear as limited, and whose approach seems fairly congruent with my own.)
I wonder how much of this discussion comes down to a different extensional referent of the word “part”.
According to my view, I would call “the reinforced pattern to activate the ‘distress’ muscle [in some specific set of circumstances]” a part. That’s the thing that I would want to dialogue with.
In contrast, I would not call the “distress muscle” itself a part, because (as you say) the distress muscle doesn’t have anything like “beliefs” that could update.
In that frame, do you still have an objection?
And I don’t understand how you could “dialogue” with such a thing, except in the metaphorical sense where debugging is a “dialogue” with the software or hardware in question. I don’t ask a stimulus-response pattern to explain itself; I dialogue with the client or with my inner experience by trying things or running queries, and the answers I get back are whatever the machine does in response.
I don’t pretend that the behavior pattern is a coherent entity with which I can have a conversation in English, as for me that approach has only ever resulted in confusion, or at best some occasionally good but largely irreproducible results.
And I specifically coach clients not to interpret those responses they get, but just to report the bare fact of what is seen or felt or heard, because the purpose is not to have a conversation but to conduct an investigation or troubleshooting process.
A stimulus-response pattern doesn’t have goals or fears; goals or fears are things we have, that we get from our SR rules as emergent properties. That’s why treating them as intentional agents makes no sense to me: they’re what our agency is made of, but they themselves are not a class of thing that could even comprehend such a thing as the notion of agency.
Schemas are mental models, not utilitarian agents… not even in a theoretical sense! Humans don’t weigh utility, we have an action planner system that queries our predictive model for “what looks like something good to do in this situation”, and whatever comes back fastest tends to win, with emotionally weighted stuff or stuff tagged by certain mental muscles getting wired into faster routes.
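As a rough sketch of that “whatever comes back fastest wins” picture (the candidate actions and latencies here are made up purely for illustration):

```python
# Candidate answers to "what looks like something good to do right now?".
# Emotionally weighted responses arrive on faster routes (lower latency).
candidates = [
    ("check phone",       80),   # (action, latency in ms) -- made-up numbers
    ("tidy the desk",    250),
    ("start the report", 400),
]

def pick_action(options):
    # selection is by arrival time, not by weighing utilities
    return min(options, key=lambda pair: pair[1])[0]

print(pick_action(candidates))  # 'check phone' wins simply by arriving first
```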
To put it another way, I think the thing you’re thinking you can dialogue with is actually a spandrel of sorts, and it’s a higher-level unit than what I work with. IFS, in ascribing intention, necessarily has to look at more complex elements than raw, miniscule, almost “atomic” stimulus-response patterns, because that’s what’s required if you want to make a coherent-sounding model of an entire cycle of symptoms.
In contrast, for me the top-down view of symptom cycles is merely a guide or suggestion to begin an empirical investigation of specific repeatable responses. The larger pattern, after all, is made of things: it doesn’t just exist on its own. It’s made of smaller, simpler things whose behaviors are much more predictable and repeatable. The larger behavior cycles inevitably involve countless minor variations, but the rules that generate the cycles are much more deterministic in nature, making them more amenable to direct hacking.
I’m not sure why IFS’s exile-manager-firefighter model doesn’t fit this description? E.g. modeling something like my past behavior of compulsive computer gaming as a loop of inner critic manager pointing out that I should be doing something → exile being triggered and getting anxious → gaming firefighter seeking to suppress the anxiety with a game → inner critic manager increasing the level of criticism and triggering the other parts further, has felt like a reduction to simpler components, rather than modeling it as “little people”. They’re basically just simple trigger-action rules too, like “if there is something that Kaj should be doing and he isn’t getting around to doing it, start ramping up an increasing level of reminders”.
There’s also Janina Fisher’s model of IFS parts being linked to various specific defense systems. The way I read the first quote in the linked comment, she does conceptualize IFS parts as something like state-dependent memory; for exiles, this seems like a particularly obvious interpretation even when looking at the standard IFS descriptions of them, which talk about them being stuck at particular ages and events.
Certainly one can get the effect without compassion too, but compassion seems like a particularly effective and easy way of doing it. Especially given that in IFS you just need to ask parts to step aside until you get to Self, and then the compassion is generated automatically.
Because this description creates a new entity for each thing that happens, such that the total number of entities under discussion is “count(subject matter) times count(strategies)” instead of “count(subject matter) plus count(strategies)”. By simple math, a formulation which uses brain modules for strategies plus rules they operate on is fewer entities than one entity for every rule+strategy combo.
And that’s not even looking at the brain as a whole. If you model “inner criticism” as merely reinforcement-trained internal verbal behavior, you don’t need even one dedicated brain module for inner criticism, let alone one for each kind of thing being criticized!
Similarly, you can model most types of self-distraction behaviors as simple negative reinforcement learning: i.e., they make pain go away, so they’re reinforced. So you get “firefighting” for free as a side-effect of the brain being able to learn from reinforcement, without needing to posit a firefighting agent for each kind of deflecting behavior.
And nowhere in these descriptions is there any implication of agency, which is critical to actually producing a reductionist model of human behavior. Turning a human from one agent into multiple agents doesn’t reduce anything.
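The counting argument above, in concrete numbers (the counts themselves are made up for illustration):

```python
subjects = 30    # e.g. distinct things one criticizes oneself about
strategies = 5   # e.g. punish, distress, placate, distract, promote

parts_model_entities = subjects * strategies  # one "part" per subject x strategy combination
modules_plus_rules   = subjects + strategies  # shared strategy modules + learned trigger rules

print(parts_model_entities, modules_plus_rules)  # 150 vs. 35
```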
It seems to me that the emotional schemas that Unlocking the Emotional Brain talks about, are basically the same as what IFS calls parts. You didn’t seem to object to the description of schemas; does your objection also apply to them?
IFS in general is very vague about how exactly the parts are implemented on a neural level. It’s not entirely clear to me what kind of a model you are arguing against and what kind of a model you are arguing for instead, but I would think that IFS would be compatible with both.
I agree that reinforcement learning definitely plays a role in which parts/behaviors get activated, and discussed that in some of my later posts [1 2]; but there need to be some innate hardwired behaviors which trigger when the organism is in sufficient pain. An infant which needs help cries; it doesn’t just try out different behaviors until it hits upon one which gets it help and which then gets reinforced.
And e.g. my own compulsive behaviors tend to have very specific signatures which do not fit together with your description; e.g. a desire to keep playing a game can get “stuck on” way past the time when it has stopped being beneficial. Such as when I’ve slept in between and I just feel a need to continue the game as the first thing in the morning, and there isn’t any pain to distract myself from anymore, but the compulsion will produce pain. This is not consistent with a simple “behaviors get reinforced” model, but it is more consistent with a “parts can get stuck on after they have been activated” model.
Not sure what you mean by agency?
AFAICT, there’s a huge difference between UTEB’s “schema” (a “mental model of how the world functions”, in their words) and IFS’ notion of “agent” or “part”. A “model” is passive: it merely outputs predictions or evaluations, which are then acted on by other parts of the brain. It doesn’t have any goals, it just blindly maps situations to “things that might be good to do or avoid”. An “agent” is implicitly active and goal-seeking, whereas a model is not. “Model” implies a thing that one might change, whereas an “agent” might be required to change itself, if a change is to happen.
UTEB also describes the schema as “wordlessly [defining] how the world is”—which is quite coherent (no pun intended) with my own models of mindhacking. I’m actually looking forward to reading UTEB in full, as the introduction makes it sound like the models I’ve developed of how this stuff works, are quite similar to theirs.
(Indeed, my own approach is specifically targeted at changing implicit mental models of “how things are” or “how the world is”, because that changes lots of behaviors at once, and especially how one feels or relates to the world. So I’m curious to know if they’ve found anything else I might find useful.)
What I’m arguing against is a model where patterns of behavior (verbs) are nominalized as nouns. It’s bad enough to think that one has, say, procrastination or akrasia, as if it were a disease rather than a pattern of behavior. But to further nominalize it as an agent trying to accomplish something is going all the way to needless anthropomorphism.
To put it another way, if there are “agents” (things with intention) that cause your behavior, then you are necessarily less at cause and in control of your life. But if you instead have mental models that predict certain behaviors would be a good idea, and so you feel drawn or pushed towards them, then that is a model that still validates your experience, but doesn’t require you to fight or negotiate or whatever. Reconsolidation allows you to be more you, by gaining more choices.
But that’s a values argument. You’re asking what I’m against, and I’m not “against” IFS per se. What I am saying, and have been saying, is that nominalizing behavior patterns as “parts” or “agents” is bad reductionism, independent of its value as a therapeutic metaphor.
Over the course of this conversation, I’ve actually become slightly more open to the use of parts as a metaphor in casual conversation, if only as a stepping stone to discarding it in favor of learned rules and mental muscles.
But, the reason I’m slightly more open to it is exactly the same reason I oppose it!
Specifically, using terms like “part” or “agent” encourages automatic, implicit, anthropomorphic projection of human-like intention and behavior.
This is both bad reductionism and good metaphor. (Well, in the short term, anyway.) As a metaphor, it has certain immediate effects, including retaining disidentification with the problem (and therefore validation of one’s felt lack of agency in the problem area).
But as reductionism, it fails for the very same reason, by not actually reducing the complexity of what is being modeled, due to sneaking in those very same connotations.
Unfortunately, even as a metaphor, I think it’s short-term good, but long-term bad. I have found that people love to make things into parts, precisely because of the good feelings of validation and disidentification, and they have to be weaned off of this in order to make any progress at direct reconsolidation.
In contrast, describing learned rules and mental muscles seems to me to help people with unblending, because of the realization that there’s nothing there—no “agent”, not even themselves(!), who is actually “deciding” or pursuing “goals”. There’s nothing there to be blended with, if it’s all just a collection of rules!
But that’s a discussion about a different topic, really, because as I said from the outset, my issue with IFS is that it’s bad reductionism. And I think this article’s attempt at building IFS’s model from the bottom up fails at reductionism because it’s specifically trying to justify “parts”, rather than looking at what is the minimal machinery needed to produce the observations of IFS, independent of its model. (The article also pushes a viewpoint from design, rather than evolution, further weakening its argument.)
For example, I read Healing The Fragmented Selves Of Trauma Survivors a little over a year ago, and found in it a useful refinement: Fisher described five “roles” that parts play, and one of them was something I’d not accounted for in my rough list of “mental muscles”. But the very fact that you can exhaustively enumerate the roles that parts “play”, strongly suggests that the so-called roles are in fact the thing represented in our hardware, not the “parts”!
In other words, IFS has it precisely backwards: parts don’t “play roles”, mental modules play parts. When viewed from an evolutionary perspective, going the other way makes no sense, especially given that the described functions (fight/vigilance, flight/escape, freeze/fear, submit/shame, attach/needy), are things that are pretty darn universal in mammals.
I think you are confusing reinforcement and logic. Reinforcement learning doesn’t work on logic, it works on discounted rewards. The gaming behavior can easily become intrinsically motivating, due to it having been reinforced by previously reducing pain. (We can learn to like something “for its own sake” precisely because it has helped us avoid pain in the past, and if it produces pleasure, too, all the better!)
However, your anticipation that “continuing to play will cause me pain”, will at best be a discounted future event without the same level of reinforcement power… assuming that that’s really you thinking that at all, and not simply an internal verbal behavior being internally reinforced by a mental model of such worrying being what a “good” or “responsible” person would do! (i.e., internal virtue-signalling)
It is quite possible in my experience to put one’s self through all sorts of mental pain… and still have it feel virtuous, because then at least I care about the right things and am trying to be a responsible person… which then excuses my prior failure while also maintaining hope I can succeed in the future.
And despite these virtue-signaling behaviors seeming to be about the thing you’re doing or not doing, in my experience they don’t really include thinking about the actual problem, and so have even less impact on the outward behavior than one would expect from listening to the supposed subject matter of the inner verbalization(s).
So yeah, reinforcement learning is 100% consistent with the failure modes you describe, once you include:
negative reinforcement (that which gets us away from pain is reinforced)
secondary reinforcement (that which is reinforced, becomes “inherently” rewarding)
discounted reinforcement (that which is near in time and space has more impact than that which is far)
social reinforcement (that which signals virtue may be more reinforcing than actual virtue, due to its lower cost)
verbal behavior (what we say to ourselves or others is subject to reinforcement, independent of any actual meaning ascribed to the content of those verbalizations!)
imitative reinforcement (that which we see others do is reinforced, unless our existing learning tells us the behavior is bad, in which case it is punished instead)
All of these, I believe, are pretty well-documented properties of reinforcement learning, and more than suffice to explain the kinds of failure modes you’ve brought up. Given that they already exist, with all but verbal behavior being near-universal in the animal kingdom, a parsimonious model of human behavior needs to start from these, rather than designing a system from the ground up to account for a specific theory of psychotherapy.
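As a toy illustration of how just two of these properties (negative reinforcement plus secondary reinforcement) can leave a behavior “stuck on” after the original pain is gone, here is a sketch with arbitrary numbers:

```python
pain = 1.0            # background discomfort the behavior used to relieve
game_value = 0.0      # learned value of "play the game"
learning_rate = 0.3

# Phase 1: playing relieves pain, so its learned value climbs (negative reinforcement).
for _ in range(20):
    relief = pain
    game_value += learning_rate * (relief - game_value)

# Phase 2: the pain is gone (e.g. after a night's sleep), but the learned value
# persists -- the behavior has become "inherently" rewarding (secondary
# reinforcement), so the urge shows up first thing in the morning anyway.
pain = 0.0
print(round(game_value, 2))  # ~1.0: compulsion without any current pain to escape
```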
Cool. That makes sense.
Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of “akrasia” and they’ll conceptualize it, more or less, as “my system 1 is stupid and doesn’t understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing.”
And then I might suggest that they try on the frame where “the akrasia part”, is actually an intelligent “agent” trying to optimize for their own goals (instead of a foreign, stupid entity, that they have to subdue). If the akrasia was actually right, why would that be?
And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.
[I’m obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]
That is, in practice, the part, or subagent framing helps at least some people to own their desires more, not less.
[I do want to note that you explicitly said, “What I am saying, and have been saying, is that nominalizing behavior patterns as “parts” or “agents” is bad reductionism, independent of its value as a therapeutic metaphor.”]
---
This doesn’t seem right in my personal experience, because the “agents” are all me. I’m conceptualizing the parts of myself as separate from each other, because it’s easier to think about that way, but I’m not disowning or disassociating from any of them. It’s all me.
So my response to that is to say, “ok, let’s get empirical about that. When does this happen, exactly? If you think about working harder right now, what happens?” Or, “What happens if you don’t work harder at your job?”
In other words, I immediately try to drop to a stimulus-response level, and reject all higher-level interpretive frameworks, except insofar as they give me ideas of where to drop my depth charges, so to speak. :)
I usually don’t bring that kind of thing up until a point has been reached where the client can see that empirically. For example, if I’ve asked them to imagine what happens if they get their wish and are now working harder at their job… and they notice that they feel awful or whatever. And then I don’t need to address the intentionality at all.
And sometimes, the real problem has nothing to do with the work and everything to do with a belief that they aren’t a good person unless they work more, so it doesn’t matter how terrible it is… but also, the very fact that they’re guilty about not working more may be precisely the thing they’re avoiding by not working!
In other words, sometimes an intentional model fails because brains are actually pretty stupid, and have design flaws such that trying to view them as having sensible or coherent goals simply doesn’t work.
For example, our action planning subsystem is really bad at prioritizing between things we feel good about doing vs. things we feel bad about not doing. It wants to avoid the things we feel bad about not doing, because when we think about them, we feel bad. That part of our brains doesn’t understand things like “logical negation” or “implicative reasoning”, it just processes things based on their emotional tags. (i.e., “bad = run away”)
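A toy sketch of that failure mode (the tasks and tags are made up; the point is only that a selector which sees nothing but emotional tags cannot distinguish “doing X feels bad” from “not doing X feels bad”):

```python
thoughts = [
    {"content": "go for a walk",         "tag": "good"},
    {"content": "the overdue tax forms", "tag": "bad"},  # bad because of NOT doing them
]

def naive_planner(options):
    # approach anything tagged good, avoid anything tagged bad;
    # no logical negation, no reasoning about *why* something feels bad
    return [t["content"] for t in options if t["tag"] == "good"]

print(naive_planner(thoughts))  # ['go for a walk'] -- the tax forms just get avoided
```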
And I’m also not saying I never do anything that’s a modeling of intention. But I get there bottom-up, not top-down, and it only comes up in a few places.
Also, most of the intentional models I use are for things that pass through the brain’s intention-modeling system: i.e., our mental models of what other people think/thought about us!
For example, the SAMMSA pattern is all about pulling that stuff out, as is the MTF pattern (“meant to feel/made to feel”—a subset of SAMMSA dealing with learnings of how others intend for us to feel in certain circumstances).
The only other place I use quasi-intentional frames is in describing the evolutionary function or “intent” of our brain modules. For example, distress behavior is “intended” to generate caring responses from parents. But this isn’t about what the person intends, it’s about what their brain is built to do. When you were a crying baby, “you” didn’t even have anything that qualifies as intention yet, so how could we say you had a part with that intention?
And even then, I’m treating it as, “in this context, this behavior pattern would produce this result” (producing reinforcement or gene propagation), not “this thing is trying to produce this result, so it has this behavior pattern in this context.” Given the fact that my intention is always to reduce to the actual “wires” or “lines of code” producing a problem, intention modeling is going in the wrong direction most of the time.
My analogy about confusing a thermostat with something hot or cold underneath speaks to why: unlike IFS, I don’t assume that parts have positive, functional intentions, even if they arose out of the positive “design intentions” of the system as a whole. After all, the plan for achieving that original “intention” may no longer be valid! (insofar as there even was one to begin with.)
That’s why I don’t think of the thermostat as being something that “wants” temperature, because it would distract me from actually looking at the wall and the wiring and the sensors, which is the only way I can be certain that I’m always getting closer to a solution rather than guessing or going in circles. (That is, by always working with things I can test, like a programmer debugging a program. Rerunning it and inspecting, putting in different data values and seeing how the behavior changes, and so on.)
+1.