If IFS said, “brains have modules for these types of mental behavior”, (e.g. hiding, firefighting, etc.), then that would also be a reduction.
I’m not sure why IFS’s exile-manager-firefighter model doesn’t fit this description? E.g. modeling something like my past behavior of compulsive computer gaming as a loop of inner critic manager pointing out that I should be doing something → exile being triggered and getting anxious → gaming firefighter seeking to suppress the anxiety with a game → inner critic manager increasing the level of criticism and triggering the other parts further, has felt like a reduction to simpler components, rather than modeling it as “little people”. They’re basically just simple trigger-action rules too, like “if there is something that Kaj should be doing and he isn’t getting around to doing it, start ramping up an increasing level of reminders”.
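To make the “simple trigger-action rules” framing concrete, here is a minimal sketch of that loop in code; the state variables, thresholds, and numbers are invented purely for illustration and aren’t part of IFS itself:

```python
# Minimal sketch: the critic/exile/firefighter loop as plain trigger-action rules.
# All variables, thresholds, and numbers are invented for illustration.

state = {"undone_task": True, "criticism": 0.0, "anxiety": 0.0, "gaming": False}

def critic_rule(s):
    # "If there is something Kaj should be doing and he isn't getting around
    # to doing it, start ramping up an increasing level of reminders."
    if s["undone_task"]:
        s["criticism"] += 1.0

def exile_rule(s):
    # Criticism triggers the stored distress.
    if s["criticism"] > 0:
        s["anxiety"] += s["criticism"]

def firefighter_rule(s):
    # Enough anxiety triggers a suppressing distraction.
    if s["anxiety"] > 2.0:
        s["gaming"] = True
        s["anxiety"] *= 0.5  # the game dampens the feeling...
        # ...but the task stays undone, so the critic fires again next cycle.

for _ in range(5):
    critic_rule(state)
    exile_rule(state)
    firefighter_rule(state)
    print(state)
```

Each rule is a dumb condition-action pair; the escalating loop falls out of running them together, with no “little people” anywhere.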
There’s also Janina Fisher’s model of IFS parts being linked to various specific defense systems. The way I read the first quote in the linked comment, she does conceptualize IFS parts as something like state-dependent memory; for exiles, this seems like a particularly obvious interpretation even when looking at the standard IFS descriptions of them, which talk about them being stuck at particular ages and events.
but compassion towards a “part” is not really necessary for that, just that one suppress commentary.
Certainly one can get the effect without compassion too, but compassion seems like a particularly effective and easy way of doing it. Especially given that in IFS you just need to ask parts to step aside until you get to Self, and then the compassion is generated automatically.
I’m not sure why IFS’s exile-manager-firefighter model doesn’t fit this description? E.g. modeling something like my past behavior of compulsive computer gaming as a loop of inner critic manager pointing out that I should be doing something → exile being triggered and getting anxious → gaming firefighter seeking to suppress the anxiety with a game → inner critic manager increasing the level of criticism and triggering the other parts further, has felt like a reduction to simpler components, rather than modeling it as “little people”.
Because this description creates a new entity for each thing that happens, such that the total number of entities under discussion is “count(subject matter) times count(strategies)” instead of “count(subject matter) plus count(strategies)”. By simple math, a formulation which uses brain modules for strategies, plus rules they operate on, involves fewer entities than one entity for every rule+strategy combo.
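A toy illustration of that count difference (the topic and strategy lists are invented placeholders):

```python
# Toy illustration of the entity-count argument; the lists are invented placeholders.
topics = ["work", "gaming", "diet", "email", "exercise"]   # count(subject matter) = 5
strategies = ["criticize", "distract", "hide"]             # count(strategies) = 3

# One "part" per topic+strategy combination:
parts = [(t, s) for t in topics for s in strategies]
print(len(parts))                        # 5 * 3 = 15 entities

# One module per strategy, plus one learned rule per topic it operates on:
print(len(strategies) + len(topics))     # 3 + 5 = 8 entities
```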
And that’s not even looking at the brain as a whole. If you model “inner criticism” as merely reinforcement-trained internal verbal behavior, you don’t need even one dedicated brain module for inner criticism, let alone one for each kind of thing being criticized!
Similarly, you can model most types of self-distraction behaviors as simple negative reinforcement learning: i.e., they make pain go away, so they’re reinforced. So you get “firefighting” for free as a side-effect of the brain being able to learn from reinforcement, without needing to posit a firefighting agent for each kind of deflecting behavior.
And nowhere in these descriptions is there any implication of agency, which is critical to actually producing a reductionist model of human behavior. Turning a human from one agent into multiple agents doesn’t reduce anything.
Because this description creates a new entity for each thing that happens, such that the total number of entities under discussion is “count(subject matter) times count(strategies)” instead of “count(subject matter) plus count(strategies)”. By simple math, a formulation which uses brain modules for strategies, plus rules they operate on, involves fewer entities than one entity for every rule+strategy combo.
It seems to me that the emotional schemas that Unlocking the Emotional Brain talks about, are basically the same as what IFS calls parts. You didn’t seem to object to the description of schemas; does your objection also apply to them?
IFS in general is very vague about how exactly the parts are implemented on a neural level. It’s not entirely clear to me what kind of a model you are arguing against and what kind of a model you are arguing for instead, but I would think that IFS would be compatible with both.
Similarly, you can model most types of self-distraction behaviors as simple negative reinforcement learning: i.e., they make pain go away, so they’re reinforced. So you get “firefighting” for free as a side-effect of the brain being able to learn from reinforcement
I agree that reinforcement learning definitely plays a role in which parts/behaviors get activated, and discussed that in some of my later posts [1 2]; but there need to be some innate hardwired behaviors which trigger when the organism is in sufficient pain. An infant which needs help cries; it doesn’t just try out different behaviors until it hits upon one which gets it help and which then gets reinforced.
And e.g. my own compulsive behaviors tend to have very specific signatures which do not fit together with your description; e.g. a desire to keep playing a game can get “stuck on” way past the time when it has stopped being beneficial. Such as when I’ve slept in between and I just feel a need to continue the game as the first thing in the morning, and there isn’t any pain to distract myself from anymore, but the compulsion will produce pain. This is not consistent with a simple “behaviors get reinforced” model, but it is more consistent with a “parts can get stuck on after they have been activated” model.
And nowhere in these descriptions is there any implication of agency, which is critical to actually producing a reductionist model of human behavior.
Not sure what you mean by agency?
It seems to me that the emotional schemas that Unlocking the Emotional Brain talks about, are basically the same as what IFS calls parts. You didn’t seem to object to the description of schemas; does your objection also apply to them?
AFAICT, there’s a huge difference between UTEB’s “schema” (a “mental model of how the world functions”, in their words) and IFS’ notion of “agent” or “part”. A “model” is passive: it merely outputs predictions or evaluations, which are then acted on by other parts of the brain. It doesn’t have any goals, it just blindly maps situations to “things that might be good to do or avoid”. An “agent” is implicitly active and goal-seeking, whereas a model is not. “Model” implies a thing that one might change, whereas an “agent” might be required to change itself, if a change is to happen.
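One way to see the contrast in code (a sketch; the class names, fields, and example mappings are invented, not anything from UTEB or IFS):

```python
# Sketch of the model/agent distinction; names and example data are invented.

class Schema:
    """Passive model: maps situations to evaluations. No goals, no actions."""
    def __init__(self, mapping):
        self.mapping = mapping            # e.g. {"boss frowns": "I am in danger"}

    def evaluate(self, situation):
        # Blindly outputs a prediction/evaluation; something else acts on it.
        return self.mapping.get(situation, "neutral")


class Agent:
    """Active entity: holds a goal and chooses actions to pursue it."""
    def __init__(self, goal):
        self.goal = goal

    def act(self, situation):
        if situation != self.goal:
            return f"do something to bring about {self.goal!r}"
        return "rest"


schema = Schema({"boss frowns": "I am in danger"})
print(schema.evaluate("boss frowns"))     # the schema itself does nothing about it
```

Changing a Schema is just editing its mapping; “changing” an Agent implies persuading or negotiating with something that has its own agenda.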
UTEB also describes the schema as “wordlessly [defining] how the world is”—which is quite coherent (no pun intended) with my own models of mindhacking. I’m actually looking forward to reading UTEB in full, as the introduction makes it sound like the models I’ve developed of how this stuff works, are quite similar to theirs.
(Indeed, my own approach is specifically targeted at changing implicit mental models of “how things are” or “how the world is”, because that changes lots of behaviors at once, and especially how one feels or relates to the world. So I’m curious to know if they’ve found anything else I might find useful.)
IFS in general is very vague about how exactly the parts are implemented on a neural level. It’s not entirely clear to me what kind of a model you are arguing against and what kind of a model you are arguing for instead, but I would think that IFS would be compatible with both.
What I’m arguing against is a model where patterns of behavior (verbs) are nominalized as nouns. It’s bad enough to think that one has, say, procrastination or akrasia, as if it were a disease rather than a pattern of behavior. But to further nominalize it as an agent trying to accomplish something is going all the way to needless anthropomorphism.
To put it another way, if there are “agents” (things with intention) that cause your behavior, then you are necessarily less at cause and in control of your life. But if you instead have mental models that predict certain behaviors would be a good idea, and so you feel drawn or pushed towards them, then that is a model that still validates your experience, but doesn’t require you to fight or negotiate or whatever. Reconsolidation allows you to be more you, by gaining more choices.
But that’s a values argument. You’re asking what I’m against, and I’m not “against” IFS per se. What I am saying, and have been saying, is that nominalizing behavior patterns as “parts” or “agents” is bad reductionism, independent of its value as a therapeutic metaphor.
Over the course of this conversation, I’ve actually become slightly more open to the use of parts as a metaphor in casual conversation, if only as a stepping stone to discarding it in favor of learned rules and mental muscles.
But, the reason I’m slightly more open to it is exactly the same reason I oppose it!
Specifically, using terms like “part” or “agent” encourages automatic, implicit, anthropomorphic projection of human-like intention and behavior.
This is both bad reductionism and good metaphor. (Well, in the short term, anyway.) As a metaphor, it has certain immediate effects, including retaining disidentification with the problem (and therefore validation of one’s felt lack of agency in the problem area).
But as reductionism, it fails for the very same reason, by not actually reducing the complexity of what is being modeled, due to sneaking in those very same connotations.
Unfortunately, even as a metaphor, I think it’s short-term good, but long-term bad. I have found that people love to make things into parts, precisely because of the good feelings of validation and disidentification, and they have to be weaned off of this in order to make any progress at direct reconsolidation.
In contrast, describing learned rules and mental muscles seems to me to help people with unblending, because of the realization that there’s nothing there—no “agent”, not even themselves(!), who is actually “deciding” or pursuing “goals”. There’s nothing there to be blended with, if it’s all just a collection of rules!
But that’s a discussion about a different topic, really, because as I said from the outset, my issue with IFS is that it’s bad reductionism. And I think this article’s attempt at building IFS’s model from the bottom up fails at reductionism because it’s specifically trying to justify “parts”, rather than looking at what is the minimal machinery needed to produce the observations of IFS, independent of its model. (The article also pushes a viewpoint from design, rather than evolution, further weakening its argument.)
For example, I read Healing The Fragmented Selves Of Trauma Survivors a little over a year ago, and found in it a useful refinement: Fisher described five “roles” that parts play, and one of them was something I’d not accounted for in my rough list of “mental muscles”. But the very fact that you can exhaustively enumerate the roles that parts “play”, strongly suggests that the so-called roles are in fact the thing represented in our hardware, not the “parts”!
In other words, IFS has it precisely backwards: parts don’t “play roles”, mental modules play parts. When viewed from an evolutionary perspective, going the other way makes no sense, especially given that the described functions (fight/vigilance, flight/escape, freeze/fear, submit/shame, attach/needy), are things that are pretty darn universal in mammals.
And e.g. my own compulsive behaviors tend to have very specific signatures which do not fit together with your description; e.g. a desire to keep playing a game can get “stuck on” way past the time when it has stopped being beneficial. Such as when I’ve slept in between and I just feel a need to continue the game as the first thing in the morning, and there isn’t any pain to distract myself from anymore, but the compulsion will produce pain. This is not consistent with a simple “behaviors get reinforced” model, but it is more consistent with a “parts can get stuck on after they have been activated” model.
I think you are confusing reinforcement and logic. Reinforcement learning doesn’t work on logic, it works on discounted rewards. The gaming behavior can easily become intrinsically motivating, due to it having been reinforced by previously reducing pain. (We can learn to like something “for its own sake” precisely because it has helped us avoid pain in the past, and if it produces pleasure, too, all the better!)
However, your anticipation that “continuing to play will cause me pain”, will at best be a discounted future event without the same level of reinforcement power… assuming that that’s really you thinking that at all, and not simply an internal verbal behavior being internally reinforced by a mental model of such worrying being what a “good” or “responsible” person would do! (i.e., internal virtue-signalling)
It is quite possible in my experience to put one’s self through all sorts of mental pain… and still have it feel virtuous, because then at least I care about the right things and am trying to be a responsible person… which then excuses my prior failure while also maintaining hope I can succeed in the future.
And despite these virtue-signaling behaviors seeming to be about the thing you’re doing or not doing, in my experience they don’t really include thinking about the actual problem, and so have even less impact on the outward behavior than one would expect from listening to the supposed subject matter of the inner verbalization(s).
So yeah, reinforcement learning is 100% consistent with the failure modes you describe, once you include:
negative reinforcement (that which gets us away from pain is reinforced)
secondary reinforcement (that which is reinforced becomes “inherently” rewarding)
discounted reinforcement (that which is near in time and space has more impact than that which is far)
social reinforcement (that which signals virtue may be more reinforcing than actual virtue, due to its lower cost)
verbal behavior (what we say to ourselves or others is subject to reinforcement, independent of any actual meaning ascribed to the content of those verbalizations!)
imitative reinforcement (that which we see others do is reinforced, unless our existing learning tells us the behavior is bad, in which case it is punished instead)
All of these, I believe, are pretty well-documented properties of reinforcement learning, and more than suffice to explain the kinds of failure modes you’ve brought up. Given that they already exist, with all but verbal behavior being near-universal in the animal kingdom, a parsimonious model of human behavior needs to start from these, rather than designing a system from the ground up to account for a specific theory of psychotherapy.
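Here is a sketch, with made-up numbers, of how just the first three of those properties could combine to produce the “stuck on the next morning” pattern described above:

```python
# Sketch with made-up numbers: negative + secondary + discounted reinforcement.

GAMMA = 0.9                      # per-hour discount ("discounted reinforcement")

# Past history: gaming repeatedly removed pain ("negative reinforcement"),
# so the behavior accumulated its own learned value ("secondary reinforcement").
value_of_gaming = 0.0
for _ in range(50):              # many past episodes of pain relief
    pain_removed = 1.0
    value_of_gaming += 0.1 * pain_removed      # crude value update

# This morning: no pain left to escape, and playing will hurt ~8 hours from now.
immediate_pull = value_of_gaming               # cached value fires anyway
future_cost = -3.0 * (GAMMA ** 8)              # distant pain, heavily discounted
print(immediate_pull, future_cost)             # ~5.0 vs ~-1.3
print(immediate_pull + future_cost > 0)        # True: the compulsion still wins
```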
What I’m arguing against is a model where patterns of behavior (verbs) are nominalized as nouns.
Cool. That makes sense.
It’s bad enough to think that one has, say, procrastination or akrasia, as if it were a disease rather than a pattern of behavior. But to further nominalize it as an agent trying to accomplish something is going all the way to needless anthropomorphism.
Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of “akrasia” and they’ll conceptualize it, more or less, as “my system 1 is stupid and doesn’t understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing.”
And then I might suggest that they try on the frame where “the akrasia part” is actually an intelligent “agent” trying to optimize for their own goals (instead of a foreign, stupid entity that they have to subdue). If the akrasia was actually right, why would that be?
And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.
[I’m obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]
That is, in practice, the part or subagent framing helps at least some people to own their desires more, not less.
[I do want to note that you explicitly said, “What I am saying, and have been saying, is that nominalizing behavior patterns as “parts” or “agents” is bad reductionism, independent of its value as a therapeutic metaphor.”]
---
To put it another way, if there are “agents” (things with intention) that cause your behavior, then you are necessarily less at cause and in control of your life.
This doesn’t seem right in my personal experience, because the “agents” are all me. I’m conceptualizing the parts of myself as separate from each other, because it’s easier to think about that way, but I’m not disowning or dissociating from any of them. It’s all me.
Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of “akrasia” and they’ll conceptualize it, more or less, as “my system 1 is stupid and doesn’t understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing.”
So my response to that is to say, “ok, let’s get empirical about that. When does this happen, exactly? If you think about working harder right now, what happens?” Or, “What happens if you don’t work harder at your job?”
In other words, I immediately try to drop to a stimulus-response level, and reject all higher-level interpretive frameworks, except insofar as they give me ideas of where to drop my depth charges, so to speak. :)
And then I might suggest that they try on the frame where “the akrasia part” is actually an intelligent “agent” trying to optimize for their own goals (instead of a foreign, stupid entity that they have to subdue). If the akrasia was actually right, why would that be?
I usually don’t bring that kind of thing up until a point has been reached where the client can see that empirically. For example, if I’ve asked them to imagine what happens if they get their wish and are now working harder at their job… and they notice that they feel awful or whatever. And then I don’t need to address the intentionality at all.
And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.
And sometimes, the real problem has nothing to do with the work and everything to do with a belief that they aren’t a good person unless they work more, so it doesn’t matter how terrible it is… but also, the very fact that they feel guilty about not working more may be precisely the thing they’re avoiding by not working!
In other words, sometimes an intentional model fails because brains are actually pretty stupid, and have design flaws such that trying to view them as having sensible or coherent goals simply doesn’t work.
For example, our action planning subsystem is really bad at prioritizing between things we feel good about doing vs. things we feel bad about not doing. It wants to avoid the things we feel bad about not doing, because when we think about them, we feel bad. That part of our brains doesn’t understand things like “logical negation” or “implicative reasoning”, it just processes things based on their emotional tags. (i.e., “bad = run away”)
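For concreteness, a sketch of what tag-based selection looks like (the items and valences are invented): the selector only ever sees the feeling attached to contemplating an option, never the logical structure “not doing X is what’s bad”.

```python
# Sketch of emotional-tag-based selection; items and valences are invented.

emotional_tags = {
    "play a game":            +0.6,   # feels good to think about
    "check social media":     +0.4,
    "think about the report": -0.8,   # overdue, so contemplating it feels bad
}

def choose(options):
    # "bad = run away": pick whatever feels best to contemplate right now.
    return max(options, key=lambda o: emotional_tags[o])

print(choose(list(emotional_tags)))   # never picks the report, even though
                                      # avoiding it is what keeps the bad feeling alive
```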
[I’m obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]
And I’m also not saying I never do anything that’s a modeling of intention. But I get there bottom-up, not top-down, and it only comes up in a few places.
Also, most of the intentional models I use are for things that pass through the brain’s intention-modeling system: i.e., our mental models of what other people think/thought about us!
For example, the SAMMSA pattern is all about pulling that stuff out, as is the MTF pattern (“meant to feel/made to feel”—a subset of SAMMSA dealing with learnings of how others intend for us to feel in certain circumstances).
The only other place I use quasi-intentional frames is in describing the evolutionary function or “intent” of our brain modules. For example, distress behavior is “intended” to generate caring responses from parents. But this isn’t about what the person intends, it’s about what their brain is built to do. When you were a crying baby, “you” didn’t even have anything that qualifies as intention yet, so how could we say you had a part with that intention?
And even then, I’m treating it as, “in this context, this behavior pattern would produce this result” (producing reinforcement or gene propagation), not “this thing is trying to produce this result, so it has this behavior pattern in this context.” Given the fact that my intention is always to reduce to the actual “wires” or “lines of code” producing a problem, intention modeling is going in the wrong direction most of the time.
My analogy about confusing a thermostat with something hot or cold underneath speaks to why: unlike IFS, I don’t assume that parts have positive, functional intentions, even if they arose out of the positive “design intentions” of the system as a whole. After all, the plan for achieving that original “intention” may no longer be valid! (insofar as there even was one to begin with.)
That’s why I don’t think of the thermostat as being something that “wants” temperature, because it would distract me from actually looking at the wall and the wiring and the sensors, which is the only way I can be certain that I’m always getting closer to a solution rather than guessing or going in circles. (That is, by always working with things I can test, like a programmer debugging a program. Rerunning it and inspecting, putting in different data values and seeing how the behavior changes, and so on.)