presupposing that all my desires are mine and that I have good reasons even for doing apparently self-destructive things
I’ve always disliked the term “subagent”, but this sentence seems to capture what I mean when I’m talking about psychological “parts”.
So I think I agree with you about the ontological status of parts, but I can’t tell if you’re making some bolder claim.
What are you imagining would be the case if IFS were literally true, and subagents were real, instead of “just a metaphor”?
. . .
In fact, I dislike the word “subagent”, because it imports implications that might not hold. A part might be agent-like, but it also might be closer to an urge or a desire or an impulse.
To my understanding, the key idea of the “parts” framing is that I should assume, by default, that each part is acting from a model: a set of beliefs about the world or my goals. That is, my desire / urge / reflex is not “mindless”: it can update.
Overall this makes your comment read to me as “these things are not really [subagents], they’re just reactions that have [these specific properties of subagents].”
What are you imagining would be the case if IFS were literally true, and subagents were real, instead of “just a metaphor”?
Well, for one thing, that they would intelligently shift their behavior to achieve their outcomes, rather than stupidly continuing things that don’t work any more. That would be one implication of agency.
Also, if IFS were literally true, and “subagents” were the atomic unit of behavior, then the UTEB model shouldn’t work, and neither should mine or many other modalities that operate on smaller, non-intentional units.
In fact, I dislike the word “subagent”, because it imports implications that might not hold. A part might be agent-like, but it also might be closer to an urge or a desire or an impulse.
Ah! Now we’re getting somewhere. In my frame, an urge, desire or impulse is a reaction. The “response” in stimulus-response. Which is why I want to pin down “when does this thing happen?”, to get the stimulus part that goes with it.
To my understanding, the key idea of the “parts” framing is that I should assume, by default, that each part is acting from a model: a set of beliefs about the world or my goals. That is, my desire / urge / reflex is not “mindless”: it can update.
I see it differently: we have mental models of the world that contain “here are some things that might be good to do in certain situations”, where “things to do” can include “how you should feel, so as to bias towards a certain category of behaviors that might be helpful based on what we know”. (And the actions or feelings listed in the model can be things other people did or felt!)
In other words, the desire or urge is the output of a lookup table, and the lookup table can be changed. But both the urge and the lookup table are dumb, passive, and prefer not to update if at all possible. (To the extent that information processed through the lookup table will be distorted to reinforce the validity of what’s already in the lookup table.)
Even in the cases where somebody makes a conscious decision to pursue a goal (e.g., a child thinking “I’ll be good so my parents will love me”, or “I’ll be perfect so nobody can reject me”), that’s just slapping an urge or desire into the lookup table, basically. It doesn’t mean we pursue it in any systematic or even sane way!
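The lookup-table model described above can be sketched in code. This is purely an illustrative toy, not anyone’s actual cognitive model; the class and method names (`ResponseTable`, `react`, `learn`, `update`) are invented for the example. The point it shows: an urge is just the output of a lookup, a conscious decision just writes a new entry, and contradicting evidence gets distorted into reinforcement of what’s already stored.

```python
# Hypothetical sketch of the "dumb lookup table" model of urges.
# All names here are illustrative, not from any real framework.

class ResponseTable:
    """Maps a stimulus to a cached response, and resists updating."""

    def __init__(self):
        # stimulus -> (response, confidence)
        self.table = {}

    def react(self, stimulus):
        # An urge/desire is just the output of a lookup, not a plan.
        response, _ = self.table.get(stimulus, ("no reaction", 0.0))
        return response

    def learn(self, stimulus, response):
        # A conscious decision ("I'll be good so my parents love me")
        # merely writes an entry; nothing makes its pursuit systematic.
        self.table[stimulus] = (response, 1.0)

    def update(self, stimulus, evidence_response):
        # The table prefers not to update: contradicting evidence is
        # distorted into reinforcement of the existing entry.
        old_response, confidence = self.table.get(stimulus, (None, 0.0))
        if old_response is not None and evidence_response != old_response:
            self.table[stimulus] = (old_response, min(1.0, confidence + 0.1))
        else:
            self.table[stimulus] = (evidence_response, confidence)


mind = ResponseTable()
mind.learn("criticism", "feel defensive")
print(mind.react("criticism"))            # feel defensive
mind.update("criticism", "stay curious")  # contradicting evidence...
print(mind.react("criticism"))            # ...still: feel defensive
```

Note that nothing in this sketch has goals or does search: the “intelligence” people attribute to a part would have to live in the machinery around the table, which is exactly the disagreement at hand.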
So, what you’re seeing as a coherent “part”, I see as a collection of assorted interacting machinery that, when it works, could maybe be seen as an intelligent goal-seeking agent… but mostly is dumb machinery subject to all kinds of weird breakage scenarios, turning us all into neurotic f**kups, full of hypocrisy and akrasia. ;-)