Excellent summary! There are a couple of areas where you may have slightly over-stated my claims, though:
> IFS “breaks down” behavior into mini-people inside our heads, each mini-person being equally complex as a full psyche.
I wouldn’t say that IFS claims each mini-person is equally complex, only that the reduction is merely a separation of goals or concerns, and does not reduce the complexity of having agency. This is particularly important because it is precisely the elimination of the idea of smart or strategic agency that allows one to actually debug brains.
Compare to programming: when writing a program, one intends for it to behave in a certain way. Yet bugs exist, because the mapping of intention to actual rules for behavior is occasionally incomplete or incorrectly matched to the situation in which the program operates.
But so long as the programmer thinks of the program as acting according to the programmer’s intention (as opposed to whatever the programmer actually wrote), it is hard for that programmer to actually debug the program. Debugging requires the programmer to discard any mental model of what the program is “supposed to” do, in order to observe what the program is actually doing… which might be quite wrong and/or stupid.
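To make the analogy concrete, here is a toy, entirely hypothetical illustration in Python: the programmer *intends* to average a list, but the code they actually wrote divides by a hard-coded length. Only by observing the actual output, rather than the intended behavior, does the bug become visible.

```python
# Hypothetical toy example: intention vs. what was actually written.
def average(xs):
    """Intended: return the mean of xs."""
    total = 0
    for x in xs:
        total += x
    return total / 10  # bug: should be len(xs); only "works" for 10-item lists

# Debugging means observing what the code actually does,
# not what it is "supposed to" do:
print(average([1, 2, 3]))  # prints 0.6, not the intended 2.0
```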
In the same way, I believe that ascribing “agency” to subsets of human behavior blinds us with an abstraction that doesn’t match the actual thing. We’re made up of lots of code, and our problems can be considered bugs in that code… even if the behavior the code produces was “working as intended” when it was written. ;-)
> On the other hand, IFS assumes that there is dedicated hardware for each instance of an action pattern: each part corresponds to something like an evolved module in the brain, and each instance of a negative behavior/emotion corresponds to a separate part.
I don’t claim that IFS assumes dedicated per-instance hardware; but it seems kind of implied. My understanding is that IFS at least assumes that parts are agents that 1) do things, 2) can be conversed with as if they were sentient, and 3) can be reasoned or negotiated with. That’s more than enough to view it as not reducing “agency”.
But the article that we are having this discussion on does try to model a system with dedicated agents actually existing (whether in hardware or software), so at least that model is introducing dedicated entities beyond necessity. ;)
Besides the issue with luck, IFS does not really have the concept of a schema which keeps interpreting behaviors in light of its existing model, thus filtering out all the counter-evidence that the playacting might otherwise have contained. To address this, you need to target the problematic schema directly, which requires you to actually know about this kind of thing and be able to use reconsolidation techniques directly.
Technically, it’s possible to change people without intentionally using reconsolidation or a technique that works by directly attempting it. It happens by accident all the time, after all!
And it’s quite possible for an IFS therapist to notice the filtering or distortions taking place, if they’re skilled and paying attention. Presumably, they would assign it to a part and then engage in negotiation or an attempt to “heal” said part, which then might or might not result in reconsolidation.
So I’m not claiming that IFS can’t work in such cases, only that to work, it requires an observant therapist. But such a good therapist could probably get results with any therapy model that gave them sufficient freedom to notice and address the issue, no matter what terminology was used to describe the issue, or the method of addressing it.
As the authors of UTEB put it:
> Transformational change of the kind addressed here—the true disappearance of long-standing, distressing emotional learning—of course occurs at times in all sorts of psychotherapies that involve no design or intention to implement the transformation sequence by creating juxtaposition experiences.
After all, reconsolidation isn’t some super-secret special hack or unintended brain exploit, it’s how the brain normally updates its predictive models, and it’s supposed to happen automatically. It’s just that once a model pushes the prior probability of something high (or low) enough, your brain starts throwing out each instance of a conflicting event, even if considered collectively they would be reason to make a major update in the probability.
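The “throwing out individually weak evidence” failure can be sketched numerically. This is a hypothetical illustration with made-up numbers, not a claim about actual neural computation: each single conflicting event shifts the log-odds only a little, so an update rule that dismisses anything below a salience threshold never changes, while a proper Bayesian update over all the events together would overturn the model.

```python
import math

# Hypothetical numbers: prior odds 99:1 in favor of the existing model,
# and each conflicting event carries a 2:1 likelihood ratio against it.
prior_log_odds = math.log(99)      # strongly favors the existing model
event_log_lr = math.log(1 / 2)     # each event is weak evidence against it

# A "dismiss anything weak" rule: ignore evidence below a salience threshold.
threshold = math.log(3)            # only update on evidence stronger than 3:1
per_event_update = event_log_lr if abs(event_log_lr) > threshold else 0.0

# Twenty weak conflicting events, filtered one at a time: no change at all.
one_at_a_time = prior_log_odds + 20 * per_event_update

# The same twenty events, considered collectively: a decisive update.
collective = prior_log_odds + 20 * event_log_lr

print(one_at_a_time > 0)  # True: the model survives; each event was discarded
print(collective > 0)     # False: jointly, the evidence overturns the model
```

The point of the toy model is only that per-event thresholding and joint updating can disagree arbitrarily badly as weak evidence accumulates.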
Here’s my reply! Got article-length, so I posted it separately.
Thanks for the clarifications! I’ll get back to you with my responses soon-ish.