I’m still not sure what it would mean for humans to actually have subagents, versus to just behave exactly as if they have subagents. I don’t know what empirical finding would distinguish between those two theories.
There are some interesting things that crop up during IFS sessions that I think require explanation.
For example, I find it surprising that you can ask the Part a verbal question, and that part will answer in English, and the answer it gives can often be startling, and true. The whole process feels qualitatively different from just “asking yourself” that same question. It also feels qualitatively different from constructing fictional characters and asking them questions.
I also find that taking an IFS approach, in contrast to a pure Focusing approach, results in much more dramatic and noticeable internal/emotional shifts. The IFS framework is accessing internal levers that Focusing alone doesn’t.
One thing I wanted to show with my toy model, but didn’t really succeed at, was that an agent architecture in which certain functions belong to the “subagents” rather than the “agent” can be more elegant, more parsimonious, or even strictly simpler. Philosophically, I would have preferred to write the code without any for loops, because I’m pretty sure human brains never do anything that looks like a for loop. Rather, all of the subagents run constantly, in parallel, doing something more like message-passing according to their individual needs. The “agent” doesn’t check each subagent sequentially for its state; the subagents proactively inject their states into the global workspace once a certain threshold is met. This is almost certainly how the brain works, regardless of whether you call the modules “subagents” or “neural submodules” or something else. In that light, at least algorithmically, the submodules do seem to qualify as agents in most senses of the word.
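To make the threshold-and-broadcast arrangement concrete, here is a minimal sketch, not the actual toy model from the post; the names (`GlobalWorkspace`, `Subagent`), the stimuli, and the thresholds are all invented for illustration. Each subagent runs in its own thread and pushes a message into the shared workspace only when its activation crosses its threshold; nothing ever iterates over the subagents asking for their state.

```python
import queue
import threading

class GlobalWorkspace:
    """Shared blackboard: subagents push messages into it; no one polls them."""
    def __init__(self):
        self.messages = queue.Queue()

    def broadcast(self, sender, content):
        self.messages.put((sender, content))

class Subagent(threading.Thread):
    """Runs continuously in parallel; injects its state into the workspace
    only when its internal activation crosses a threshold."""
    def __init__(self, label, workspace, stimuli, threshold):
        super().__init__()
        self.label = label
        self.workspace = workspace
        self.stimuli = stimuli      # per-tick activation bumps (stand-in for perception)
        self.threshold = threshold
        self.activation = 0.0

    def run(self):
        # This loop is the subagent's own passage of time, internal to it;
        # the coordinating code never sequentially checks subagent state.
        for bump in self.stimuli:
            self.activation += bump
            if self.activation >= self.threshold:
                self.workspace.broadcast(self.label, f"activation={self.activation:.1f}")
                self.activation = 0.0  # reset after firing

ws = GlobalWorkspace()
subagents = [
    Subagent("hunger", ws, [0.4, 0.4, 0.4], threshold=1.0),   # crosses on 3rd bump
    Subagent("fatigue", ws, [0.2, 0.2, 0.2], threshold=1.0),  # never crosses
]
# One-time setup: starting the threads. After this, the subagents run in
# parallel and the "agent" only sees what they choose to broadcast.
for s in subagents:
    s.start()
for s in subagents:
    s.join()

fired = []
while not ws.messages.empty():
    fired.append(ws.messages.get())
print(fired)  # only "hunger" crossed its threshold and broadcast
```

The design point is that information flows one way: the workspace is written to by whichever subagent exceeds its threshold, rather than being filled by a central loop that interrogates each subagent in turn.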
> I’m still not sure what it would mean for humans to actually have subagents, versus to just behave exactly as if they have subagents. I don’t know what empirical finding would distinguish between those two theories.
I agree that you shouldn’t expect to see any findings that distinguish between those theories. But I thought the main question here was closer to “do humans behave as if they have subagents?”, where there’s evidence that points in that direction (“I have conflicting desires and moods!”) and evidence that points away from that direction (“Anger isn’t an agent, if it were you would see Y!”).
My examples of subagents appearing to mysteriously answer questions were meant to suggest that there are subtle things that IFS explains/predicts, which aren’t automatically explained in other models. Examples of phenomena that contradict the IFS model would be even more useful, though I’m failing to think of what those would look like.
> Examples of phenomena that contradict the IFS model would be even more useful, though I’m failing to think of what those would look like.
The reason it’s hard to think of what it would look like is that viewing things through an agentic lens makes you miss those counterexamples by confabulation. For example, you can trivially explain any behavior by postulating a part that wants to do that behavior. In contrast, simpler models have to be grounded in something more concrete (such as what the rule(s) are and how they were learned), which means such models are less flexible… and thus more likely to tell us something useful about the world, due to what they rule out.
> My examples of subagents appearing to mysteriously answer questions were meant to suggest that there are subtle things that IFS explains/predicts, which aren’t automatically explained in other models.
I don’t see that, though: if you ask people questions, they are mysteriously able to answer them, and the answers can often be startling, and true. While I can’t speak to your subjective experience, I can describe something from mine that sounds similar: if somebody asks me a question under certain circumstances, I find myself listening to an answer coming out of my mouth that I did not know before, and which I do not experience myself as knowing until after I hear myself say it.
This state of affairs does not involve IFS-style parts, so ISTM that IFS does not add anything special here to the idea that questions can trigger the appearance of information in the mind that one did not explicitly know beforehand… Almost as if it had just been prepared fresh by a chef in the kitchen, vs. something that was already in our refrigerator of knowledge. ;-)