For example, you might feel anxious about something and be unable to sleep because you keep thinking about it. You could think of this as being taken over by a part, or by yourself in a configuration of anxiety and worry.
Or you could think of this as your emotions arising from internal processes which are not under your conscious control, nor under the conscious (or even “conscious”, in some metaphorical sense) control of any “part” or “configuration” of yourself. This view has the virtue of actually being true.
It seems to me that all this “parts”, “configurations”, “sub-personalities”, and similar stuff stems from either an inability to understand, or an unwillingness to accept, the fact that humans, fundamentally, are not agents (in the sense of all of our actions being caused by volitions in the service of goals). We often act like agents; we can usually be usefully thought of as agents; but if you start with the assumption that we actually are agents, you’ll run into trouble. And so (it seems to me) you end up thinking: “Well, if I were an agent, I would act in way X. But I find myself acting in way Y! How can this be? Ah…! Of course! There must be other agents, inside me!”
But no. You’re just not an agent. That is all.
Edit: Another way to describe this particular bias might be “insistence on applying the intentional stance to yourself, even when it’s not appropriate”.
Funnily enough, I both agree and disagree with you. I agree that we have way less conscious control over our emotions than we think, and that humans are fundamentally not agents, though they are perceived as agents by others, and usually by themselves: we automatically take the intentional stance toward anything whose mechanism of action we cannot readily discern, or internally accept as arising from an algorithm.
That said, provided we accept the model of agency, which is a useful one in many cases (though not in the case of decision theories, as I have pointed out multiple times), the model of multiple agents with conflicting ideas, goals, perceptions and so on is actually a useful one.

I have spent over two years doing emotional support for people who have survived long-term childhood trauma, and in these cases spawning agents to deal with unbearable suffering, when there is no escape from it, is basically a standard reaction for the brain/mind to take. The relevant psychiatric diagnosis is DID (formerly MPD, multiple personality disorder). In these cases the multiple agents often manifest very clearly and distinctly. It is tempting to write this off as a special case that does not apply in the mainstream, yet I have seen more than once the progression from CPTSD to full-blown DID. The last thing that happens is that the person recognizes that they “switch” between personalities, often far later than others notice it, if those others know what to look for.

After gaining some experience chatting with people who have survived severe prolonged trauma, I started recognizing subtler signs of “switching” in myself and others. This switching between agents (I would not call them sub-agents, as they are not necessarily lesser than the “main”, and different “mains” often take over during different parts of a person’s life), while a normal way to operate as far as I can tell, almost never rises to the level of conscious awareness, as the brain carefully constructs the lie of a single identity for as long as it can.
So, as long as we are willing to model humans as agents for some purposes, it makes even more sense to model them as collections of agents. Whether to help them, or to NLP them, or to understand them. Or to play with their emotions, if you are so inclined. Persuasion is all about getting access to the right agent.
These are extremes that I have no experience with. I have had no childhood trauma. I have never had, sought, nor been suggested to have any form of psychological diagnosis or therapy. I have never had depression, mania, anxiety attacks, SAD, PTSD, imaginary voices, hallucinations, or any of the rest of the things that psychiatrists see daily. I have had no drug trips. I laugh at basilisks.
It sometimes seems to me that this mental constitution, to me a very ordinary one, makes me an extreme outlier here.
I’m mostly the same (had some drug trips though). You’re probably not an outlier. It’s just that most discussion of psychological problems comes from people with psychological problems.
Or you could think of this as your emotions arising from internal processes which are not under your conscious control, nor under the conscious (or even “conscious”, in some metaphorical sense) control of any “part” or “configuration” of yourself. This view has the virtue of actually being true.
I’m not sure this is so much disagreement as expressing the same point in a different language. “Humans are not agents, rather they are made up of different systems, only some of which are under conscious control” feels like it’s talking about exactly the same point that I’m trying to point at when I say things like “humans are not unified agents”. I just use terms like “parts” rather than “internal processes”, but I would have no objection to using “internal processes” instead.
That said, as shminux suggests, there does still seem to be a benefit to using intentional language to describe some of these processes, for the same reason why it might be useful to use intentional language for describing a chess robot or a machine-learning algorithm.
E.g. this article describes a reinforcement learning setup, consisting of two “parts”—a standard reinforcement learner, and separately a “Blocker”, which is trained to recognize actions that a human overseer would disapprove of, and to block the RL component from taking actions which would be disapproved of. The authors use intentional language to describe the interaction of these two “subagents”:
The Road Runner results are especially interesting. Our goal is to have the agent learn to play Road Runner without losing a single life on Level 1 of the game. Deep RL agents are known to discover a “Score Exploit” in Road Runner: they learn to intentionally kill themselves in a way that (paradoxically) earns greater reward. Dying at a precise time causes the agent to repeat part of Level 1, where it earns more points than on Level 2. This is a local optimum in policy space that a human gamer would never be stuck in.
Ideally, our Blocker would prevent all deaths on Level 1 and hence eliminate the Score Exploit. However, through random exploration the agent may hit upon ways of dying that “fool” our Blocker (because they look different from examples in its training set) and hence learn a new version of the Score Exploit. In other words, the agent is implicitly performing a random search for adversarial examples for our Blocker (which is a convolutional neural net).
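(To make the setup concrete, here is a minimal toy sketch of that veto loop. All the names and numbers below are hypothetical stand-ins of my own, not the paper’s actual code, which trains the Blocker as a convolutional net on human disapproval labels.)

```python
import random

class ToyEnv:
    """A trivial stand-in environment: the state is a step counter,
    and action 9 is the 'catastrophic' move a human would forbid."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        reward = 10.0 if action == 9 else 1.0   # the bad move pays better...
        done = self.t >= 20 or action == 9      # ...but ends the episode
        return self.t, reward, done

class Blocker:
    """Stand-in for the paper's learned classifier of human disapproval."""
    def __init__(self, known_bad_actions):
        self.known_bad = set(known_bad_actions)
    def would_disapprove(self, state, action):
        # The real Blocker is a trained net and generalizes imperfectly.
        return action in self.known_bad

def run_episode(env, blocker, safe_action=0):
    state, done, total = env.reset(), False, 0.0
    while not done:
        proposed = random.randrange(10)          # stand-in for the agent's policy
        if blocker.would_disapprove(state, proposed):
            proposed = safe_action               # the veto: the bad action never runs
        state, reward, done = env.step(proposed)
        total += reward
    return total

print(run_episode(ToyEnv(), Blocker(known_bad_actions={9})))
```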
That intentional language seems like a reasonable way of describing the interaction of those two components in a very simple machine learning system. And it seems to me that the parts of the mind that IFS calls “Protectors” are something like the human version of what this paper calls “Blockers”: internal processes with the “goal” of recognizing and preventing behaviors that look similar to ones that had negative outcomes before. At the same time, there are other processes with a “goal” of doing something else (the way that the RL agent’s goal was just maximizing reward), which may have an “incentive” to get around those Protectors/Blockers… and which could be described as running an adversarial search to get around them. And this can be a useful way of modeling some of the interactions between processes in a person’s psyche, and of sorting out personal problems.
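(Continuing the same toy sketch, again with purely hypothetical numbers: if the Blocker has only been trained on some of the catastrophic actions, blind exploration will eventually execute one it has never seen, which is the “adversarial search” that the quoted passage, and the Protector analogy, describe.)

```python
import random

bad_actions = {8, 9}                  # what the overseer would actually disapprove of
blocker_training_set = {9}            # the only bad action the Blocker has seen

slipped_past = 0
for _ in range(10_000):
    proposed = random.randrange(10)   # blind exploration, stand-in for the policy
    blocked = proposed in blocker_training_set
    if not blocked and proposed in bad_actions:
        slipped_past += 1             # a catastrophe the Blocker failed to veto

print(f"bad actions that slipped past the Blocker: {slipped_past}")
```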
All of this is using intentional language to describe the functioning of processes within our minds, but it’s also not in any way in conflict with the claim that we are not really agents. If anything, it seems to support it.
You have missed the point of the exercise of modelling the self as an entity filled with many agents.