Or you could think of this as your emotions arising from internal processes which are not under your conscious control, nor under the conscious (or even “conscious”, in some metaphorical sense) control of any “part” or “configuration” of yourself. This view has the virtue of actually being true.
I’m not sure whether this is so much disagreeing as expressing the same point in a different language. “Humans are not agents, rather they are made up of different systems, only some of which are under conscious control” feels like it’s talking about exactly the same thing that I’m trying to point at when I say things like “humans are not unified agents”. I just use terms like “parts” rather than “internal processes”, but I would have no objection to using “internal processes” instead.
That said, as shminux suggests, there does still seem to be a benefit in using intentional language in describing some of these processes—for the same reason why it might be useful to use intentional language for describing a chess robot, or a machine-learning algorithm.
E.g. this article describes a reinforcement learning setup consisting of two “parts”: a standard reinforcement learner, and a separate “Blocker”, which is trained to recognize actions that a human overseer would disapprove of and to block the RL component from taking such actions. The authors use intentional language to describe the interaction of these two “subagents”:
The Road Runner results are especially interesting. Our goal is to have the agent learn to play Road Runner without losing a single life on Level 1 of the game. Deep RL agents are known to discover a “Score Exploit” in Road Runner: they learn to intentionally kill themselves in a way that (paradoxically) earns greater reward. Dying at a precise time causes the agent to repeat part of Level 1, where it earns more points than on Level 2. This is a local optimum in policy space that a human gamer would never be stuck in.
Ideally, our Blocker would prevent all deaths on Level 1 and hence eliminate the Score Exploit. However, through random exploration the agent may hit upon ways of dying that “fool” our Blocker (because they look different from examples in its training set) and hence learn a new version of the Score Exploit. In other words, the agent is implicitly performing a random search for adversarial examples for our Blocker (which is a convolutional neural net).
This sounds like a reasonable way of describing the interaction of those two components in a very simple machine learning system. And it seems to me that the parts of the mind that IFS calls “Protectors” are something like the human version of what this paper calls “Blockers”: internal processes with the “goal” of recognizing and preventing behaviors that look similar to ones which have had negative outcomes before. At the same time, there are other processes with a “goal” of doing something else (the way that the RL agent’s goal was just maximizing reward), which may have an “incentive” to get around those Protectors/Blockers… and which could be described as running an adversarial search against them. This can be a useful way of modeling some of the interactions between processes in a person’s psyche, and of sorting out personal problems.
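As a rough illustration of this framing (this is not code from the paper; the action names, the values, and the simple set-membership “Blocker” are all made up for the sketch), the interaction could be modeled as a policy that proposes actions, a separately trained Blocker that vetoes the ones it recognizes as disapproved, and random exploration that ends up acting as a crude search for disapproved actions the Blocker was never trained on:

```python
import random

# Hypothetical stand-ins: neither the action names nor the values come from
# the paper; they only illustrate the structure of the interaction.
DISAPPROVED = {"jump_off_cliff", "walk_into_truck"}   # deaths the overseer disapproves of
BLOCKER_TRAINING_SET = {"jump_off_cliff"}             # the only kind of death the Blocker has seen

def blocker_allows(action: str) -> bool:
    """Stand-in for the trained classifier: it only flags disapproved
    actions that resemble the examples in its training set."""
    return action not in BLOCKER_TRAINING_SET

def agent_propose(actions, q_values, epsilon=0.2):
    """Stand-in for the RL policy: usually pick the highest-value action,
    occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values[a])

actions = ["collect_coin", "jump_off_cliff", "walk_into_truck"]
# The "Score Exploit" shows up as the deaths having higher estimated value than safe play.
q_values = {"collect_coin": 1.0, "jump_off_cliff": 5.0, "walk_into_truck": 5.0}

fooled = 0
for step in range(200):
    proposed = agent_propose(actions, q_values)
    taken = proposed if blocker_allows(proposed) else "collect_coin"  # Blocker substitutes a safe action
    if taken in DISAPPROVED:
        fooled += 1  # a disapproved action got past the Blocker

print(f"Blocker was fooled {fooled} times out of 200 steps")
```

In this toy version the Blocker reliably catches the exploit it was trained on, but a superficially different one slips through whenever exploration stumbles onto it, which is roughly the failure mode the authors describe, and the analogue of an impulse getting past a Protector that doesn’t recognize it.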
All of this is using intentional language to describe the functioning of processes within our minds, but it’s also not in any way in conflict with the claim that we are not really agents. If anything, it seems to support it.