I mean… are you working on the basis of an assumption that an “agent” can only have one desire? That seems to pretty clearly not describe humans! Or do you perhaps mean that it is possible to decide that you will act on one desire and not another, and—unless interfered with, somehow (perhaps by some opposing internal sub-agents), thereby, in virtue of that conscious decision, to cause yourself to do that act? Well, once again all I can say is that this is (in my experience) simply not how humans work. Again I see no need to posit multiple selves in order to explain this. [...] That those desires and preferences are occasionally in conflict with one another, does not at all undermine that sense of a unitary self.
I feel like this is conflating two different senses of “mysterious”:
1. How common this is among humans. It indeed is how humans work, so in that sense it’s not particularly mysterious.
2. Whether it’s what the assumption of a unitary self would predict. If the assumption of a unitary self wouldn’t predict it, but humans nonetheless act that way, then it’s mysterious if we are acting on the assumption of humans having unitary selves.
So then the question is “what would the assumption of a unitary self predict”. That requires defining what we mean by a unitary self. I’m actually not certain what exactly people have in mind when they say that humans are unified selves, but my guess is that it comes from something like Dennett’s notion of the Self as a Center of Narrative Gravity. We consider ourselves to be a single agent because that’s what the narrative-making machinery in our heads usually takes as an axiom, so our sense of self is that of being one. Now if our sense of self is a post-hoc interpretation of our actions, then that doesn’t seem to predict much in particular (at least in the context of the procrastination thing) so this definition of “a sense of unitary self”, at least, is not in conflict with what we observe. (I don’t know whether this is the thing that you have in mind, though.)
Under this explanation, it seems like there are differences in how people’s narrative-making machinery writes its stories. In particular, there’s a tendency for people to take aspects of themselves that they don’t like and label them as “not me”, since they don’t want to admit to having those aspects. If someone does this kind of thing, then they may be more likely to end up with a narrative along the lines of “when I procrastinate, it’s as if I want to do one thing but another part of me resists”. I think there are also neurological differences that may produce a less unitary-seeming story: alien hand syndrome would be an extreme case, but I suspect that even people who are mostly mentally healthy may have neurological properties that incline their narrative toward being more “part-like”.
In any case, if someone has a “part-like” narrative, where their narrative is in terms of different parts having different desires, then it may be hard for them to imagine a narrative where someone had conflicting desires that all emerged from a single agent—and vice versa. I guess that might be the source of the mutual incomprehension here?
On the other hand, when I say that “humans are not unitary selves”, I’m talking on a different level of description. (So if one holds that we’re unified selves in the sense that some of us have a narrative of being one, then I am not actually disagreeing when I say that we are not unified agents in my sense.) My own thinking goes roughly along the lines of what’s outlined in Subagents are Not a Metaphor:
Here are the parts composing my technical definition of an agent:
Values
This could be anything from literally a utility function to highly framing-dependent. Degenerate case: embedded in lookup table from world model to actions.
World-Model
Degenerate case: stateless world model consisting of just sense inputs.
Search Process
Causal decision theory is a search process.
“From a fixed list of actions, pick the most positively reinforced” is another. Degenerate case: lookup table from world model to actions.
Note: this says a thermostat is an agent. Not figuratively an agent. Literally technically an agent. Feature not bug.
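To make the “a thermostat is literally an agent” claim concrete, here’s a minimal sketch in Python (all names hypothetical, numbers invented for illustration): the thermostat has values (a preferred setpoint), a degenerate stateless world model (just its sense input), and a degenerate search process (pick the best-valued action from a fixed list).

```python
# A thermostat as a (degenerate) agent under this technical definition:
# - world model: just the current sense input (measured temperature)
# - values: prefer temperatures near a setpoint
# - search process: from a fixed list of actions, pick the one whose
#   (crudely predicted) outcome is valued most

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint  # "values": the preferred temperature

    def world_model(self, sensed_temp):
        # Degenerate, stateless world model: the sense input itself.
        return sensed_temp

    def value(self, temp):
        # Higher is better; peaks at the setpoint.
        return -abs(temp - self.setpoint)

    def act(self, sensed_temp):
        # Degenerate search process: evaluate each action's predicted
        # outcome and return the best one (a glorified lookup table).
        temp = self.world_model(sensed_temp)
        predicted = {"heat": temp + 1, "cool": temp - 1, "off": temp}
        return max(predicted, key=lambda a: self.value(predicted[a]))

t = Thermostat(setpoint=21)
print(t.act(18))  # → heat
print(t.act(25))  # → cool
```

All three components are degenerate here, which is the point: the definition bottoms out in something this simple, and richer agents differ in degree rather than in kind.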
I think that humans are not unitary selves, in that they are composed of subagents in this sense. More specifically, I would explain the procrastination thing as something like “different subsystems for evaluating the value of different actions are returning mutually inconsistent evaluations about which action is the best, and this conflict is consciously available”.
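A toy sketch of that explanation (subsystem names and payoff numbers are invented for illustration, not a claim about actual neural architecture): two valuation subsystems score the same candidate actions, their rankings disagree, and it’s the disagreement itself that surfaces.

```python
# Toy model: two valuation subsystems score the same actions
# inconsistently; the conflict between their top picks is what
# becomes "consciously available" as the procrastination experience.

def long_term_planner(action):
    # Hypothetical subsystem valuing long-term payoff.
    return {"write_report": 10, "browse_web": 1}[action]

def immediate_reward(action):
    # Hypothetical subsystem valuing immediate reinforcement.
    return {"write_report": 2, "browse_web": 8}[action]

actions = ["write_report", "browse_web"]
best_by_planner = max(actions, key=long_term_planner)
best_by_reward = max(actions, key=immediate_reward)

# The subsystems return mutually inconsistent "best actions":
conflict = best_by_planner != best_by_reward
print(best_by_planner, best_by_reward, conflict)
# → write_report browse_web True
```

Nothing in this sketch requires a second self with its own memories and opinions; it only requires multiple evaluators whose outputs can disagree.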
Something like IFS would be a tool for interfacing with these subsystems. Note that IFS does also make a much stronger claim, in that there are subsystems which are something like subpersonalities, with their own independent memories and opinions. Believing in that doesn’t seem to be necessary for making the IFS techniques work, though: I started out thinking “no, my mind totally doesn’t work like that, it describes nothing in my experience”. That’s why I stayed away from IFS for a long time, as its narrative didn’t fit mine and felt like nonsense. But then when I finally ended up trying it, the techniques worked despite me not believing in the underlying model. Now I’m less sure of whether it’s just a fake framework that happens to mesh well with our native narrative-making machinery and thus somehow makes the process work better, or whether it’s pointing to something real.