Sure, I suppose I can answer this (though I’m not sure how satisfying my answers will be).
First: yes, I have experienced what you described as “procrastination” (and this was indeed the sort of experience I had in mind when I answered Ixakas’s question).
Why is it that, even though you consciously believe that working is the correct thing to be doing, and would verbally endorse such a sentiment if asked, you nonetheless do not do the thing you think is correct to do? This is not merely “irrational”; it seems to defy the very concept of agency—you are unable to act on your own will to act, which seems to undercut the very notion that you choose to do things at all. What does it feel like when this strange phenomenon occurs, when your agency seems to disappear for no explicable reason at all?
Well… I don’t think this is all that strange, actually. You say this undercuts the very notion that I choose to do things at all. Well… yes? The notion that I choose to do things at all is, in fact, an abstraction over mental processes that do not, in fact, constitute any sort of irreducible “free will”, and sometimes this abstraction leaks. Is this surprising? It shouldn’t be, should it? (On Less Wrong, of all places!) (And this is without bringing in, e.g., shminux’s views on “free will”, or similar.)
Likewise, you say that when “this strange phenomenon” occurs, my agency seems (!) to disappear “for no explicable reason at all”. But what is so inexplicable about it? Sometimes conscious volition is more effective, and sometimes it is less effective. This doesn’t, actually, seem unusual, surprising, or out of the ordinary in the slightest. Isn’t this just how humans work?
Furthermore, the simple fact is that I sometimes (often!) have multiple desires, which conflict with each other. But… doesn’t everyone? Isn’t this a perfectly ordinary, basic, universally-understood fact about being human? This seems like an unsatisfying answer to your question, but I don’t quite know what else to say; it really does appear just this basic and just this obvious! I hardly think there is any need to invent a theory of multiple selves, or what have you, to explain this…
I mean… are you working on the basis of an assumption that an “agent” can only have one desire? That seems to pretty clearly not describe humans! Or do you perhaps mean that it is possible to decide that you will act on one desire and not another, and—unless interfered with, somehow (perhaps by some opposing internal sub-agents), thereby, in virtue of that conscious decision, to cause yourself to do that act? Well, once again all I can say is that this is (in my experience) simply not how humans work. Again I see no need to posit multiple selves in order to explain this.
Then there is the (perhaps awkward to point out, but nonetheless clearly relevant) fact that even if you say (even to yourself!) that you’ve decided to do work, or that you think you ought to do work, or that you wish to do work, in fact you may currently have a stronger desire to do something else. You could simply admit this to yourself—and many people do! It seems to me that many (perfectly ordinary) people have no trouble saying: “I should be working right now, but I don’t really want to…”. Well, what is wrong with this account? Yes, you should (in some sense) be working. But do you want to be? No, not really. Do you want to want to be working? Perhaps! Do you wish you wanted to work? Perhaps! But do you actually want to work? Sadly, no…
So… take your pick. I just gave three accounts of procrastination—all of which, in practice, form some part of my experience of procrastinating (with the balance between them varying from one instance of procrastination to another, based on various and sundry factors). I can tell you this: I have never experienced anything resembling a sense of there being some “sub-agent”, “sub-self”, “part of my mind”, or any such internal division, which had desires separate and different from “my own” desires. All my desires and preferences are my own. That those desires and preferences are occasionally in conflict with one another, does not at all undermine that sense of a unitary self.
I mean… are you working on the basis of an assumption that an “agent” can only have one desire? That seems to pretty clearly not describe humans! Or do you perhaps mean that it is possible to decide that you will act on one desire and not another, and—unless interfered with, somehow (perhaps by some opposing internal sub-agents), thereby, in virtue of that conscious decision, to cause yourself to do that act? Well, once again all I can say is that this is (in my experience) simply not how humans work. Again I see no need to posit multiple selves in order to explain this. [...] That those desires and preferences are occasionally in conflict with one another, does not at all undermine that sense of a unitary self.
I feel like this is conflating two different senses of “mysterious”:
1. How common this is among humans. It indeed is how humans work, so in that sense it’s not particularly mysterious.
2. Whether it’s what the assumption of a unitary self would predict. If the assumption of a unitary self wouldn’t predict it, but humans nonetheless act that way, then it’s mysterious if we are acting on the assumption of humans having unitary selves.
So then the question is “what would the assumption of a unitary self predict”. That requires defining what we mean by a unitary self. I’m actually not certain what exactly people have in mind when they say that humans are unified selves, but my guess is that it comes from something like Dennett’s notion of the Self as a Center of Narrative Gravity. We consider ourselves to be a single agent because that’s what the narrative-making machinery in our heads usually takes as an axiom, so our sense of self is that of being one. Now if our sense of self is a post-hoc interpretation of our actions, then that doesn’t seem to predict much in particular (at least in the context of the procrastination thing), so this definition of “a sense of unitary self”, at least, is not in conflict with what we observe. (I don’t know whether this is the thing that you have in mind, though.)
Under this explanation, it seems like there are differences in how people’s narrative-making machinery writes its stories. In particular, there’s a tendency for people to take aspects of themselves that they don’t like and label them as “not me”, since they don’t want to admit to having those aspects. If someone does this kind of thing, then they may be more likely to end up with a narrative along the lines of “when I procrastinate, it’s as if I want to do one thing but another part of me resists”. I think there are also neurological differences that may produce a less unitary-seeming story: alien hand syndrome would be an extreme case, but I suspect that even people who are mostly mentally healthy may have neurological properties that incline their narratives to be more “part-like”.
In any case, if someone has a “part-like” narrative, where their narrative is in terms of different parts having different desires, then it may be hard for them to imagine a narrative where someone had conflicting desires that all emerged from a single agent—and vice versa. I guess that might be the source of the mutual incomprehension here?
On the other hand, when I say that “humans are not unitary selves”, I’m talking on a different level of description. (So if one holds that we’re unified selves in the sense that some of us have a narrative of being one, then I am not actually disagreeing when I say that we are not unified agents in my sense.) My own thinking goes roughly along the lines of that outlined in Subagents are Not a Metaphor:
Here are the parts composing my technical definition of an agent:
Values
This could be anything from literally a utility function to highly framing-dependent. Degenerate case: embedded in lookup table from world model to actions.
World-Model
Degenerate case: stateless world model consisting of just sense inputs.
Search Process
Causal decision theory is a search process.
“From a fixed list of actions, pick the most positively reinforced” is another. Degenerate case: lookup table from world model to actions.
Note: this says a thermostat is an agent. Not figuratively an agent. Literally technically an agent. Feature not bug.
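To make that definition a bit more concrete, here is a minimal sketch of my own (not from the linked post; the Thermostat class and its methods are purely illustrative) showing how even a thermostat fits the values / world-model / search-process breakdown:

```python
class Thermostat:
    """A thermostat as a literal, technical agent (illustrative sketch)."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp   # values: prefer temperatures near the target
        self.sensed_temp = None          # world-model: just the latest sense input

    def sense(self, temp: float) -> None:
        self.sensed_temp = temp          # degenerate, stateless world-model

    def act(self) -> str:
        # search process: effectively a lookup table from world-model to action
        if self.sensed_temp is None:
            return "wait"
        return "heat" if self.sensed_temp < self.target_temp else "idle"


t = Thermostat(target_temp=21.0)
t.sense(18.5)
print(t.act())  # prints "heat"
```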
I think that humans are not unitary selves, in that they are composed of subagents in this sense. More specifically, I would explain the procrastination thing as something like “different subsystems for evaluating the value of different actions are returning mutually inconsistent evaluations about which action is best, and this conflict is consciously available”.
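For instance, here is a toy sketch (again my own illustration, with made-up evaluator functions, not a claim about actual neural subsystems) of what “mutually inconsistent evaluations” could look like:

```python
actions = ["write the report", "browse the web"]

def long_term_evaluator(action: str) -> float:
    # hypothetical subsystem valuing endorsed plans / long-term goals
    return {"write the report": 0.9, "browse the web": 0.1}[action]

def short_term_evaluator(action: str) -> float:
    # hypothetical subsystem valuing immediate comfort / novelty
    return {"write the report": 0.2, "browse the web": 0.8}[action]

for evaluator in (long_term_evaluator, short_term_evaluator):
    best = max(actions, key=evaluator)
    print(f"{evaluator.__name__} prefers: {best}")

# The two "best" answers disagree; on this model, procrastination is roughly
# what it feels like when that disagreement is consciously available.
```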
Something like IFS would be a tool for interfacing with these subsystems. Note that IFS does also make a much stronger claim, in that there are subsystems which are something like subpersonalities, with their own independent memories and opinions. Believing in that doesn’t seem to be necessary for making the IFS techniques work, though: I started out thinking “no, my mind totally doesn’t work like that, it describes nothing in my experience”. That’s why I stayed away from IFS for a long time, as its narrative didn’t fit mine and felt like nonsense. But then when I finally ended up trying it, the techniques worked despite me not believing in the underlying model. Now I’m less sure of whether it’s just a fake framework that happens to mesh well with our native narrative-making machinery and thus somehow makes the process work better, or whether it’s pointing to something real.