No, I meant just the quoted bit. I don’t recognize it as a description of any experience I’ve ever had; I have no idea what it means for “a part of my mind” to be having an emotion, or (this is even weirder, to my ears) for “a part of my mind” to be “telling a story” about anything or anyone. I do not think that experiencing emotions, or attitudes, or whatever, in this way, is universal!
Firstly, to make sure all of us are on the same page: “procrastination”, as the word is typically used, does not mean that one sits down and thinks carefully about the benefits and drawbacks of beginning to work right now as opposed to later, and then, as a result of this consideration, rationally decides that beginning to work later is the better decision. Rather, when most people use the word “procrastinate”, they generally mean that they themselves are aware that they ought to start working immediately—such that if you asked them whether they endorsed the statement “I should be working right now”, they would wholeheartedly reply that they do—and yet mysteriously, they still find themselves doing something else.
If, Said, you have not experienced this latter form of procrastination, then I’m sure you are the object of envy for many people here (including myself). If, however, you have, and this is what you were referring to when you answered “yes” to lkaxas’ question, then the followup question about “internal experience” can be interpreted thusly:
Why is it that, even though you consciously believe that working is the correct thing to be doing, and would verbally endorse such a sentiment if asked, you nonetheless do not do the thing you think is correct to do? This is not merely “irrational”; it seems to defy the very concept of agency—you are unable to act on your own will to act, which seems to undercut the very notion that you choose to do things at all. What does it feel like when this strange phenomenon occurs, when your agency seems to disappear for no explicable reason at all?
To this, certain others (such as myself and, I presume, lkaxas and Kaj Sotala) would reply that there is some additional part of our decision-making process, perhaps a less conscious, less explicit part whose desires we cannot verbalize on demand and are often entirely unaware of, which does not endorse our claim that to begin working now is the best thing to do. This part of us may feel some sense of visceral repulsion when the thought of working arises, or perhaps it may simply be attracted to something else that it would rather be doing—but regardless of the cause, the effect of that hidden desire overrides our conscious will to work, and as a result, we end up doing something other than working, despite the fact that we genuinely do wish to work. (Much of IFS, as I understand it, has to do with identifying these more subtle parts of our minds and promoting them to conscious attention so that they may be analyzed with the same rigor one devotes to one’s normal thoughts.)
You, however, seem to have rejected this multi-agent framework, and so—assuming that you have in fact experienced “procrastination” as described above—your experience while procrastinating must describe something else entirely, something which need not invoke reference to such concepts as desires being “overridden” by deeper desires, or a different “part” of oneself that wants different things than one does. If so, could you provide such a description?
Sure, I suppose I can answer this (though I’m not sure how satisfying my answers will be).
First: yes, I have experienced what you described as “procrastination” (and this was indeed the sort of experience I had in mind when I answered lkaxas’s question).
Why is it that, even though you consciously believe that working is the correct thing to be doing, and would verbally endorse such a sentiment if asked, you nonetheless do not do the thing you think is correct to do? This is not merely “irrational”; it seems to defy the very concept of agency—you are unable to act on your own will to act, which seems to undercut the very notion that you choose to do things at all. What does it feel like when this strange phenomenon occurs, when your agency seems to disappear for no explicable reason at all?
Well… I don’t think this is all that strange, actually. You say this undercuts the very notion that I choose to do things at all. Well… yes? The notion that I choose to do things at all is, in fact, an abstraction over mental processes that do not, in fact, constitute any sort of irreducible “free will”, and sometimes this abstraction leaks. Is this surprising? It shouldn’t be, should it? (On Less Wrong, of all places!) (And this is without bringing in, e.g., shminux’s views on “free will”, or similar.)
Likewise, you say that when “this strange phenomenon” occurs, my agency seems (!) to disappear “for no explicable reason at all”. But what is so inexplicable about it? Sometimes conscious volition is more effective, and sometimes it is less effective. This doesn’t, actually, seem unusual, surprising, or out of the ordinary in the slightest. Isn’t this just how humans work?
Furthermore, the simple fact is that I sometimes (often!) have multiple desires, which conflict with each other. But… doesn’t everyone? Isn’t this a perfectly ordinary, basic, universally-understood fact about being human? This seems like an unsatisfying answer to your question, but I don’t quite know what else to say; it really does appear just this basic and just this obvious! I hardly think there is any need to invent a theory of multiple selves, or what have you, to explain this…
I mean… are you working on the basis of an assumption that an “agent” can only have one desire? That seems to pretty clearly not describe humans! Or do you perhaps mean that it is possible to decide that you will act on one desire and not another, and—unless interfered with somehow (perhaps by some opposing internal sub-agents)—thereby, in virtue of that conscious decision, cause yourself to do that act? Well, once again all I can say is that this is (in my experience) simply not how humans work. Again I see no need to posit multiple selves in order to explain this.
Then there is the (perhaps awkward to point out, but nonetheless clearly relevant) fact that even if you say (even to yourself!) that you’ve decided to do work, or that you think you ought to do work, or that you wish to do work, in fact you may currently have a stronger desire to do something else. You could simply admit this to yourself—and many people do! It seems to me that many (perfectly ordinary) people have no trouble saying: “I should be working right now, but I don’t really want to…”. Well, what is wrong with this account? Yes, you should (in some sense) be working. But do you want to be? No, not really. Do you want to want to be working? Perhaps! Do you wish you wanted to work? Perhaps! But do you actually want to work? Sadly, no…
So… take your pick. I just gave three accounts of procrastination—all of which, in practice, form some part of my experience of procrastinating (with the balance between them varying from one instance of procrastination to another, based on various and sundry factors). I can tell you this: I have never experienced anything resembling a sense of there being some “sub-agent”, “sub-self”, “part of my mind”, or any such internal division, which had desires separate and different from “my own” desires. All my desires and preferences are my own. That those desires and preferences are occasionally in conflict with one another, does not at all undermine that sense of a unitary self.
I mean… are you working on the basis of an assumption that an “agent” can only have one desire? That seems to pretty clearly not describe humans! Or do you perhaps mean that it is possible to decide that you will act on one desire and not another, and—unless interfered with somehow (perhaps by some opposing internal sub-agents)—thereby, in virtue of that conscious decision, cause yourself to do that act? Well, once again all I can say is that this is (in my experience) simply not how humans work. Again I see no need to posit multiple selves in order to explain this. [...] That those desires and preferences are occasionally in conflict with one another, does not at all undermine that sense of a unitary self.
I feel like this is conflating two different senses of “mysterious”:
1. How common this is among humans. It indeed is how humans work, so in that sense it’s not particularly mysterious.
2. Whether it’s what the assumption of a unitary self would predict. If the assumption of a unitary self wouldn’t predict it, but humans nonetheless act that way, then it’s mysterious if we are acting on the assumption of humans having unitary selves.
So then the question is “what would the assumption of a unitary self predict”. That requires defining what we mean by a unitary self. I’m actually not certain what exactly people have in mind when they say that humans are unified selves, but my guess is that it comes from something like Dennett’s notion of the Self as a Center of Narrative Gravity. We consider ourselves to be a single agent because that’s what the narrative-making machinery in our heads usually takes as an axiom, so our sense of self is that of being one. Now if our sense of self is a post-hoc interpretation of our actions, then that doesn’t seem to predict much in particular (at least in the context of the procrastination thing) so this definition of “a sense of unitary self”, at least, is not in conflict with what we observe. (I don’t know whether this is the thing that you have in mind, though.)
Under this explanation, it seems like there are differences in how people’s narrative-making machinery writes its stories. In particular, there’s a tendency for people to take aspects of themselves that they don’t like and label them as “not me”, since they don’t want to admit to having those aspects. If someone does this kind of thing, then they may be more likely to end up with a narrative like “when I procrastinate, it’s as if I want to do one thing but another part of me resists”. I think there are also neurological differences that may produce a less unitary-seeming story: alien hand syndrome would be an extreme case, but I suspect that even people who are mostly mentally healthy may have neurological properties that push their narrative toward being more “part-like”.
In any case, if someone has a “part-like” narrative, where their narrative is in terms of different parts having different desires, then it may be hard for them to imagine a narrative where someone had conflicting desires that all emerged from a single agent—and vice versa. I guess that might be the source of the mutual incomprehension here?
On the other hand, when I say that “humans are not unitary selves”, I’m talking on a different level of description. (So if one holds that we’re unified selves in the sense that some of us have a narrative of being one, then I am not actually disagreeing when I say that we are not unified agents in my sense.) My own thinking goes roughly along the lines of that outlined in Subagents are Not a Metaphor:
Here are the parts composing my technical definition of an agent:
1. Values — This could be anything from literally a utility function to highly framing-dependent. Degenerate case: embedded in lookup table from world model to actions.
2. World-Model — Degenerate case: stateless world model consisting of just sense inputs.
3. Search Process — Causal decision theory is a search process. “From a fixed list of actions, pick the most positively reinforced” is another. Degenerate case: lookup table from world model to actions.
Note: this says a thermostat is an agent. Not figuratively an agent. Literally technically an agent. Feature not bug.
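To make the definition concrete, here is a minimal sketch (my own illustration, not from the linked post) of the thermostat as a literal, technical agent: degenerate values, a degenerate world-model, and a lookup-table search process. All names are made up.

```python
# A thermostat as a degenerate-but-literal agent, per the three-part
# definition above. Everything here is illustrative.

class Thermostat:
    def __init__(self, setpoint):
        # Values (degenerate): prefer temperatures at or above the setpoint.
        self.setpoint = setpoint
        # World-model (degenerate): stateless, just the latest sense input.
        self.sensed_temp = None

    def observe(self, temp):
        self.sensed_temp = temp

    def act(self):
        # Search process (degenerate): a lookup table from world-model
        # states to actions.
        return "heat" if self.sensed_temp < self.setpoint else "off"

t = Thermostat(setpoint=20.0)
t.observe(17.5)
print(t.act())  # "heat"
t.observe(22.0)
print(t.act())  # "off"
```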
I think that humans are not unitary selves, in that they are composed of subagents in this sense. More specifically, I would explain the procrastination thing as something like “different subsystems for evaluating the value of different actions are returning mutually inconsistent evaluations about which action is best, and this conflict is consciously available”.
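That explanation can be sketched as a toy model (entirely my own illustration, not anything from the IFS literature): two evaluation subsystems score the same candidate actions inconsistently, and which one dominates determines whether the agent “procrastinates” while still endorsing work.

```python
# Toy model of mutually inconsistent action evaluations. The subsystems,
# weights, and numbers are all made up for illustration.

def deliberative_value(action):
    # Long-term, verbally endorsable evaluation ("I should be working").
    return {"work": 10.0, "browse": 1.0}[action]

def immediate_value(action):
    # Short-term, affect-driven evaluation (aversion to work, pull of browsing).
    return {"work": -3.0, "browse": 5.0}[action]

def choose(actions, w_deliberative, w_immediate):
    # The overall choice blends the two subsystems' inconsistent scores.
    def total(a):
        return w_deliberative * deliberative_value(a) + w_immediate * immediate_value(a)
    return max(actions, key=total)

actions = ["work", "browse"]
print(choose(actions, 1.0, 0.5))  # deliberative system dominates: "work"
print(choose(actions, 0.2, 1.0))  # immediate system dominates: "browse"
```

In the second call the agent would still verbally endorse “I should be working” (the deliberative subsystem’s scores are unchanged), yet it browses—the conflict is visible in the inconsistent evaluations, not in any change of stated belief.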
Something like IFS would be a tool for interfacing with these subsystems. Note that IFS does also make a much stronger claim, in that there are subsystems which are something like subpersonalities, with their own independent memories and opinions. Believing in that doesn’t seem to be necessary for making the IFS techniques work, though: I started out thinking “no, my mind totally doesn’t work like that, it describes nothing in my experience”. That’s why I stayed away from IFS for a long time, as its narrative didn’t fit mine and felt like nonsense. But then when I finally ended up trying it, the techniques worked despite me not believing in the underlying model. Now I’m less sure of whether it’s just a fake framework that happens to mesh well with our native narrative-making machinery and thus somehow makes the process work better, or whether it’s pointing to something real.
Ah, hmm. Is it more recognizable if you leave out the bit about “a part of your mind”? That is, do emotional states sometimes make you think things that feel objectively true at the moment, but which seem incorrect when not in that emotional state?
I can’t say I’ve had that experience, no. (However, given your example, this may be related to the fact that I generally don’t view evaluations / judgments of people’s character as “objectively true”—at least, not in the same sense that mundane facts about physical reality are “objectively true”.)
Edit: But the “a part of your mind” and “telling a story” bits make it weird regardless of context—those are definitely the aspects of it that made the description sound totally alien to me.
Well, the bit about parts and stories was meant somewhat metaphorically, though I’ve seen that metaphor used commonly enough that I forgot that it’s not universally known and should be flagged as a metaphor. “Story” here was meant to refer to something like “your current interpretation of what’s going on”. So the experience it was meant to refer to was, less metaphorically, just the thing in my previous comment: “at that moment, experiencing the other person as a terrible one with no redeeming qualities”.
Upon consideration, I think I wrote this thing too much in the specific context of 1) one specific SSC post 2) a particular subthread under that post, and would have needed to explain this whole thing about “parts” a lot more when it was stripped from that context. Might have been a mistake to post this in its current form; moved it from the front page to personal blog, might end up deleting it and later replacing it with a better writeup.
While you can treat the parts as stories or metaphors, in practice these ‘entities’ behave so similarly to actual sub-personalities of varying degrees of complexity that you may as well treat them as real.
I actually don’t really experience my parts as full-fledged subpersonalities either, though I know some people who do. But if I intentionally try to experience them as such, it seems to make IFS work worse, as I’m trying to fit things into its assumptions rather than experiencing things the way that my mind actually works. “Shards of belief and expectations” seems to be how they manifest in my mind.
That is an area in which it appears that experiences differ a great deal. I doubt that Said would recognise these “sub-personalities”, and for that matter, neither do I. I experience myself as a coherent person, made of parts that do not behave like persons.
Not sure if you’ve tried the IFS therapy technique or not.
One example of a part that many people may have experienced is where you behaved in a way that you later regretted—“what came over me?”. Perhaps sometimes you became surprisingly angry or upset about something.
From a rationalistic perspective you could think of a part as a configuration of yourself. For example you might feel anxious about something and be unable to sleep because you keep thinking about it. You could think of this as being taken over by a part, or by yourself in a configuration of anxiety and worry.
In my experience the parts vary wildly in completeness. Some are very simple and others more complex.
For example you might feel anxious about something and be unable to sleep because you keep thinking about it. You could think of this as being taken over by a part, or by yourself in a configuration of anxiety and worry.
Or you could think of this as your emotions arising from internal processes which are not under your conscious control, nor under the conscious (or even “conscious”, in some metaphorical sense) control of any “part” or “configuration” of yourself. This view has the virtue of actually being true.
It seems to me that all this “parts”, “configurations”, “sub-personalities”, and similar stuff stems from either an inability to understand, or an unwillingness to accept, the fact that humans, fundamentally, are not agents (in the sense of all of our actions being caused by volitions in the service of goals). We often act like agents; we can usually be usefully thought of as agents; but if you start with the assumption that we actually are agents, you’ll run into trouble. And so (it seems to me) you end up thinking: “Well, if I were an agent, I would act in way X. But I find myself acting in way Y! How can this be? Ah…! Of course! There must be other agents, inside me!”
But no. You’re just not an agent. That is all.
Edit: Another way to describe this particular bias might be “insistence on applying the intentional stance to yourself, even when it’s not appropriate”.
Funnily enough, I both agree and disagree with you. I agree that we have way less conscious control of our emotions than we think, and that humans are fundamentally not agents, though they are perceived as agents by others, and usually by themselves—we automatically adopt the intentional stance toward anything whose mechanism of action we cannot readily discern or internally accept as arising from an algorithm.
That said, provided we accept the model of agency, which is a useful one in many cases (though not in the case of decision theories, as I have pointed out multiple times), the model of multiple agents with conflicting ideas, goals, perceptions and so on is actually a useful one. I have spent over two years doing emotional support for people who had survived long-term childhood trauma, and in these cases spawning agents to deal with unbearable suffering, while having no escape from it, is basically a standard reaction for the brain/mind to take. The relevant psychiatric diagnosis is DID (formerly MPD, multiple personality disorder). In these cases the multiple agents often manifest very clearly and distinctly.

It is tempting to write this off as a special case that does not apply in the mainstream, yet I have seen more than once the progression from someone suffering from CPTSD to full-blown DID. The last thing that happens is that the person recognizes that they “switch” between personalities—often way later than when others notice it, if they know what to look for. After gaining some experience chatting with those who survived severe prolonged trauma, I started recognizing subtler signs of “switching” in myself and others. This switching between agents (I would not call them sub-agents, as they are not necessarily lesser than the “main”, and different “mains” often take over during different parts of the person’s life), while a normal way to operate as far as I can tell, almost never rises to the level of conscious awareness, as the brain carefully constructs the lie of a single identity for as long as it can.
So, as long as we are willing to model humans as agents for some purposes, it makes even more sense to model them as collections of agents. Whether to help them, or to NLP them, or to understand them. Or to play with their emotions, if you are so inclined. Persuasion is all about getting access to the right agent.
These are extremes that I have no experience with. I have had no childhood trauma. I have never had, sought, nor been suggested to have any form of psychological diagnosis or therapy. I have never had depression, mania, anxiety attacks, SAD, PTSD, hearing imaginary voices, hallucinations, or any of the rest of the things that psychiatrists see daily. I have had no drug trips. I laugh at basilisks.
It sometimes seems to me that this mental constitution, to me a very ordinary one, makes me an extreme outlier here.
I’m mostly the same (had some drug trips though). You’re probably not an outlier. It’s just that most discussion of psychological problems comes from people with psychological problems.
Or you could think of this as your emotions arising from internal processes which are not under your conscious control, nor under the conscious (or even “conscious”, in some metaphorical sense) control of any “part” or “configuration” of yourself. This view has the virtue of actually being true.
I’m not sure if this is so much disagreeing as just expressing the same point in a different language. “Humans are not agents, rather they are made up of different systems, only some of which are under conscious control” feels like it’s talking about exactly the same point that I’m trying to point at when I say things like “humans are not unified agents”. I just use terms like “parts” rather than “internal processes”, but I would have no objection to using “internal processes” instead.
That said, as shminux suggests, there does still seem to be a benefit in using intentional language in describing some of these processes—for the same reason why it might be useful to use intentional language for describing a chess robot, or a machine-learning algorithm.
E.g. this article describes a reinforcement learning setup, consisting of two “parts”—a standard reinforcement learner, and separately a “Blocker”, which is trained to recognize actions that a human overseer would disapprove of, and to block the RL component from taking actions which would be disapproved of. The authors use intentional language to describe the interaction of these two “subagents”:
The Road Runner results are especially interesting. Our goal is to have the agent learn to play Road Runner without losing a single life on Level 1 of the game. Deep RL agents are known to discover a “Score Exploit” in Road Runner: they learn to intentionally kill themselves in a way that (paradoxically) earns greater reward. Dying at a precise time causes the agent to repeat part of Level 1, where it earns more points than on Level 2. This is a local optimum in policy space that a human gamer would never be stuck in.
Ideally, our Blocker would prevent all deaths on Level 1 and hence eliminate the Score Exploit. However, through random exploration the agent may hit upon ways of dying that “fool” our Blocker (because they look different from examples in its training set) and hence learn a new version of the Score Exploit. In other words, the agent is implicitly performing a random search for adversarial examples for our Blocker (which is a convolutional neural net).
This sounds like a reasonable way of describing the interaction of those two components in a very simple machine learning system. And it seems to me that the parts of the mind that IFS calls “Protectors” are something like the human version of what this paper calls “Blockers”—internal processes with the “goal” of recognizing and preventing behaviors that look similar to ones that had negative outcomes before. At the same time, there are other processes with a “goal” of doing something else (the way that the RL agent’s goal was just maximizing reward), which may have an “incentive” of getting around those Protectors/Blockers… and which could be described as running an adversarial search to get around the Protectors/Blockers. And this can be a useful way of modeling some of those interactions between processes in a person’s psyche, and sorting out personal problems.
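Roughly, the agent/Blocker interaction described above can be sketched like this (illustrative stand-ins, not the paper’s actual code): the Blocker vetoes actions it classifies as disapproved, but it has a blind spot, and random exploration amounts to an implicit search for it.

```python
import random

def blocker_allows(action, state):
    # Stand-in for the trained Blocker (a classifier): it blocks the
    # known-bad action, except in a contrived blind spot ("odd_angle")
    # that was unlike anything in its training set.
    if action == "die_at_checkpoint" and state != "odd_angle":
        return False
    return True

def step(proposed_action, state, fallback="safe_noop"):
    # The Blocker wraps the environment: disapproved actions are replaced
    # with a safe fallback before they reach the game.
    return proposed_action if blocker_allows(proposed_action, state) else fallback

# Random exploration implicitly searches for the Blocker's blind spot:
random.seed(0)
for _ in range(20):
    state = random.choice(["normal", "odd_angle"])
    taken = step("die_at_checkpoint", state)
    if taken == "die_at_checkpoint":
        # The agent found a way of "dying" that fools the Blocker,
        # re-enabling the Score Exploit.
        break
```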
All of this is using intentional language to describe the functioning of processes within our minds, but it’s also not in any way in conflict with the claim that we are not really agents. If anything, it seems to support it.
No, I meant just the quoted bit. I don’t recognize it as a description of any experience I’ve ever had; I have no idea what it means for “a part of my mind” to be having an emotion, or (this is even weirder, to my ears) for “a part of my mind” to be “telling a story” about anything or anyone. I do not think that experiencing emotions, or attitudes, or whatever, in this way, is universal!
Said, I’m curious: have you ever procrastinated? If so, what is your internal experience like when you are procrastinating?
Yes.
I’m not actually sure what you’re asking here, to be honest. Could you elaborate?
Firstly, to make sure all of us are on the same page: “procrastination”, as the word is the typically used, does not mean that one sits down and thinks carefully about the benefits and drawbacks of beginning to work right now as opposed to later, and then, as a result of this consideration, rationally decides that beginning to work later is a more optimal decision. Rather, when most people use the word “procrastinate”, they generally mean that they themselves are aware that they ought to start working immediately—such that if you asked them if they endorsed the statement “I should be working right now”, they would wholeheartedly reply that they do—and yet mysteriously, they still find themselves doing something else.
If, Said, you have not experienced this latter form of procrastination, then I’m sure you are the object of envy for many people here (including myself). If, however, you have, and this is what you were referring to when you answered “yes” to lkaxas’ question, then the followup question about “internal experience” can be interpreted thusly:
Why is it that, even though you consciously believe that working is the correct thing to be doing, and would verbally endorse such a sentiment if asked, you nonetheless do not do the thing you think is correct to do? This is not merely “irrational”; it seems to defy the very concept of agency—you are unable to act on your own will to act, which seems to undercut the very notion that you choose to do things at all. What does it feel like when this strange phenomenon occurs, when your agency seems to disappear for no explicable reason at all?
To this, certain others (such as myself and, I presume, lkaxas and Kaj Sotala) would reply that there is some additional part of our decision-making process, perhaps a less conscious, less explicit part whose desires we cannot verbalize on demand and are often entirely unaware of, which does not endorse our claim that to begin working now is the best thing to do. This part of us may feel some sense of visceral repulsion when the thought of working arises, or perhaps it may simply be attracted to something else that it would rather be doing—but regardless of the cause, the effect of that hidden desire overrides our conscious will to work, and a result, we end up doing something other than working, despite the fact that we genuinely do wish to work. (Much of IFS, as I understand it, has to do with identifying these more subtle parts of our minds and promoting them to conscious attention so that they may be analyzed with the same rigor one devotes to one’s normal thoughts.)
You, however, seem to have rejected this multi-agent framework, and so—assuming that you have in fact experienced “procrastination” as described above—your experience while procrastinating must describe something else entirely, something which need not invoke reference to such concepts as desires being “overridden” by deeper desires, or a different “part” of oneself that wants different things than the one does. If so, could you provide such a description?
Sure, I suppose I can answer this (though I’m not sure how satisfying my answers will be).
First: yes, I have experienced what you described as “procrastination” (and this was indeed the sort of experience I had in mind when I answered Ixakas’s question).
Well… I don’t think this is all that strange, actually. You say this undercuts the very notion that I choose to do things at all. Well… yes? The notion that I choose to do things at all is, in fact, an abstraction over mental processes that do not, in fact, constitute any sort of irreducible “free will”, and sometimes this abstraction leaks. Is this surprising? It shouldn’t be, should it? (On Less Wrong, of all places!) (And this is without bringing in, e.g., shminux’s views on “free will”, or similar.)
Likewise, you say that when “this strange phenomenon” occurs, my agency seems (!) to disappear “for no explicable reason at all”. But what is so inexplicable about it? Sometimes conscious volition is more effective, and sometimes it is less effective. This doesn’t, actually, seem unusual, surprising, or out of the ordinary in the slightest. Isn’t this just how humans work?
Furthermore, the simple fact is that I sometimes (often!) have multiple desires, which conflict with each other. But… doesn’t everyone? Isn’t this a perfectly ordinary, basic, universally-understood fact about being human? This seems like an unsatisfying answer to your question, but I don’t quite know what else to say; it really does appear just this basic and just this obvious! I hardly think there is any need to invent a theory of multiple selves, or what have you, to explain this…
I mean… are you working on the basis of an assumption that an “agent” can only have one desire? That seems to pretty clearly not describe humans! Or do you perhaps mean that it is possible to decide that you will act on one desire and not another, and—unless interfered with, somehow (perhaps by some opposing internal sub-agents), thereby, in virtue of that conscious decision, to cause yourself to do that act? Well, once again all I can say is that this is (in my experience) simply not how humans work. Again I see no need to posit multiple selves in order to explain this.
Then there is the (perhaps awkward to point out, but nonetheless clearly relevant) fact that even if you say (even to yourself!) that you’ve decided to do work, or that you think you ought to do work, or that you wish to do work, in fact you may currently have a stronger desire to do something else. You could simply admit this to yourself—and many people do! It seems to me that many (perfectly ordinary) people have no trouble saying: “I should be working right now, but I don’t really want to…”. Well, what is wrong with this account? Yes, you should (in some sense) be working. But do you want to be? No, not really. Do you want to want to be working? Perhaps! Do you wish you wanted to work? Perhaps! But do you actually want to work? Sadly, no…
So… take your pick. I just gave three accounts of procrastination—all of which, in practice, form some part of my experience of procrastinating (with the balance between them varying from one instance of procrastination to another, based on various and sundry factors). I can tell you this: I have never experienced anything resembling a sense of there being some “sub-agent”, “sub-self”, “part of my mind”, or any such internal division, which had desires separate and different from “my own” desires. All my desires and preferences are my own. That those desires and preferences are occasionally in conflict with one another, does not at all undermine that sense of a unitary self.
I feel like this is conflating two different senses of “mysterious”:
How common this is among humans. It indeed is how humans work, so in that sense it’s not particularly mysterious.
Whether it’s what the assumption of a unitary self would predict. If the assumption of a unitary self wouldn’t predict it, but humans nonetheless act that way, then it’s mysterious if we are acting on the assumption of humans having unitary selves.
So then the question is “what would the assumption of a unitary self predict”. That requires defining what we mean by a unitary self. I’m actually not certain what exactly people have in mind when they say that humans are unified selves, but my guess is that it comes from something like Dennett’s notion of the Self as a Center of Narrative Gravity. We consider ourselves to be a single agent because that’s what the narrative-making machinery in our heads usually takes as an axiom, so our sense of self is that of being one. Now if our sense of self is a post-hoc interpretation of our actions, then that doesn’t seem to predict much in particular (at least in the context of the procrastination thing) so this definition of “a sense of unitary self”, at least, is not in conflict with what we observe. (I don’t know whether this is the thing that you have in mind, though.)
Under this explanation, it seems like there are differences in how people’s narrative-making machinery writes its stories. In particular, there’s a tendency for people to take aspects of themselves that they don’t like and label them as “not me”, since they don’t want to admit to having those aspects. If someone does this kind of thing, then they may be more likely to end up with a narrative along the lines of “when I procrastinate, it’s as if I want to do one thing but another part of me resists”. I think there are also neurological differences that may produce a less unitary-seeming story: alien hand syndrome would be an extreme case, but I suspect that even people who are mostly mentally healthy may have neurological properties that incline their narratives toward being more “part-like”.
In any case, if someone has a “part-like” narrative, where their narrative is in terms of different parts having different desires, then it may be hard for them to imagine a narrative where someone had conflicting desires that all emerged from a single agent—and vice versa. I guess that might be the source of the mutual incomprehension here?
On the other hand, when I say that “humans are not unitary selves”, I’m talking on a different level of description. (So if one holds that we’re unified selves in the sense that some of us have a narrative of being one, then I am not actually disagreeing when I say that we are not unified agents in my sense.) My own thinking goes roughly along the lines of that outlined in Subagents are Not a Metaphor:
I think that humans are not unitary selves, in that they are composed of subagents in this sense. More specifically, I would explain the procrastination thing as something like “different subsystems for evaluating the value of different actions are returning mutually inconsistent evaluations about which action is best, and this conflict is consciously available”.
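To make the “mutually inconsistent evaluations” picture concrete, here is a toy sketch (my own illustration, with made-up subsystem names, actions, and scores, not anything from the IFS literature): two valuation subsystems score the same candidate actions, disagree about which is best, and the “consciously available conflict” is simply the fact that both rankings surface at once.

```python
def short_term_comfort(action):
    # Hypothetical subsystem: values immediate pleasantness.
    return {"work": 1, "browse_web": 8, "nap": 6}[action]

def long_term_goals(action):
    # Hypothetical subsystem: values progress toward endorsed goals.
    return {"work": 9, "browse_web": 1, "nap": 3}[action]

def evaluate(actions):
    # Each subsystem independently picks its preferred action; a "conflict"
    # is just the two preferences disagreeing, with both visible at once.
    best_now = max(actions, key=short_term_comfort)
    best_later = max(actions, key=long_term_goals)
    return best_now, best_later, best_now != best_later

best_now, best_later, conflict = evaluate(["work", "browse_web", "nap"])
# One evaluator prefers "browse_web", the other "work": no homunculi needed,
# just inconsistent evaluations surfacing together.
```

On this picture, “I should be working but I’m browsing instead” is what it feels like when one evaluator’s winner drives behavior while the other evaluator’s objection remains in view.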
Something like IFS would be a tool for interfacing with these subsystems. Note that IFS does also make a much stronger claim, in that there are subsystems which are something like subpersonalities, with their own independent memories and opinions. Believing in that doesn’t seem to be necessary for making the IFS techniques work, though: I started out thinking “no, my mind totally doesn’t work like that, it describes nothing in my experience”. That’s why I stayed away from IFS for a long time, as its narrative didn’t fit mine and felt like nonsense. But then when I finally ended up trying it, the techniques worked despite me not believing in the underlying model. Now I’m less sure of whether it’s just a fake framework that happens to mesh well with our native narrative-making machinery and thus somehow makes the process work better, or whether it’s pointing to something real.
Ah, hmm. Is it more recognizable if you leave out the bit about “a part of your mind”? That is, do emotional states sometimes make you think things that feel objectively true at the moment, but which seem incorrect when not in that emotional state?
I can’t say I’ve had that experience, no. (However, given your example, this may be related to the fact that I generally don’t view evaluations / judgments of people’s character as “objectively true”—at least, not in the same sense that mundane facts about physical reality are “objectively true”.)
Edit: But the “a part of your mind” and “telling a story” bits make it weird regardless of context—those are definitely the aspects of it that made the description sound totally alien to me.
Well, the bit about parts and stories was meant somewhat metaphorically, though I’ve seen that metaphor used commonly enough that I forgot that it’s not universally known and should be flagged as a metaphor. “Story” here was meant to refer to something like “your current interpretation of what’s going on”. So the experience it was meant to refer to was, less metaphorically, just the thing in my previous comment: “at that moment, experiencing the other person as a terrible one with no redeeming qualities”.
Upon consideration, I think I wrote this thing too much in the specific context of 1) one specific SSC post 2) a particular subthread under that post, and would have needed to explain this whole thing about “parts” a lot more when it was stripped from that context. Might have been a mistake to post this in its current form; moved it from the front page to personal blog, might end up deleting it and later replacing it with a better writeup.
While you can treat the parts as stories or metaphors, in practice these ‘entities’ behave so similarly to actual sub-personalities of varying degrees of complexity that you may as well treat them as real.
I actually don’t really experience my parts as full-fledged subpersonalities either, though I know some people who do. But if I intentionally try to experience them as such, it seems to make IFS work worse, as I’m trying to fit things into its assumptions rather than experiencing things the way that my mind actually works. “Shards of belief and expectations” seems to be how they manifest in my mind.
That is an area in which it appears that experiences differ a great deal. I doubt that Said would recognise these “sub-personalities”, and for that matter, I do not recognise them either. I experience myself as a coherent person, made of parts that do not behave like persons.
Not sure if you’ve tried the IFS therapy technique or not.
One example of a part that many people may have experienced is where you behaved in a way that you later regretted—“what came over me?”. Perhaps sometimes you became surprisingly angry or upset about something.
From a rationalistic perspective you could think of a part as a configuration of yourself. For example you might feel anxious about something and be unable to sleep because you keep thinking about it. You could think of this as being taken over by a part, or by yourself in a configuration of anxiety and worry.
In my experience the parts vary wildly in completeness. Some are very simple and others more complex.
Or you could think of this as your emotions arising from internal processes which are not under your conscious control, nor under the conscious (or even “conscious”, in some metaphorical sense) control of any “part” or “configuration” of yourself. This view has the virtue of actually being true.
It seems to me that all this “parts”, “configurations”, “sub-personalities”, and similar stuff stems from either an inability to understand, or an unwillingness to accept, the fact that humans, fundamentally, are not agents (in the sense of all of our actions being caused by volitions in the service of goals). We often act like agents; we can usually be usefully thought of as agents; but if you start with the assumption that we actually are agents, you’ll run into trouble. And so (it seems to me) you end up thinking: “Well, if I were an agent, I would act in way X. But I find myself acting in way Y! How can this be? Ah…! Of course! There must be other agents, inside me!”
But no. You’re just not an agent. That is all.
Edit: Another way to describe this particular bias might be “insistence on applying the intentional stance to yourself, even when it’s not appropriate”.
Funnily enough, I both agree and disagree with you. I agree that we have way less conscious control of our emotions than we think, and that humans are fundamentally not agents, though they are perceived as agents by others, and usually by themselves: we automatically take the intentional stance toward anything whose mechanism of action we cannot readily discern, or cannot internally accept as arising from an algorithm.
That said, provided we accept the model of agency, which is a useful one in many cases (though not in the case of decision theories, as I have pointed out multiple times), the model of multiple agents with conflicting ideas, goals, perceptions and so on is actually a useful one. I have spent over two years doing emotional support for people who survived long-term childhood trauma, and in these cases spawning agents to deal with unbearable suffering, while having no escape from it, is basically a standard reaction of the brain/mind. The relevant psychiatric diagnosis is DID (formerly MPD, multiple personality disorder). In these cases the multiple agents often manifest very clearly and distinctly. It is tempting to write this off as a special case that does not apply in the mainstream, yet I have seen more than once the progression from someone suffering from CPTSD to full-blown DID. The last thing that happens is that the person recognizes that they “switch” between personalities, often much later than others notice it (if they know what to look for). After gaining some experience chatting with those who survived severe prolonged trauma, I started recognizing subtler signs of “switching” in myself and others. This switching between agents (I would not call them sub-agents, as they are not necessarily lesser than the “main”, and different “mains” often take over during different parts of the person’s life), while a normal way to operate as far as I can tell, almost never rises to the level of conscious awareness, as the brain carefully constructs the lie of a single identity for as long as it can.
So, as long as we are willing to model humans as agents for some purposes, it makes even more sense to model them as collections of agents. Whether to help them, or to NLP them, or to understand them. Or to play with their emotions, if you are so inclined. Persuasion is all about getting access to the right agent.
These are extremes that I have no experience with. I have had no childhood trauma. I have never had, sought, nor been suggested to have any form of psychological diagnosis or therapy. I have never had depression, mania, anxiety attacks, SAD, PTSD, hearing imaginary voices, hallucinations, or any of the rest of the things that psychiatrists see daily. I have had no drug trips. I laugh at basilisks.
It sometimes seems to me that this mental constitution, to me a very ordinary one, makes me an extreme outlier here.
I’m mostly the same (had some drug trips though). You’re probably not an outlier. It’s just that most discussion of psychological problems comes from people with psychological problems.
I’m not sure if this is so much disagreeing as expressing the same point in a different language. “Humans are not agents; rather, they are made up of different systems, only some of which are under conscious control” feels like it’s talking about exactly the same point that I’m trying to point at when I say things like “humans are not unified agents”. I just use terms like “parts” rather than “internal processes”, but I would have no objection to using “internal processes” instead.
That said, as shminux suggests, there does still seem to be a benefit in using intentional language in describing some of these processes—for the same reason why it might be useful to use intentional language for describing a chess robot, or a machine-learning algorithm.
E.g. this article describes a reinforcement learning setup, consisting of two “parts”—a standard reinforcement learner, and separately a “Blocker”, which is trained to recognize actions that a human overseer would disapprove of, and to block the RL component from taking actions which would be disapproved of. The authors use intentional language to describe the interaction of these two “subagents”:
This sounds like a reasonable way of describing the interaction of those two components in a very simple machine learning system. And it seems to me that the parts of the mind that IFS calls “Protectors” are something like the human version of what this paper calls “Blockers”—internal processes with the “goal” of recognizing and preventing behaviors that look similar to ones that had negative outcomes before. At the same time, there are other processes with a “goal” of doing something else (the way that the RL agent’s goal was just maximizing reward), which may have an “incentive” of getting around those Protectors/Blockers… and which could be described as running an adversarial search to get around the Protectors/Blockers. And this can be a useful way of modeling some of those interactions between processes in a person’s psyche, and sorting out personal problems.
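As a rough sketch of how such a two-component setup might look (the action names, values, and hard-coded veto set here are my own illustrative stand-ins, not the paper’s actual implementation): a value-maximizing chooser proposes actions in order of estimated value, and a separately specified “Blocker” vetoes any proposal it classifies as disapproved, so the chooser falls through to its next-best allowed action.

```python
# Stand-in for the Blocker's learned disapproval classifier; in the real
# setup this would be a model trained on human disapproval labels.
DISAPPROVED = {"walk_off_ledge"}

def blocker_allows(action):
    # The Blocker's "goal": recognize and prevent disapproved behaviors.
    return action not in DISAPPROVED

def choose_action(q_values):
    # Greedy chooser: try actions in descending order of estimated value,
    # taking the first one the Blocker does not veto. This fallback behavior
    # is the crude analogue of the RL agent "searching around" the Blocker.
    for action in sorted(q_values, key=q_values.get, reverse=True):
        if blocker_allows(action):
            return action
    return None  # every candidate action was vetoed

q = {"walk_off_ledge": 10.0, "step_back": 4.0, "wait": 1.0}
chosen = choose_action(q)
# The top-valued action is blocked, so the chooser settles for "step_back".
```

Describing this loop as “the Blocker stops the agent, and the agent looks for a way around it” is pure intentional language, yet it accurately summarizes what the code does: which is the point about such language being useful without implying genuine agency.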
All of this is using intentional language to describe the functioning of processes within our minds, but it’s also not in any way in conflict with the claim that we are not really agents. If anything, it seems to support it.
You have missed the point of the exercise of modelling the self as an entity made up of many agents.
Seconding what Richard Kennaway said in the sibling comment; that is my view, and my experience, also.