For me at least, the multiple agents framework isn’t the natural, obvious one, but rather a really useful theoretical frame that helps me solve problems that used to seem insoluble. Something like how it becomes much easier to precisely deal with change over time once you learn calculus. (As I use it more, it becomes more intuitive, again like calculus, but it’s still not my default frame.)
Before I did my first CFAR workshop, I had a lot of issues that felt like, “I’m really confused about this thing” or “I’m overwhelmed when I try to think about this thing” or “I know the right thing to do but I mysteriously don’t actually do it”. The CFAR IDC class recommended I model these situations as “I have precise and detailed beliefs and desires, I just happen to have many of them and they sometimes contradict each other.” When I tried out this framework, I found that a lot of previously unsolvable problems became surprisingly easy to solve. For example, “I’m really torn about my job” became, “I am really excited about precisely this aspect of my job, and really unhappy about precisely this aspect”. Then it’s possible to adjudicate between those two perspectives, find compromises or collaborations, etc.
It would be rude of me to assume that your mind works the same as mine, so take the following strictly as a hypothesis. But I would guess that what’s going on for you is that you identify really strongly with one set of preferences/desires/beliefs in your mind, and experience other preferences/desires/beliefs as “pain, pleasure, stupidity, and ignorance”. The experiment this suggests is to try spending a few minutes pretending those things are the “real you”, and the “agenty” part is the annoying external interloper caused by corrupted hardware. If I’m right, the sign would be that you find there is some detail and coherence to the “identity” of those things that feel like flaws, even if you’re not sure it’s an identity you approve of.
Note that I don’t think the multiple agents thing is the one true ontology. I find that as I learn to integrate the parts better, they start feeling more like a single working system. But it’s a really helpful theoretical tool for me.
I was going to respond with basically this as well. I too don’t intuitively experience myself as multiple agents; instead, I feel like a single agent beset by a whole bunch of internal conflicts that don’t resolve (at some point I found myself describing myself as “made of internal conflict”), and I’ve so far in my limited experience found IDC quite helpful at parsing out the internal conflict. I don’t experience IDC as uncovering separate agents that were there all along, but the personification is actually a pretty useful tool just because (a) it forces me to give sufficient airtime to each side, and (b) debate (when civil and thoughtful) is actually just a good format for clarifying any kind of disagreement.