Well, when I talk with people at CFAR workshops, fairly often someone will have the problem of “akrasia” and they’ll conceptualize it, more or less, as “my system 1 is stupid and doesn’t understand that working harder at my job is the only thing that matters, and I need tools to force my S1 to do the right thing.”
So my response to that is to say, “ok, let’s get empirical about that. When does this happen, exactly? If you think about working harder right now, what happens?” Or, “What happens if you don’t work harder at your job?”
In other words, I immediately try to drop to a stimulus-response level, and reject all higher-level interpretive frameworks, except insofar as they give me ideas of where to drop my depth charges, so to speak. :)
And then I might suggest that they try on the frame where “the akrasia part” is actually an intelligent “agent” trying to optimize for their own goals (instead of a foreign, stupid entity that they have to subdue). If the akrasia was actually right, why would that be?
I usually don’t bring that kind of thing up until we’ve reached a point where the client can see it empirically. For example, if I’ve asked them to imagine what happens if they get their wish and are now working harder at their job… and they notice that they feel awful or whatever. And then I don’t need to address the intentionality at all.
And they realize that they hate their job, and obviously their life would be terrible if they spent more of their time working at their terrible job.
And sometimes, the real problem has nothing to do with the work and everything to do with a belief that they aren’t a good person unless they work more, so it doesn’t matter how terrible it is… but also, the very fact that they feel guilty about not working more may be precisely the thing they’re avoiding by not working!
In other words, sometimes an intentional model fails because brains are actually pretty stupid, and have design flaws such that trying to view them as having sensible or coherent goals simply doesn’t work.
For example, our action planning subsystem is really bad at prioritizing between things we feel good about doing vs. things we feel bad about not doing. It wants to avoid the things we feel bad about not doing, because when we think about them, we feel bad. That part of our brains doesn’t understand things like “logical negation” or “implicative reasoning”, it just processes things based on their emotional tags. (i.e., “bad = run away”)
[I’m obviously simplifying somewhat, but this exact pattern does come up over and over again at CFAR workshops.]
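To make that concrete, here’s a toy sketch of the tagging behavior I’m describing (my own illustration, nothing anyone actually uses at CFAR): a selector that only sees emotional tags, so “I’d feel bad if I *didn’t* do X” reaches it simply as “thinking about X feels bad,” and X gets avoided.

```python
# Toy illustration only: an action selector that sees nothing but emotional
# tags. There's no negation or implication in the loop -- "bad because I'm
# not doing it" and "bad to do" look identical to it, so both get avoided.

from typing import Dict

def choose_action(felt_tags: Dict[str, str]) -> str:
    """Approach whatever carries a 'good' tag; run away from anything tagged 'bad'."""
    approach = [a for a, tag in felt_tags.items() if tag == "good"]
    avoid = [a for a, tag in felt_tags.items() if tag == "bad"]
    if approach:
        return approach[0]
    if avoid:
        # It never asks *why* the thing feels bad; it just avoids the tag.
        return f"avoid thinking about {avoid[0]}"
    return "no strong pull either way"

print(choose_action({"work on report": "bad", "browse the web": "good"}))
# -> "browse the web", even though the "bad" tag came from *not* doing the report
```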
And I’m also not saying I never model intention at all. But I get there bottom-up, not top-down, and it only comes up in a few places.
Also, most of the intentional models I use are for things that pass through the brain’s intention-modeling system: i.e., our mental models of what other people think/thought about us!
For example, the SAMMSA pattern is all about pulling that stuff out, as is the MTF pattern (“meant to feel/made to feel”—a subset of SAMMSA dealing with learnings of how others intend for us to feel in certain circumstances).
The only other place I use quasi-intentional frames is in describing the evolutionary function or “intent” of our brain modules. For example, distress behavior is “intended” to generate caring responses from parents. But this isn’t about what the person intends, it’s about what their brain is built to do. When you were a crying baby, “you” didn’t even have anything that qualifies as intention yet, so how could we say you had a part with that intention?
And even then, I’m treating it as, “in this context, this behavior pattern would produce this result” (producing reinforcement or gene propagation), not “this thing is trying to produce this result, so it has this behavior pattern in this context.” Given that my intention is always to reduce to the actual “wires” or “lines of code” producing a problem, intention modeling is going in the wrong direction most of the time.
My analogy about confusing a thermostat with something hot or cold underneath speaks to why: unlike IFS, I don’t assume that parts have positive, functional intentions, even if they arose out of the positive “design intentions” of the system as a whole. After all, the plan for achieving that original “intention” may no longer be valid! (insofar as there even was one to begin with).
That’s why I don’t think of the thermostat as being something that “wants” temperature, because it would distract me from actually looking at the wall and the wiring and the sensors, which is the only way I can be certain that I’m always getting closer to a solution rather than guessing or going in circles. (That is, by always working with things I can test, like a programmer debugging a program. Rerunning it and inspecting, putting in different data values and seeing how the behavior changes, and so on.)
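If it helps, here’s a minimal sketch of what I mean by looking at the wiring rather than the “wanting” (again, just my own toy illustration): the whole thermostat is a sensor reading, a setpoint, and a rule, and “debugging” it means feeding in readings and watching what it does, not asking what it wants.

```python
# Toy sketch: a thermostat is just a comparison and a switch. There is no
# "want" anywhere in the mechanism -- only a reading, a setpoint, and a rule.

def thermostat_step(sensed_temp: float, setpoint: float, heater_on: bool) -> bool:
    """Return the heater's next state given the current temperature reading."""
    if sensed_temp < setpoint - 0.5:   # too cold -> switch heater on
        return True
    if sensed_temp > setpoint + 0.5:   # too warm -> switch heater off
        return False
    return heater_on                   # inside the deadband -> leave it alone

# "Debugging" it: rerun with different readings and inspect the behavior.
for reading in (17.0, 19.4, 20.6, 22.0):
    print(reading, thermostat_step(reading, setpoint=20.0, heater_on=False))
```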