You keep using the adjective “motor” here; what do you mean by it?
Can I refer to “conflicting motor programs” as “conflicting subagents” instead?
No. ;-)
More precisely, I would say that agency is an unnecessary hypothesis, and postulating agency seems to lead people to certain predictable failure patterns (like treating parts of the self as an enemy, or one’s self as the victim of these agents, trying to negotiate with them, and other anthropomorphic overkill).
I only restricted the present discussion to “motor” programs to limit distracting digressions on the topic of higher-level cognitive architecture. For modeling akrasia, it’s sufficient simply to assume that various programs can be activated in parallel, and that one of consciousness’s functions is to manage conflict between activated programs (a toy sketch of this model appears below).
For a specific example of motor programs, see this other comment.
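To make the parallel-activation picture concrete, here is a deliberately crude sketch, assuming nothing about real neural machinery. Every name and number in it (`Program`, `manage_conflict`, the activation values) is hypothetical, invented purely for illustration:

```python
# Toy sketch only: a few "programs" activated in parallel, with a single
# arbiter standing in for consciousness's conflict-management role.
# All names and numbers here are hypothetical illustration.
from dataclasses import dataclass


@dataclass
class Program:
    name: str
    activation: float  # how strongly this program is currently active
    action: str        # the behavior it would drive if unopposed


def manage_conflict(active: list[Program]) -> Program:
    """Crude arbiter: when parallel programs pull in different directions,
    let the most strongly activated one win."""
    return max(active, key=lambda p: p.activation)


# Akrasia, in this picture: two programs active at once, in conflict.
programs = [
    Program("finish-report", activation=0.6, action="keep typing"),
    Program("surf-the-net", activation=0.7, action="open browser"),
]
print(manage_conflict(programs).action)  # -> open browser
```

The point is only the shape of the model (parallel activation plus a conflict manager); the winner-take-all rule is an arbitrary stand-in, not a claim about how the conflict actually gets resolved.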
I like this way of putting it.
It may be more useful in practice too; as Rodolfo Llinás hypothesizes, all we can do, as humans, is activate motor neurons, so thinking is fundamentally just internalized movement.
On the other hand, it forms the basis of entire forms of therapy (e.g., Voice Dialogue) that seem to work by reducing conflict through raising awareness and acceptance of both sides of the conflict. Some people just find it a useful way to approach doing what you would call RMI.
At this point, we’re veering into stuff that gets terribly technical. You can get the brain to act as if it contains more than one “agent”… but if you allow this to confuse you into thinking there really are multiple agents, you are headed for trouble. For example, Esther Hicks thinks she’s channeling a being from another dimension, but that doesn’t mean it’s actually there. Think “agency simulation”, if you must, but it’s really more like we all have an ultra-sophisticated chatbot that can parrot the speech and thought patterns of real or imagined characters.
All this has very little to do with actual agency or the workings of akrasia, though, and it tends to interfere with the process of a person owning up to the goals they want to dissociate from. By pretending it’s another agency that wants to surf the net, you get to maintain moral superiority… and still hang onto your problem. The goal of virtually any therapy that involves multiple agencies is to integrate them, but the typical person, on getting hold of the metaphor, uses it to maintain the separation.
That’s why I say thinking that way leads to predictable failure patterns. (You’ll notice I never said it was untrue, just unnecessary.)
The situations in which the ‘selves’ metaphor seems most useful are those where people have already (without being aware of it) dissociated from the goals they don’t want to acknowledge and, as you say, are trying to integrate them. Described more technically, they would be going through a process of rewiring what you might refer to as the gauges in a PCT network (see the control-loop sketch below). Some people find it easier to do ‘RMI’ by using the ‘selves’ metaphor to harness their preexisting skills of compassion, empathy, and acknowledgement of shared purpose. This is not dissimilar to the way you talk about ‘monkeys and horses’ and ‘giants and tricksters’ as a teaching mechanism.
I can see why using an ‘agent’ metaphor would be a recipe for disaster, given the connotations that term has in the minds of many people. As for myself, I just imagine myself to be a complex multipart brain that lives in a body with further complex glands.
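For readers who haven’t met PCT, here is a minimal sketch of the kind of control loop the ‘gauges’ metaphor points at. It illustrates a generic negative-feedback loop, not anything specific from the PCT literature, and all function names and numbers are hypothetical:

```python
# Toy sketch of a single PCT-style control loop: a perceptual "gauge" is
# compared against a reference value, and behavior acts to shrink the error.
# "Rewiring a gauge" would correspond to changing the reference or the
# perceptual function. All names and numbers are hypothetical illustration.

def perceive(world_state: float) -> float:
    # The gauge: what the system reads off the world.
    return world_state


def control_step(world_state: float, reference: float, gain: float = 0.5) -> float:
    """One pass through the loop: act on the world in proportion to the
    error between the reference and the current perception."""
    error = reference - perceive(world_state)
    return world_state + gain * error  # behavior nudges perception toward the reference


state = 0.0
for _ in range(10):
    state = control_step(state, reference=1.0)
print(round(state, 3))  # ~0.999: the loop has brought its perception to the reference
```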
Of course. And in that context, I also teach self-empathy (à la Vladimir’s example of dialogue in other comments on this post). But I frame that in terms of behavior and metaphor, not stating that there “really are” such entities as giants and tricksters and monkeys and horses. The stories of the Giant and the Trickster, and the Planet of the Horse-Monkeys, were couched in fairy-tale language precisely because they are metaphor, not fact.
But I’ve mostly found that the flip side to the benefits of these metaphors is that the people who have the most problems are also the ones most likely to abuse these metaphors in a way that keeps them stuck. So, I am really conservative in what I want to say in a context where somebody is asking me (implicitly so here on LessWrong) about what is “true”.
Because what is true is that I don’t know what goes on in brains. I only know how to describe experience and behavior metaphorically, “as if” the brain had these parts.
I also know from experience that, whatever model you imagine the brain behaving “as if” it followed, you can get people to make it come true by thinking and acting “as if” it were true. This means you want to be exceptionally careful about the models you propose to people you are trying to help, and make sure that you define models that will help them, rather than ones that will keep them stuck.
Agree, and find it mildly distracting. I’m sure there is a better phrase we could use.