I still don’t love the term “subagents”, despite everyone getting a lot out of it, and despite personally agreeing with the intentional stance and the “alliances” you mention. I think my crux-net is something like:
- agents are strategic
- fragments of our associative mental structures aren’t strategic, except insofar as their output calls other game-theoretic substructures or you are looking at something like the parliamentary moderator
- if you think of these as agents, you will attribute false strategy to them and feel stuck more often, when in fact they are easily worked with if you think of their apparent strategy as “using highly simplistic native associations and reinforcements, albeit sometimes by pinging other fragments to do things outside their own purview, to accomplish their goal”
However, it does seem possible to me that the “calling other fragments” step actually chains far enough to constitute real strategy and to offer a useful level of abstraction for viewing such webs as subagents. I haven’t seen much evidence for this. Does this framing make sense, and do you think it is clear there is something more like Turing-complete webs of strategy within subagents vs. merely pseudostrategy? I wish I had a replacement word I liked better than “subagent”.
> do you think it is clear there is something more like Turing-complete webs of strategy within subagents vs. merely pseudostrategy?
I don’t know. As suggested by this post, I move pretty freely between the subagent framing and the “associative belief structure” framing as seems appropriate to the situation. To me, agentness doesn’t necessarily require the agents to be particularly strategic. (A thermostat is technically an agent, but not a very strategic one.)
IFS just calls subagents “parts”, which I prefer in some contexts; the term has fewer connotations of being particularly strategic.