It still seems plausible to me that you might have a mind made of many different parts, but there is a clear “agent” bit that actually has goals and is controlling all the other parts.
What would that look like in practice?
I suppose I can imagine an architecture with something like a central planning agent that is capable of holding a goal, observing the state of the world to check whether the goal has been met, and coming up with high-level strategies to meet it. It would then delegate subtasks to a set of subordinate sub-agents, whilst making sure those tasks are broken down enough that the sub-agents themselves don't have to do much long-horizon planning or goal-directed behaviour.
With this architecture, it seems like all the agent-y, goal-directed work is done by the single central agent.
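To make that concrete, here is a minimal sketch in Python of the shape I have in mind. All the names, the toy world state, and the task decomposition are made up purely for illustration; the point is just where the agent-y parts live.

```python
from dataclasses import dataclass, field


@dataclass
class SubAgent:
    """Narrow executor: carries out one small task, no planning of its own."""
    name: str

    def execute(self, task: str, world: dict) -> dict:
        # Toy effect: mark the task as done in the shared world state.
        world[task] = "done"
        return world


@dataclass
class CentralPlanner:
    """The single agent-y part: holds the goal, observes, plans, delegates."""
    goal: str
    workers: list = field(default_factory=list)

    def goal_met(self, world: dict) -> bool:
        # Observe the state of the world to check whether the goal has been met.
        return world.get(self.goal) == "done"

    def plan(self) -> list:
        # Placeholder decomposition: break the goal into subtasks small enough
        # that no worker needs long-horizon planning, then the goal itself.
        return [f"{self.goal}/step-{i}" for i in range(3)] + [self.goal]

    def run(self, world: dict) -> dict:
        # All goal-directed control flow lives here, in one place.
        while not self.goal_met(world):
            for i, task in enumerate(self.plan()):
                worker = self.workers[i % len(self.workers)]  # round-robin delegation
                world = worker.execute(task, world)
        return world


planner = CentralPlanner(goal="tidy-house", workers=[SubAgent("a"), SubAgent("b")])
print(planner.run(world={}))
```

Note that everything goal-shaped (checking the world, planning, deciding when to stop) sits in CentralPlanner.run, while SubAgent.execute is deliberately myopic: it receives a single small task and has no view of the overall goal.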
However, I do agree that this may be less efficient or capable in practice than an architecture with more autonomous, decentralised sub-agents. On the other hand, it might be better at consistently pursuing a stable goal, which could compensate.