Ah. Now I understand why you’re going this direction.
I think a single human mind is modeled very poorly as a composite of multiple agents.
This notion is far more popular with computer scientists than with neuroscientists. We've known about the idea since Minsky, and we do think about it; it just mostly doesn't seem to be the case.
Sure, you can model it that way, but that model isn't doing much useful work.
I expect the same of our first AGIs as foundation model agents. They will have separate components, but those components will not be well modeled as agents. And they will have different capabilities and different tendencies, but neither of those is particularly agent-y either.
I guess the devil is in the details, and you might come up with a really useful analysis using the metaphor of subagents. But it seems like an inefficient direction.