Functional Agency
I think “agent” is probably analogous to a river: structurally and functionally real, but also ultimately an aggregate of smaller structures that are not themselves aligned with the agent. It’s convenient for us to be able to point at a flowing body of water much longer than it is wide and call it a river. Likewise, it is convenient for us to point to an entity that senses its environment and steers events adaptively toward outcomes for legible reasons and refer to it as exhibiting agency.
In that sense, AutoGPT is already an agent—it is just fairly incompetent at accomplishing the goals we can observe it pursuing, at least in the very short term.
Just as with a functional definition of intelligence, this functional definition of agency lets us sidestep the unproductive debate over whether AutoGPT is “really” agentic or whether LLMs are “really” intelligent, and focus on the real questions: what capabilities do these systems have, what capabilities will they have, and when, and what functional goals will they behave as if they are pursuing? What behaviors will they exhibit in pursuit of those goals?
Of course, there isn’t just one specific, definable goal that an agent has, one that we can be exactly right or exactly wrong in naming. Understanding an agent’s functional goals helps us predict its behavior. That is what makes something the agent’s goal: if we can predict the agent’s behavior based on our concept of its goal, then we can say we have accurately determined what its goal is. If our predictions turn out to be false, that could be because the agent failed or because we misunderstood its goal; the agent’s response to failure should help us figure out which, and our updated notion of its capabilities and goals should improve our ability to predict its further actions.
So talking about an agent’s goal is really an expression of a predictive model of a functionally agentic entity’s behavior. What will AutoGPT do? If you can answer that question, you are probably modeling both the goals and the capabilities of an AutoGPT instance, and in doing so you are treating AutoGPT as an agent.
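To make that predict-and-update framing concrete, here is a minimal sketch in Python. It treats each candidate goal as a hypothesis that predicts the agent’s next action, and shifts credence toward the hypotheses whose predictions bear out. Everything in it, the candidate goals, the states, and the 0.9/0.1 likelihoods, is a hypothetical illustration, not anything AutoGPT actually exposes.

```python
# A toy model of "goal as predictive model": each candidate goal is a
# hypothesis that predicts the agent's next action, and we update our
# credence in each hypothesis as we watch what the agent actually does.
# All goals, states, and likelihoods here are made up for illustration.

candidate_goals = {
    # Each hypothesized goal maps an observed state to a predicted action.
    "fetch_news": lambda state: "browse" if state == "idle" else "summarize",
    "write_code": lambda state: "open_editor" if state == "idle" else "run_tests",
}

# Start with equal credence in every hypothesized goal.
credence = {goal: 1.0 / len(candidate_goals) for goal in candidate_goals}

def update(state: str, observed_action: str) -> None:
    """Shift credence toward goals whose predictions matched what we saw."""
    for goal, predict in candidate_goals.items():
        # A matching prediction is strong but not conclusive evidence:
        # the agent may have failed, or we may have misread its goal.
        likelihood = 0.9 if predict(state) == observed_action else 0.1
        credence[goal] *= likelihood
    total = sum(credence.values())
    for goal in credence:
        credence[goal] /= total

# Watching the agent "browse" while idle supports the fetch_news hypothesis.
update("idle", "browse")
print(max(credence, key=credence.get))  # -> fetch_news
```

The point is not the arithmetic but the loop: whichever hypothesized goal keeps predicting the agent’s behavior is, functionally, its goal.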