The set of optimizing systems is smaller than the set of all AI services, but larger than the set of goal-directed agentic systems.
...
A tree is an optimizing system but not a goal-directed agent system.
I’m not sure this is true, at least not in the sense that we usually think about “goal-directed agent systems”.
You make a case that there’s no distinct subsystem of the tree which is “doing the optimizing”, but this isn’t obviously relevant to whether the tree is agenty. For instance, the tree presumably still needs to model its environment to some extent, and “make decisions” to optimize its growth within the environment—e.g. new branches/leaves growing toward sunlight and roots growing toward water, or the tree “predicting” when the seasons are turning and growing/dropping leaves accordingly.
One way to think about whether “the set of optimizing systems is smaller than the set of all AI services, but larger than the set of goal-directed agentic systems” is that it’s equivalent to Scott’s (open) question: does agent-like behavior imply agent-like architecture?
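To make that question concrete, here is a minimal sketch (purely illustrative; the functions light_at and grow are hypothetical, not from any source) of a system whose behavior looks goal-directed, shoots reliably bending toward the brightest direction, while the “architecture” is nothing but a local update rule, with no explicit world-model or planner. Whether one counts this as the system “modeling” its environment or “making decisions” is exactly the question at issue:

```python
import random

def light_at(position):
    # Toy environment (assumed): light intensity simply increases to the "east".
    x, y = position
    return x

def grow(shoot, steps=100):
    """Each step, the shoot samples a few random directions and extends
    toward whichever candidate point is locally brightest. There is no
    distinct 'optimizer' subsystem, no model, no plan -- just a local rule."""
    for _ in range(steps):
        x, y = shoot[-1]
        candidates = [(x + random.uniform(-1, 1), y + random.uniform(-1, 1))
                      for _ in range(5)]
        shoot.append(max(candidates, key=light_at))
    return shoot

shoot = grow([(0.0, 0.0)])
print(shoot[-1])  # drifts steadily toward higher light: agent-like behavior
```

Behaviorally this looks like optimization toward a target; architecturally it is a biased random walk. The tree case may or may not be analogous, which is why the question stays open.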