The idea that “Agents are systems that would adapt their policy if their actions influenced the world in a different way” works well on mechanised CIDs whose variables are neatly divided into object-level and mechanism nodes: we simply check for a path from a utility function F_U to a policy Pi_D. But to apply this to a physical system, we would need a way to obtain such a partition of those variables. Specifically, we need to know (1) what counts as a policy, and (2) whether any of its antecedents count as representations of “influence” on the world (and the antecedents A of the policy can only ever be ‘representations’ of that influence, because in the real world the agent’s actions cannot influence themselves via some D->A->Pi->D loop). Does a spinal reflex count as a policy? Does an ant’s decision to fight come from a representation of a desire to save its queen? How accurate does its belief about the forthcoming battle have to be before this representation counts? I’m not sure the paper answers these questions formally, nor am I sure that it’s even possible to do so. These questions don’t seem to have objectively right or wrong answers.
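To make that check concrete, here is a minimal sketch on a toy mechanised CID (the node names and the networkx encoding are mine, and the paper’s actual algorithm is more involved than a bare path check):

```python
# Toy mechanised CID for a one-decision game: decision D, outcome X, utility U,
# plus one mechanism node per object-level node (Pi_D is the policy, i.e. the
# mechanism of D). Names and structure are illustrative only.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("D", "X"), ("X", "U"),                      # object-level edges
    ("Pi_D", "D"), ("F_X", "X"), ("F_U", "U"),   # each mechanism governs its variable
    ("F_X", "Pi_D"), ("F_U", "Pi_D"),            # the policy adapts to how the world works
])

# The check described above: would the policy change if the utility function
# changed, i.e. is there a directed path from F_U to Pi_D among mechanism nodes?
mechanism_nodes = {"Pi_D", "F_X", "F_U"}
mech_subgraph = G.subgraph(mechanism_nodes)
print(nx.has_path(mech_subgraph, "F_U", "Pi_D"))  # True -> D looks like an agent's decision
```

On a physical system, the hard part is exactly the step this sketch takes for granted: deciding which variables belong in mechanism_nodes in the first place.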
So we don’t really have a full procedure for “identifying agents”. I do think we gain some conceptual clarity. But on my reading, this clear definition serves to crystallise how hard it is to identify agents, more than it shows how it can practically be done.
(NB. I read this paper months ago, so apologies if I’ve got any of the details wrong.)
The idea … works well on mechanised CIDs whose variables are neatly divided into object-level and mechanism nodes. … But to apply this to a physical system, we would need a way to obtain such a partition of those variables
Agreed, the formalism relies on a division of the variables. One thing I think we should perhaps have highlighted much more is Appendix B of the paper, which shows how you get a natural partition of the variables just from knowing the object-level variables of a repeated game.
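As a rough illustration of that partition (my own toy encoding, not the Appendix B construction itself): start from only the object-level variables and mechanically attach one mechanism node per object-level node; the object-level vs mechanism split then falls out of the construction rather than being hand-picked.

```python
# Rough illustration (not the Appendix B construction): given only the
# object-level variables and edges of a game, mechanically add one mechanism
# node per object-level node. The partition into object-level vs mechanism
# nodes then comes for free.
import networkx as nx

def mechanise(object_level_edges):
    """Return a graph with a mechanism node 'F_<V>' added for every object-level node V."""
    G = nx.DiGraph(object_level_edges)
    object_nodes = list(G.nodes)
    for v in object_nodes:
        G.add_edge(f"F_{v}", v)  # each mechanism governs its object-level variable
    mechanism_nodes = {f"F_{v}" for v in object_nodes}
    # Edges *between* mechanism nodes are not given by this step: they have to be
    # discovered, e.g. from how the mechanisms vary across repetitions of the game.
    return G, set(object_nodes), mechanism_nodes

G, obj, mech = mechanise([("D", "X"), ("X", "U")])
print(sorted(obj))   # ['D', 'U', 'X']
print(sorted(mech))  # ['F_D', 'F_U', 'F_X']
```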
Does a spinal reflex count as a policy?
A spinal reflex would be different if humans had evolved in a different world. So it reflects an agentic decision by evolution. In this sense, it is similar to the thermostat, which inherits its agency from the humans that designed it.
Does an ant’s decision to fight come from a representation of a desire to save its queen?
Same as above.
How accurate does its belief about the forthcoming battle have to be before this representation counts?
One thing I’m excited to think about further is what we might call “proper agents”: systems that are agentic in themselves, rather than just inheriting their agency from the evolution / design / training process that made them. I think this is what you’re pointing at with the ant’s knowledge. Likely the ant wouldn’t quite be a proper agent (but a human would, as we are able to adapt without re-evolving in a new environment). I have some half-developed thoughts on this.