The idea … works well on mechanised CIDs whose variables are neatly divided into object-level and mechanism nodes. … But to apply this to a physical system, we would need a way to obtain such a partition of those variables.
Agree, the formalism relies on a division of variables. One thing I think we should perhaps have highlighted more is Appendix B of the paper, which shows how you get a natural partition of the variables from just knowing the object-level variables of a repeated game.
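As a rough illustration of that construction (this is my own sketch, not the paper's code, and it omits the edges between mechanism nodes that encode how policies adapt to one another): once the object-level variables are fixed, each one simply gets a mechanism parent added to it.

```python
# Minimal sketch: build a mechanised graph from object-level variables by
# giving every object-level node V a mechanism parent "~V" that determines
# how V responds to its object-level parents. (Function name and use of
# networkx are my own choices for illustration.)
import networkx as nx

def mechanise(object_level_edges):
    """Return a mechanised graph with one mechanism node per object-level variable."""
    g = nx.DiGraph(object_level_edges)
    mech = nx.DiGraph(g)  # copy the object-level structure
    for v in g.nodes:
        mech.add_edge(f"~{v}", v)  # mechanism node as parent of v
    return mech

# Example: a one-shot decision problem, D -> U (decision influences utility)
g = mechanise([("D", "U")])
print(sorted(g.edges))  # [('D', 'U'), ('~D', 'D'), ('~U', 'U')]
```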
Does a spinal reflex count as a policy?
A spinal reflex would be different if humans had evolved in a different world. So it reflects an agentic decision by evolution. In this sense, it is similar to the thermostat, which inherits its agency from the humans that designed it.
Does an ant’s decision to fight come from a representation of a desire to save its queen?
Same as above.
How accurate does its belief about the forthcoming battle have to be before this representation counts?
One thing I’m excited to think about further is what we might call “proper agents”, which are agentic in themselves, rather than just inheriting their agency from the evolution / design / training process that made them. I think this is what you’re pointing at with the ant’s knowledge. The ant likely wouldn’t quite be a proper agent (but a human would, as we are able to adapt without re-evolving in a new environment). I have some half-developed thoughts on this.