Any policy can be modelled as a consequentialist agent, if you assume a contrived enough utility function. This statement is true, but not helpful.
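A minimal sketch of why this is true (all names here are hypothetical, chosen for illustration): take any policy, however arbitrary, and define a utility function that assigns reward 1 exactly when the agent does what that policy would have done. A utility-maximiser over that contrived function then reproduces the policy perfectly, which is why a purely behavioural definition of agency has no predictive content on its own.

```python
def arbitrary_policy(observation: str) -> int:
    # An intentionally unprincipled policy: no coherent goal in sight.
    return sum(map(ord, observation)) % 3

def contrived_utility(observation: str, action: int) -> float:
    # Reward 1 iff the action is whatever the policy would have done.
    return 1.0 if action == arbitrary_policy(observation) else 0.0

def best_response(observation: str, actions=(0, 1, 2)) -> int:
    # A "consequentialist agent" maximising the contrived utility...
    return max(actions, key=lambda a: contrived_utility(observation, a))

# ...is behaviourally indistinguishable from the arbitrary policy.
for obs in ["look", "jump", "wait"]:
    assert best_response(obs) == arbitrary_policy(obs)
```

The construction works for any policy whatsoever, which is exactly the sense in which "this policy maximises some utility function" tells us nothing by itself.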
The reason we care about the concept of agency is that there are certain things we expect from consequentialist agents, e.g. instrumentally convergent goals, or just optimisation pressure in some consistent direction. We care about the concept of agency because it holds some predictive power.
[… some steps of reasoning I don’t know yet how to explain …]
Therefore, it’s better to use a concept of agency that depends on the internal properties of an algorithm/mind/policy-generator.
I don’t think agency can be made into a crisp concept. It’s either a fuzzy category or a leaky abstraction, depending on how you apply the concept. But it does point to something important. I think it is worth tracking how agentic different systems are, because doing so has predictive power.