My current favourite notion of agency, primarily based on Active Inference, which I refined upon reading “Discovering Agents”, is the following:
Agency is a property of a physical system from some observer’s subjective perspective. It stems from the observer’s generative model of the world (including the system in question): specifically, from whether the observer predicts the system’s future trajectory in state space by assuming that the system has its own generative model which it uses to act. The agent’s own generative model also depends on (adapts to, is learned from, etc.) the agent’s environment. This last bit comes from “Discovering Agents”.
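As a toy illustration of what “predicting the trajectory by assuming the agent has its own generative model” can look like, here is a made-up sketch (the world, the “food” preference, and the model names are all hypothetical, not taken from “Discovering Agents” or the Active Inference literature):

```python
# Toy world: a "system" sits on a 1-D line and the observer watches where it moves.
# The observer entertains two generative models of the system:
#   1. mechanistic: the system is a dumb particle that drifts to the right;
#   2. agentic: the system has its own model of the world (it "knows" where the
#      food is) and picks whichever action its model says gets it closer to food.
FOOD = 2  # hypothetical preferred state of the putative agent

def mechanistic_prediction(position: int) -> int:
    """Predict the next position assuming fixed, model-free dynamics."""
    return position + 1  # just drifts right

def agentic_prediction(position: int) -> int:
    """Predict the next position assuming the system uses its own generative
    model: it evaluates each action's predicted consequence and picks the one
    that brings it closest to its preferred state (FOOD)."""
    actions = (-1, 0, +1)
    best = min(actions, key=lambda a: abs((position + a) - FOOD))
    return position + best

def observed_behaviour(position: int) -> int:
    """What the real system actually does (opaque to the observer's models)."""
    if position < FOOD:
        return position + 1
    if position > FOOD:
        return position - 1
    return position

# The observer attributes agency if the agentic model predicts the trajectory better.
position = -3
errors = {"mechanistic": 0, "agentic": 0}
for _ in range(10):
    next_position = observed_behaviour(position)
    errors["mechanistic"] += abs(mechanistic_prediction(position) - next_position)
    errors["agentic"] += abs(agentic_prediction(position) - next_position)
    position = next_position

print(errors)  # the agentic model wins, so this observer treats the system as an agent
```

The point of the toy is only that “agent”, on this view, names the observer’s better-performing modelling stance, not an intrinsic property of the system.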
“Having its own generative model” is the shakiest part. It probably means that storage, computation, and maintenance (updates, learning) of the model all happen within the agent’s boundaries: if not, the agent’s boundaries should be widened, as in the example of the “thermostat with its creation process” from “Discovering Agents”. The substrate on which the agent’s generative model is stored and computed doesn’t matter: it could be neuronal, digital, chemical, etc.
Now, the observer models the generative model inside the agent. This is where the Vingean veil comes from: if the observer had perfect observability of the agent’s internals, it could believe that its model of the agent exactly matches the agent’s own generative model; usually, though, observability is limited and the match will be less than perfect.
However, even perfect observability doesn’t guarantee safety: the generative model might be large and effectively incompressible (cf. the halting problem), so the only way to see what it will do may be to execute it.
Theory of mind is a closely related idea to all of the above, too.
> The agent’s own generative model also depends on (adapts to, is learned from, etc.) the agent’s environment. This last bit comes from “Discovering Agents”.
>
> “Having its own generative model” is the shakiest part.
What it means for the agent to “have a generative model” is that the agent systematically corrects this model based on its experience (to within some tolerable competence!).
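(Schematically, and only as one way to cash this out: “systematically corrects” could be as simple as a Bayesian update of the agent’s belief $Q$ over hidden states $s$ after each observation $o_t$,

$$Q_{t+1}(s) \propto P(o_t \mid s)\, Q_t(s),$$

or any comparable error-driven update rule; the specific rule isn’t the point.)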
> It probably means that storage, computation, and maintenance (updates, learning) of the model all happen within the agent’s boundaries: if not, the agent’s boundaries should be widened,
A model/belief/representation depends on reference maintenance, but in general, the machinery of reference maintenance can and usually should extend far beyond the representation itself.
For example, an important book will tend to get edition updates, but the complex machinery which results in such an update extends far beyond the book’s author.
A telescope produces a representation of far-away space, but the empty space between the telescope and the stars is also instrumental in maintaining the reference (e.g., it must remain clear of obstacles).
A student does a lot of work “within their own boundaries” to maintain their knowledge, but they also use notebooks, computers, etc. The student’s teachers are also heavily involved in the reference maintenance.
> My current favourite notion of agency, primarily based on Active Inference,
I’m not a big fan of active inference. It strikes me as, basically, a not-particularly-great scheme for injecting randomness into actions to encourage exploration.
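For reference, the mechanism being referred to here, in one standard (and simplified) formulation of Active Inference, is the softmax policy prior over negative expected free energy:

$$P(\pi) = \sigma\big(-\gamma\, G(\pi)\big), \qquad G(\pi) \approx \sum_{\tau}\Big(\underbrace{-\,\mathbb{E}_{Q(o_\tau \mid \pi)}\big[\ln P(o_\tau)\big]}_{\text{pragmatic value}} \;-\; \underbrace{\mathbb{E}_{Q(o_\tau \mid \pi)}\big[D_{\mathrm{KL}}\big(Q(s_\tau \mid o_\tau, \pi)\,\|\,Q(s_\tau \mid \pi)\big)\big]}_{\text{epistemic value}}\Big)$$

Sampling a policy from this softmax is the “randomness”, and the epistemic (expected information gain) term is the exploration incentive.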