Good stuff!
I’m curious if you have any thoughts on the computational foundations one would need to measure and predict cognitive work properly?
In Agent Foundations, you’ve got this idea of boundaries, which can be seen as one way of describing a pattern that persists over time. One way this is formalised in Active Inference is through Markov blankets, and the idea that any self-persisting entity can be described as a Markov blanket minimising its variational free energy relative to its environment.
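For concreteness, here is the standard Markov blanket condition and the free energy claim in symbols (a minimal sketch of the usual Active Inference partition into internal, blanket, and external states, not anything specific to this thread):

```latex
% Partition the world's states into internal states \mu, blanket states b
% (sensory + active), and external states \eta. b is a Markov blanket for
% \mu iff internal and external states are independent given the blanket:
p(\mu, \eta \mid b) = p(\mu \mid b)\, p(\eta \mid b)
% The Active Inference claim is then (roughly) that a self-persisting
% entity's internal dynamics look like a gradient flow on a variational
% free energy F:
\dot{\mu} \propto -\nabla_{\mu} F(\mu, b)
```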
My thinking here is that if we apply this properly, it would let us generalise the notion of an agent beyond what we normally think of as one, and instead treat any system satisfying this definition as an agent.
For example, we could look at an institution or a collective of AIs as a self-consistent entity applying cognitive work to its environment in order to survive. The way to detect these collectives would be to look at which self-consistent entities are changing the “optimisation landscape” or “free energy landscape” around them the most. This would then pick out the most predictive agents in the local environment.
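As a toy illustration of that detection idea (everything here is made up for the sketch: the 3-state environment, the transition matrices, and the KL-based `influence_score` are assumptions, not an established measure), one could rank candidate entities by how far they push the environment’s state distribution away from its free-running evolution:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def influence_score(free_step, entity_step, env_dist, n_steps=10):
    """Toy proxy for 'cognitive work applied to the environment': how far an
    entity pushes the environment's state distribution away from its
    free-running evolution after n_steps. Both step functions map a
    distribution over environment states to the next-step distribution."""
    p_free = env_dist.copy()
    p_ent = env_dist.copy()
    for _ in range(n_steps):
        p_free = free_step(p_free)
        p_ent = entity_step(p_ent)
    return kl(p_ent, p_free)

# Toy environment: 3 states, free dynamics just diffuse (uniform is stationary).
T_free = np.array([[0.6, 0.2, 0.2],
                   [0.2, 0.6, 0.2],
                   [0.2, 0.2, 0.6]])
# An 'entity' that steers probability mass into state 0, its preferred niche.
T_entity = np.array([[0.9, 0.05, 0.05],
                     [0.7, 0.20, 0.10],
                     [0.7, 0.10, 0.20]])

env0 = np.full(3, 1 / 3)
score = influence_score(lambda p: p @ T_free, lambda p: p @ T_entity, env0)
print(f"influence score: {score:.3f}")  # higher = reshapes the landscape more
```

An entity that merely drifts with the environment scores near zero; one that actively reshapes the distribution, as here, scores high.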
A nice thing about this framing is that it centres the cognitive work/optimisation power applied in the analysis, so I’m thinking it might be more predictive of the future dynamics of cognitive systems as a consequence?
Another example: if we continue on the Critch train, some of his later work, such as TASRA (his taxonomy of societal-scale risks from AI), tells stories of human disempowerment, that is, of patterns that lose their relevance over time as they lose causal power over future states. In other words, entities that are not under the causal power of humans increasingly take over the cognitive work lightcone/the inputs to the free energy landscape.
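If one wanted to make “causal power over future states” measurable, a natural (entirely speculative) formalisation is the interventional influence an entity’s actions have on states k steps out:

```latex
% Hypothetical measure: the causal power of entity A at time t is how much
% intervening on its action a_t shifts the distribution over future states:
\mathrm{power}_t(A) = \mathbb{E}_{a_t}\!\left[ D_{\mathrm{KL}}\!\big( p(s_{t+k} \mid \mathrm{do}(a_t)) \,\big\|\, p(s_{t+k}) \big) \right]
% A disempowerment story is then one where \mathrm{power}_t(\text{humans})
% decays while \mathrm{power}_t(\text{other entities}) grows.
```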
As previously stated, I’m very interested to hear if you’ve got more thoughts on how to measure and model cognitive work.
Yes, this seems like an important question, but I admit I don’t have anything coherent to say yet. A basic intuition from thermodynamics is that if you can measure the change in internal energy between two states, and the heat transferred, you can infer how much work was done even if you’re not sure how it was done. So maybe the problem is better thought of as learning to measure enough other quantities that one can infer how much cognitive work is being done.
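In symbols, that’s just the first law (with W the work done on the system), plus the analogy spelled out as an assumption:

```latex
% First law: measure \Delta U and Q, infer W.
\Delta U = Q + W \qquad\Longrightarrow\qquad W = \Delta U - Q
% Speculative analogue: if \Delta U is the change in some potential over the
% environment (e.g. a free energy landscape) and Q is uncontrolled dissipation,
% the residual W would be the cognitive work the agent performed.
```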
For all I know, there is a developed thermodynamic theory of learning agents out there which already does this, but I haven’t found it yet...