I said that a limit of agency has already been proposed from the physical perspective, namely the Free Energy Principle (FEP), and that this limit is not expected utility (EU) maximisation. So, methodologically, you should do one of three things: criticise this proposal, suggest an alternative theory that is better, or take the proposal seriously.
If you take the proposal seriously (I do), the limit appears to be “uninteresting”: a maximally entangled system is “nothing”. It is perceptually indistinguishable from its environment for a third-person observer (say, in Tegmark’s tripartite system–environment–observer partition). There is no other limit. Instrumental convergence is not the limit; a strongly instrumentally convergent system is still far from it.
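To make “perceptually indistinguishable from its environment” slightly more concrete, here is a minimal sketch in the standard FEP partition (my notation, not a quote from any particular paper): a system counts as a “thing” at all only insofar as a Markov blanket $b$ renders its internal states $\mu$ conditionally independent of the external states $\eta$,

$$p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b) \quad\Longleftrightarrow\quad I(\mu \,;\, \eta \mid b) = 0.$$

“Maximal entanglement” is the opposite regime: the conditional dependence between $\mu$ and $\eta$ grows until no partition satisfies this condition, at which point there is no system left to ascribe agency to.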
This suggests that unbounded analysis, “thinking to the limit”, is not useful in this particular situation.
Any physical theory of agency must ensure “reflective stability” by construction. I don’t sense anything “reflectively unstable” in Active Inference: it is essentially the theory of self-evidencing, and it wields instrumental convergence in service of that self-evidencing. Who wouldn’t “want” this, reflectively? Active Inference agents must, in some sense, want it by construction, because they want to be themselves for as long as possible. However they redefine themselves, at that very moment they also want to be themselves (as redefined). The only logical way out is to stop wanting to exist at all at some point, i.e., to commit suicide, which agents (e.g., humans) actually do sometimes. But conditional on wanting to continue to exist, they are definitely reflectively stable.
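To spell out how instrumental convergence is “in service of” self-evidencing: Active Inference agents score policies $\pi$ by expected free energy, which in one standard decomposition (sketched here in the usual discrete-state notation; $C$ encodes the agent’s prior preferences, i.e., the observations consistent with “being itself”) reads

$$G(\pi) \;=\; \underbrace{D_{\mathrm{KL}}\!\big[q(o \mid \pi)\,\big\|\,p(o \mid C)\big]}_{\text{risk (pragmatic value)}} \;+\; \underbrace{\mathbb{E}_{q(s \mid \pi)}\big[\mathcal{H}[p(o \mid s)]\big]}_{\text{ambiguity (epistemic value)}}.$$

Minimising risk yields the goal-directed, instrumentally convergent behaviour; minimising ambiguity yields information seeking. Both terms only ever serve the agent’s preference prior over its own continued existence, which is why it is hard to locate a reflectively unstable step in the construction.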
I’m talking about reflective stability. Are you saying that all agents will eventually self-modify into FEP, and that FEP is a rock?
Reward is not Necessary: How to Create a Compositional Self-Preserving Agent for Life-Long Learning