While I was writing up this post, the Embedded Agents post went up. This work seems conceptually relevant to three of the four areas of interest identified in that post, with Embedded World-Models being the odd man out, because the authors of this paper explicitly skip the question of internal models.
Looking at this paper again in that vein, I am immediately curious whether we can apply this framework iteratively to sub-systems of the system of interest. It seems like the answer is almost certainly yes.
I also take it more or less for granted that we can use these same ideas to define semantic information relative to some arbitrary goal, or set of goals. Putting the framework in an information-theoretic context seems very helpful for this purpose. It feels like there should be some correspondence between the viability function and the partial achievement of a goal.
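To gesture at what that correspondence might look like (this is my extrapolation, not anything in the paper): the paper measures viability as the negative entropy of the system's state distribution at some horizon, $V(p_{X_\tau}) = -H(p_{X_\tau})$, and defines semantic information as the drop in viability when the system-environment correlations are scrambled. Swapping viability for the expectation of a goal-achievement function $g$ would give a goal-relative semantic information along the lines of

$$S_g = \mathbb{E}_{\text{actual}}[g(X_\tau)] - \mathbb{E}_{\text{scrambled}}[g(X_\tau)],$$

with partial achievement showing up naturally as intermediate values of $g$.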
Leaning on the information-theoretic interpretation again, I'm not even sure it would require any different treatment to allow for non-continuous continuation of the system (or of the goal). That would make things like the hibernation of a tardigrade, hiring a contractor at a future date, and an AI reloading itself from backup all approachable.
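To spell out why I suspect this (again my reading, not the paper's claim): the viability function is evaluated on the marginal distribution of system states at the chosen horizon, so $V(p_{X_\tau})$ depends only on where the probability mass sits at time $\tau$, not on whether the system was continuously "running" over $[0, \tau]$. A tardigrade that spends most of that interval in cryptobiosis, or an AI that is powered down and later restored from backup, contributes to the same marginal at $\tau$.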
But the devil is in the details, so I will table these speculations until after seeing whether the rest of the paper passes the sniff test.