That being said, I’m not an expert on Embedded Agency, and that’s definitely not the point of this post, so just quoting what is explicitly said in the corresponding sequence is good enough for my purpose. Notably, the section on Embedded World Models from Embedded Agency begins with:
One difficulty is that, since the agent is part of the environment, modeling the environment in every detail would require the agent to model itself in every detail, which would require the agent’s self-model to be as “big” as the whole agent. An agent can’t fit inside its own head.
Maybe that’s not correct/exact/the right perspective on the question. But once again, I’m literally giving a two-sentence explanation of what the approach says, not the ground truth or a detailed investigation of the subject.
Yeah, that was sloppy of the article. In context the quote makes some sense, and the qualifier “in every detail” does useful work (though I don’t see how to make the argument clear just by defining what those words mean), but taken out of context it’s invalid.
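The closest I can get to making it precise is a rough counting sketch (my own framing, reading “in every detail” as “an uncompressed, bit-for-bit snapshot”; the sequence doesn’t formalize it this way). If the agent’s memory holds $n$ bits and the rest of the environment holds $m \geq 1$ bits, then a literal snapshot of the world has to record the agent’s own state plus everything else:

$$
\underbrace{n}_{\text{self-model}} + \underbrace{m}_{\text{rest of environment}} \;>\; n \;=\; \text{agent's memory},
$$

so the snapshot can’t fit inside the agent. Note this only rules out uncompressed self-models: quines show that compressed self-reference is possible, which is exactly the work the qualifier “in every detail” is doing in the quote.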
Sorry for my last comment; it was more a knee-jerk reaction than a rational conclusion.
My issue here is that I’m still not sure what would be a good replacement for the above quote that keeps intact the value of having compressed representations of systems following goals. Do you have an idea?
Thanks for the additional explanations.