What’s the issue?
One issue would be that it appears that the same argument can be used to argue for the troublesomeness of cyclic graphs.
Consider a graph that is mostly a tree, but one directed edge points to the root. What is the difference that makes your argument inapplicable to the graph, but applicable to a model of reality that contains a model of the model?
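As a concrete rendering of such a graph, here is a minimal Python sketch (the node names and the unroll helper are invented for illustration): a tree-shaped graph with one directed edge from a leaf back to the root, finitely representing its arbitrarily deep unrolling.

```python
# A graph that is mostly a tree: "d" has a single directed edge back to the root "a".
graph = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": [],
    "d": ["a"],  # the back-edge that makes the graph cyclic
}

def unroll(node, depth):
    """Unroll the graph below `node` into a tree, truncated after `depth` levels."""
    if depth == 0:
        return node
    return (node, [unroll(child, depth - 1) for child in graph[node]])

# The four-entry cyclic graph finitely represents an arbitrarily deep unrolled tree.
print(unroll("a", 3))
# ('a', [('b', [('d', ['a'])]), ('c', [])])
```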
Thanks for trying to find a concrete example! Yet to be honest, I don’t get yours. I don’t see either a model or a world here. It seems that you consider the cyclic graph as a model of the unrolled graph, but there is no agent embedded in a world here.
Either way, I provided an explanation of what I really meant in this comment, which might solve the issue you’re seeing.
The quote sounds like an argument for the non-existence of quines, or of the context in which things like the diagonalization lemma are formulated. I think it obviously sounds like this, so raising a nonspecific concern in my comment above should’ve been enough to draw attention to the issue. It’s also not a problem Agent Foundations explores, but it’s presented as such. Given your background and the effort put into the post, this interpretation of the quote seems unlikely (which is why I didn’t initially clarify, to give you the first move). So I’m confused. Everything is confusing here, including your comment above not taking the cue, the positive voting on it, and the negative voting on my comment. Maybe the intended meanings of “model”, “being exact”, and “representation” are such that the argument makes sense and becomes related to Agent Foundations?
I do appreciate you pointing out this issue, and giving me the benefit of the doubt. That being said, I prefer that comments clarify the issue raised, if only so that I’m more sure of my interpretation. The upvotes and downvotes in this thread are, I think, representative of this preference (not that I downvoted your comment; I was glad for the feedback).
About the quote itself, rereading it and rereading Embedded Agency, I think you’re right that what I wrote is not an Agent Foundations problem (at least not one I know of). What I had in mind was more about non-realizability and self-reference in the context of decision/game theory. I seem to have mixed the two with naive Gödelian self-reference in my head at the time of writing, which resulted in this quote.
Do you think that this proposed change solves your issues?
“This has many ramifications, including non-realizability (the impossibility of the agent to contain an exact model of the world, because it is inside the world and thus smaller), self-referential issues in the context of game theory (because the model is part of the agent which is part of the world, other agents can access it and exploit it), and the need to find an agent/world boundary (as it’s not given for free like in the dualistic perspective).”
Having an exact model of the world that contains the agent doesn’t require any explicit self-reference, or any reference to the agent at all. For example, if there are two programs whose behavior is equivalent, A and A’, and the agent correctly thinks of itself as A, then it can also know the world to be a program W(A’) with A’ as a subexpression (possibly several times), but without A as a subexpression. To see the consequences of its actions in this world, it would be useful for the agent to figure out that A is equivalent to A’, but it is not necessary that this is known to the agent from the start, so any self-reference in this setting is implicit. Also, A’ can’t have W(A’) as a subexpression, for reasons to which the explanation given in the quote that started this thread does apply, but at the same time A can have W(A’) as a subexpression. What is smaller here, the world or the agent?
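A minimal runnable sketch of this A / A’ / W(A’) setup, under the toy assumptions that programs are Python source strings and “behavior” is what a program prints (the names A_prime_src, W, and run are invented for illustration):

```python
import io
import contextlib

# A': a small agent program, given as source text.
A_prime_src = 'print("take action 1")'

# W(x): the world as a program template containing the agent source x as a subexpression.
def W(agent_src):
    return f'print("world starts"); {agent_src}; print("world ends")'

world_src = W(A_prime_src)  # the world containing A', i.e. W(A')

# A: behaviorally equivalent to A' (same output), but it also carries a verbatim
# copy of W(A') as a subexpression: an exact model of the world it lives in.
A_src = f"WORLD_MODEL = {world_src!r}\n" + A_prime_src

def run(src):
    """Crude notion of behavior: the text a program prints when executed."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(src, {})
    return buf.getvalue()

assert run(A_src) == run(A_prime_src)  # A and A' are behaviorally equivalent
assert world_src in A_src              # A contains W(A') as a subexpression
assert world_src not in A_prime_src    # A' does not (and cannot) contain W(A')
```

In this toy setting A is syntactically much larger than A’, yet the two are the same agent up to behavior, and the self-reference stays implicit: A never mentions itself, only the equivalent A’ inside its copy of the world.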
(What’s naive Gödelian self-reference? I don’t recall this term, and googling didn’t help.)
Dealing with self-reference in definitions of agents and worlds does not require (or even particularly recommend) non-realizability. I don’t think it’s an issue specific to embedded agents; probably all puzzles that fall within this scope can be studied while requiring the world to be a finite program. It might be a good idea to look for other settings, but it’s not forced by the problem statement.
“non-realizability (the impossibility of the agent to contain an exact model of the world, because it is inside the world and thus smaller)”
Being inside the world does not make it impossible for the agent to contain an exact model of the world, and does not require non-realizability in its reasoning about the world. This is the same error as in the original quote. In what way are quines not an intuitive counterexample to this reasoning? Specifically, the error is in saying “and thus smaller”. What does “smaller” mean, and how does being a part interact with it? Parts are not necessarily smaller than the whole; they can well be larger. Exact descriptions of worlds and agents are not just finite expressions; they are at least equivalence classes of expressions that behave in the same way, and elements of those equivalence classes can have vastly different syntactic size.
(Of course in some settings there are reasons for non-realizability to be necessary or to not be a problem.)
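A toy illustration of the equivalence-class point, under the same assumption that descriptions are Python expressions: two exact descriptions of the same value can differ enormously in syntactic size, so syntactic size alone does not settle what counts as “smaller”.

```python
# Two exact descriptions of the same value: one tiny, one enormous.
small = "sum(range(100000))"
big = "sum([" + ", ".join(str(i) for i in range(100000)) + "])"

assert eval(small) == eval(big)  # same behavior, hence the same equivalence class
print(len(small), len(big))      # 18 characters vs. several hundred thousand
```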
Thanks for the additional explanations. That being said, I’m not an expert on Embedded Agency, and that’s definitely not the point of this post, so just writing what is explicitly said in the corresponding sequence is good enough for my purpose. Notably, the section on Embedded World Models from Embedded Agency begins with:
“One difficulty is that, since the agent is part of the environment, modeling the environment in every detail would require the agent to model itself in every detail, which would require the agent’s self-model to be as “big” as the whole agent. An agent can’t fit inside its own head.”
Maybe that’s not correct/exact/the right perspective on the question. But once again, I’m literally giving a two-sentence explanation of what the approach says, not the ground truth or a detailed investigation of the subject.
Yeah, that was sloppy of the article. In context, the quote makes a bit of sense, and the qualifier “in every detail” does useful work (though I don’t see how to make the argument clear just by defining what these words mean), but without context it’s invalid.
Sorry for my last comment; it was more a knee-jerk reaction than a rational conclusion.
My issue here is that I’m still not sure what would be a good replacement for the above quote, one that still keeps intact the value of having compressed representations of systems following goals. Do you have an idea?