which stems from the assumption that you are able to carve a world up into an agent and an environment and place the “same agent” in arbitrary environments. No such thing is possible in reality, as an agent cannot exist without its environment.
I might be misunderstanding what you mean here, but carving up a world into agent vs environment is absolutely possible in reality, as is placing that agent in arbitrary environments to see what it does. You can think of the traditional RL setting as a concrete example of this: on one side we have an agent executing some policy π(a|s); and on the other side we have an environment whose state transition dynamics are given by some distribution p(s′|s,a). One can in fact show (see the unidentifiability-in-IRL paper) that if an experimenter has the power to vary the environment p(s′|s,a) arbitrarily and look at the policies the agent pursues in each of those environments, then that experimenter can recover a reward function that is unique up to the usual affine transformations.
That recovered reward function is then a reliable invariant of the agent, since it is consistent with the agent’s actions under every possible environment the agent could be exposed to. (To be clear, this claim is also proved in the paper.) It also seems reasonable to identify that reward function with the mesa-objective of the agent, because any mesa-objective that isn’t equivalent to it (up to those same affine transformations) has to be inconsistent with the agent’s actions in at least one environment.
Admittedly there are some technical caveats to this particular result: off the top of my head, 1) the set of states & actions is fixed across environments; 2) the result was proved only for finite sets of states & actions; and 3) the agent’s policy is assumed to be optimal. I could definitely imagine taking issue with some of these caveats — is this the sort of thing you mean? Or perhaps you’re skeptical that a proof like this in the RL setting could generalize to the train/test framing we generally use for NNs?
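To make the setup concrete, here is a minimal toy sketch of the omnipotent-experimenter idea under exactly those caveats (finite, fixed state and action sets; optimal policies). It is my own illustration rather than the paper’s construction, and every name and size in it is arbitrary; the point is just that positive affine transforms of the true reward stay consistent with the observed policies, while unrelated candidate rewards generally don’t:

```python
# Toy sketch (not the paper's construction): an experimenter varies the transition
# dynamics of a small finite MDP, records the agent's optimal policy in each
# environment, and then checks which candidate reward functions reproduce every
# observed policy. Sizes, the discount factor, and the candidates are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
N_S, N_A, GAMMA = 4, 3, 0.9

def optimal_policy(P, r, n_iter=500):
    """Greedy policy from value iteration. P: (A, S, S) transitions, r: (S,) reward
    received on entering a state."""
    V = np.zeros(N_S)
    for _ in range(n_iter):
        Q = np.einsum('ast,t->sa', P, r + GAMMA * V)  # Q[s, a] = E[r(s') + gamma V(s')]
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

def random_env():
    P = rng.random((N_A, N_S, N_S))
    return P / P.sum(axis=2, keepdims=True)  # normalize rows into distributions

true_r = rng.normal(size=N_S)
envs = [random_env() for _ in range(30)]              # the experimenter's environments
observed = [optimal_policy(P, true_r) for P in envs]  # the agent's behavior in each

def consistent(candidate_r):
    """Does candidate_r reproduce the agent's observed policy in every environment?"""
    return all(np.array_equal(optimal_policy(P, candidate_r), pi)
               for P, pi in zip(envs, observed))

print(consistent(2.5 * true_r + 1.0))    # positive affine transform: expect True
print(consistent(rng.normal(size=N_S)))  # unrelated reward: almost always False
```

Consistency across thirty random environments is of course only suggestive; the uniqueness claim in the paper quantifies over all environments.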
in the OOD robustness literature you try to optimize worst-case performance over a perturbation set of possible environments.
Yeah that’s sensible because this is often all you can do in practice. Having an omnipotent experimenter is rarely realistic, but imo it’s still useful as a way to bootstrap a definition of the mesa-objective.
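For reference, the worst-case objective being pointed at here is usually written as something like the following (generic notation, not taken from any particular paper), with $\mathcal{E}$ the perturbation set of environments, $p_e$ the data distribution in environment $e$, and $\ell$ the loss:

$$\min_{\theta}\;\max_{e \in \mathcal{E}}\;\mathbb{E}_{(x,y)\sim p_e}\big[\ell\big(f_\theta(x),\,y\big)\big]$$

The omnipotent-experimenter setting is the limiting case where $\mathcal{E}$ ranges over all possible environments rather than a restricted perturbation set.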
Btw, if you’re aware of any counterpoints to this — in particular anything like a clearly worked-out counterexample showing that one can’t carve up a world, or recover a consistent utility function through this sort of process — please let me know. I’m directly working on a generalization of this problem at the moment, and anything like that could significantly accelerate my work.
Btw, if you’re aware of any counterpoints to this — in particular anything like a clearly worked-out counterexample showing that one can’t carve up a world, or recover a consistent utility function through this sort of process — please let me know. I’m directly working on a generalization of this problem at the moment, and anything like that could significantly accelerate my work.
I’m not sure what would constitute a clearly-worked counterexample. To me, a high reliance on an agent/world boundary constitutes a “non-naturalistic” assumption, which simply makes me think a framework is more artificial/fragile.
For example, AIXI assumes a hard boundary between agent and environment. One manifestation of this assumption is that AIXI doesn’t predict its own future actions the way it predicts everything else; instead, it must explicitly plan its own future actions. This is necessary because AIXI is not computable, so treating the future self as part of the environment (and predicting it with the same predictive capabilities as usual) would violate the assumption of a computable environment. But this is unfortunate for a few reasons. First, it forces AIXI to have an arbitrary finite planning horizon, which is weird for something that is supposed to represent unbounded intelligence. Second, there is no reason to carry this sort of thing over to finite, computable agents; so it weakens the generality of the model by introducing a design detail that’s very dependent on the specific infinite setting.
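For readers who haven’t seen it, both features are visible directly in the standard AIXI action-selection rule, written roughly (and glossing over the exact horizon convention) as

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}.$$

The nested max over the agent’s own future actions $a_{k+1}, \ldots, a_m$ is the explicit planning step (the future self is not handled by the program mixture the way observations are), and $m$ is the finite planning horizon mentioned above.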
Another example would be game-theoretic reasoning. Suppose I am concerned about cooperative behavior in deployed AI systems. I might work on something like the equilibrium selection problem in game theory, looking for rationality concepts which can select cooperative equilibria where they exist. However, this kind of work will typically treat a “game” as something which inherently comes with a pointer to the other agents. This limits the real-world applicability of such results, because to apply them to real AI systems, those systems would need “agent pointers” as well. This is a difficult engineering problem (creating an AI system which identifies “agents” in its environment); and even assuming away the engineering challenges, there are serious philosophical difficulties (what really counts as an “agent”?).
We could try to tackle those difficulties, but my default assumption is that it’ll result in fairly brittle abstractions with weird failure modes.
Instead, I would advocate for Pavlov-like strategies which do not depend on actually identifying “agents” in order to have cooperative properties. I expect these to be more robust and present fewer technical challenges.
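As a concrete illustration of what “not depending on identifying agents” can look like, here is a minimal sketch (my own toy example, with arbitrary payoff values) of the classic Pavlov / win-stay-lose-shift rule in an iterated prisoner’s dilemma. The strategy conditions only on its own last move and the payoff it just received; nothing in it models or points at an “opponent agent”:

```python
# Minimal win-stay / lose-shift ("Pavlov") sketch in an iterated prisoner's dilemma.
# The strategy sees only its own previous move and its own previous payoff; it never
# identifies, models, or points at an "agent" on the other side. Payoffs follow the
# usual T > R > P > S convention, but the exact numbers are arbitrary.
C, D = "C", "D"
PAYOFF = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}  # payoff to the first player

def pavlov(prev_move, prev_payoff, threshold=2):
    """Repeat the last move if the last payoff was at least `threshold`, else switch."""
    if prev_move is None:                 # first round: cooperate by default
        return C
    if prev_payoff >= threshold:
        return prev_move                  # "win" -> stay
    return C if prev_move == D else D     # "lose" -> shift

def play(strategy_a, strategy_b, rounds=10):
    move_a = move_b = pay_a = pay_b = None
    history = []
    for _ in range(rounds):
        next_a, next_b = strategy_a(move_a, pay_a), strategy_b(move_b, pay_b)
        pay_a, pay_b = PAYOFF[(next_a, next_b)], PAYOFF[(next_b, next_a)]
        move_a, move_b = next_a, next_b
        history.append((move_a, move_b))
    return history

print(play(pavlov, pavlov))  # settles into ('C', 'C') every round
```

With these payoffs, two Pavlov players lock into mutual cooperation and recover it within a couple of rounds after a one-off defection, without either side ever needing to carve its environment into agents.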
Of course, this general heuristic may not turn out to apply in the specific case we are discussing. If you control the training process, then, for the duration of training, you control the agent and the environment, and these concepts seem unproblematic. However, it does seem unrealistic to really check every environment; so it seems like, to establish strong guarantees, you’d need to do worst-case reasoning over arbitrary environments rather than checking environments in detail. This is mainly how I was interpreting jbkjr; perturbation sets could be a way to make things more feasible (at a cost).
I’m not sure what would constitute a clearly-worked counterexample. To me, a high reliance on an agent/world boundary constitutes a “non-naturalistic” assumption, which simply makes me think a framework is more artificial/fragile.
Oh for sure. I wouldn’t recommend having a Cartesian boundary assumption as the fulcrum of your alignment strategy, for example. But what could be interesting would be to look at an isolated dynamical system, draw one boundary, investigate possible objective functions in the context of that boundary; then erase that first boundary, draw a second boundary, investigate that; etc. And then see whether any patterns emerge that might fit an intuitive notion of agency. But the only fundamentally real object here is always going to be the whole system, absolutely.
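Purely as a toy illustration of that draw-a-boundary / investigate / redraw loop (entirely my own construction, with arbitrary choices throughout): take a small deterministic dynamical system, enumerate candidate agent/environment splits of its variables, and for each split apply some crude test for objective-like structure, e.g. “is there a simple scalar of the agent-side variables that is non-decreasing along trajectories?”:

```python
# Toy "boundary sweep": enumerate agent/environment splits of a tiny dynamical system
# and, for each split, apply a crude proxy test for "this piece looks like it's
# optimizing something". The system, the candidate objective (a plain sum), and the
# monotonicity test are all arbitrary stand-ins.
from itertools import combinations
import numpy as np

def step(x):
    """One update of a hand-made 4-variable system."""
    a, b, c, d = x
    return np.array([
        a + 0.1 * (1.0 - a),   # variable 0 relaxes toward a "target" of 1
        b + 0.1 * (a - b),     # variable 1 tracks variable 0
        c + 0.1 * np.sin(d),   # variables 2 and 3 just oscillate
        d - 0.1 * np.sin(c),
    ])

def trajectory(x0, n=200):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return np.array(xs)

def looks_goal_directed(traj, agent_vars):
    """Crude proxy: is the sum of the agent-side variables non-decreasing over time?"""
    values = traj[:, list(agent_vars)].sum(axis=1)
    return bool(np.all(np.diff(values) >= -1e-9))

traj = trajectory([0.0, 0.0, 0.3, 0.7])
for k in (1, 2, 3):
    for agent_vars in combinations(range(4), k):
        if looks_goal_directed(traj, agent_vars):
            print("candidate boundary:", agent_vars)  # with these dynamics, expect (0,), (1,), (0, 1)
```

Obviously “a monotone scalar exists” is far too weak a criterion to capture agency; the sketch is only meant to show the shape of the loop: fix a boundary, test for objective-like structure, move the boundary, repeat.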
As I understand, something like AIXI forces you to draw one particular boundary because of the way the setting is constructed (infinite on one side, finite on the other). So I’d agree that sort of thing is more fragile.
The multiagent setting is interesting though, because it gets you into the game of carving up your universe into more than 2 pieces. Again it would be neat to investigate a setting like this with different choices of boundaries and see if some choices have more interesting properties than others.
Btw, if you’re aware of any counterpoints to this — in particular anything like a clearly worked-out counterexample showing that one can’t carve up a world, or recover a consistent utility function through this sort of process — please let me know. I’m directly working on a generalization of this problem at the moment, and anything like that could significantly accelerate my work.
I’m not saying you can’t reason under the assumption of a Cartesian boundary; I’m saying the results you obtain when doing so are of questionable relevance to reality, because “agents” and “environments” can only exist in a map, not the territory. The idea of trying to e.g. separate “your atoms” or whatever from those of “your environment,” so that you can drop them into those of “another environment,” is only a useful fiction, as in reality they’re entangled with everything else. I’m not aware of a formal proof of the point I’m trying to make; it’s just a pretty strongly held intuition. Isn’t this also kind of one of the key motivations for thinking about embedded agency?
Yes, the point about the Cartesian boundary is important. And it’s completely true that any agent / environment boundary we draw will always be arbitrary. But that doesn’t mean one can’t usefully draw such a boundary in the real world — and unless one does, it’s hard to imagine how one could ever generate a working definition of something like a mesa-objective. (Because you’d always be unable to answer the legitimate question: “the mesa-objective of what?”)
Of course the right question will always be: “what is the whole universe optimizing for?” But it’s hard to answer that! So in practice, we look at bits of the whole universe that we pretend are isolated. All I’m saying is that, to the extent you can meaningfully ask the question, “what is this bit of the universe optimizing for?”, you should be able to clearly demarcate which bit you’re asking about.
(i.e. I agree with you that duality is a useful fiction, just saying that we can still use it to construct useful definitions.)
I would further add that looking for difficulties created by the simplification seems very intellectually productive. (Solving “embedded agency problems” seems to genuinely allow you to do new things, rather than just soothing philosophical worries.) But yeah, I would agree that if we’re defining mesa-objective anyway, we’re already in the business of assuming some agent/environment boundary.
I would further add that looking for difficulties created by the simplification seems very intellectually productive.
Yep, strongly agree. And a good first step to doing this is to actually build as robust a simplification as you can, and then see where it breaks. (Working on it.)
(Because you’d always be unable to answer the legitimate question: “the mesa-objective of what?”)
All I’m saying is that, to the extent you can meaningfully ask the question, “what is this bit of the universe optimizing for?”, you should be able to clearly demarcate which bit you’re asking about.
I totally agree with this; I guess I’m just (very) wary about being able to “clearly demarcate” whichever bit we’re asking about and therefore fairly pessimistic we can “meaningfully” ask the question to begin with? Like, if you start asking yourself questions like “what am ‘I’ optimizing for?,” and then try to figure out exactly what the demarcation is between “you” and “everything else” in order to answer that question, you’re gonna have a real tough time finding anything close to a satisfactory answer.
Yeah I agree this is a legitimate concern, though it seems like it is definitely possible to make such a demarcation in toy universes (like in the example I gave above). And therefore it ought to be possible in principle to do so in our universe.
To try to understand a bit better: does your pessimism about this come from the hardness of the technical challenge of querying a zillion-particle entity for its objective function? Or does it come from the hardness of the definitional challenge of exhaustively labeling every one of those zillion particles to make sure the demarcation is fully specified? Or is there a reason you think constructing any such demarcation is impossible even in principle? Or something else?
To try to understand a bit better: does your pessimism about this come from the hardness of the technical challenge of querying a zillion-particle entity for its objective function? Or does it come from the hardness of the definitional challenge of exhaustively labeling every one of those zillion particles to make sure the demarcation is fully specified? Or is there a reason you think constructing any such demarcation is impossible even in principle? Or something else?
Probably something like the last one, although I think “even in principle” is probably doing something suspicious in that statement. Like, sure, “in principle,” you can pretty much construct any demarcation you could possibly imagine, including the Cartesian one, but what I’m trying to say is something like, “all demarcations, by their very nature, exist only in the map, not the territory.” Carving reality is an operation that could only make sense within the context of a map, as reality simply is. Your concept of “agent” is defined in terms of other representations that similarly exist only within your world-model; other humans have a similar concept of “agent” because they have a similar representation built from correspondingly similar parts. If an AI is to understand the human notion of “agency,” it will need to also understand plenty of other “things” which are also only abstractions or latent variables within our world models, as well as what those variables “point to” (at least, what variables in the AI’s own world model they ‘point to,’ as by now I hope you’re seeing the problem with trying to talk about “things they point to” in external/‘objective’ reality!).
I’m with you on this, and I suspect we’d agree on most questions of fact around this topic. Of course demarcation is an operation on maps and not on territories.
But as a practical matter, the moment one starts talking about the definition of something such as a mesa-objective, one has already unfolded one’s map and started pointing to features on it. And frankly, that seems fine! Because historically, a great way to make forward progress on a conceptual question has been to work out a sequence of maps that give you successive degrees of approximation to the territory.
I’m not suggesting actually trying to imbue an AI with such concepts — that would be dangerous (for the reasons you alluded to) even if it weren’t pointless (because prosaic systems will just learn the representations they need anyway). All I’m saying is that the moment we started playing the game of definitions, we’d already started playing the game of maps. So using an arbitrary demarcation to construct our definitions might be bad for any number of legitimate reasons, but it can’t be bad just because it caused us to start using maps: our earlier decision to talk about definitions already did that.
(I’m not 100% sure if I’ve interpreted your objection correctly, so please let me know if I haven’t.)