Thanks for the reply. This comment is mostly me disagreeing with you.[1] But I really wish someone had said the following things to me before I spent thousands of hours thinking about optimal policies.
I agree that learning a goal from the training-compatible set is a strong assumption that might not hold.
My point is not just that this post has made a strong assumption which may not hold. My point is rather that these results are not probable because the assumption won’t hold. The assumptions are already known not to be good approximations of trained policies, in at least some prototypical RL situations. I also think there is no good a priori reason to have expected “training-compatible” “goals” to be learned. According to me, “learning and optimizing a reward function” is both unclear communication and doesn’t actually seem to happen in practice.
This post assumes a standard RL setup and is not intended to apply to LLMs
I don’t see any formal assumption which excludes LLM finetuning. Which assumption do you think should exclude them? EDIT: Someone privately pointed out that LLM finetuning uses KL, which isn’t present for your results. In that case I would agree your results don’t apply to LLMs for that reason.
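For reference, a minimal sketch of why the KL term matters (this is the standard RLHF-style objective, not something taken from the post): LLM finetuning typically maximizes

$$J(\pi) \;=\; \mathbb{E}_{x \sim D,\; y \sim \pi(\cdot\mid x)}\big[r(x,y)\big] \;-\; \beta\, D_{\mathrm{KL}}\big(\pi(\cdot\mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot\mid x)\big),$$

and it’s the β-weighted penalty toward the reference policy that has no counterpart in the power-seeking results.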
I agree that reward functions are not the best way to refer to possible goals. This post builds on the formalism in the power-seeking paper which is based on reward functions, so it was easiest to stick with this terminology.
This point is, in large part, my fault. As I argued in my original comment, this terminology makes readers actively worse at reasoning about realistic trained systems. I regret each of the thousands of hours I spent on the power-seeking work, and sometimes fantasize about retracting one or both papers.
I can talk about utility functions instead (which would be equivalent to value functions in this case)
I disagree that these are equivalent, and expect the policy and value function to come apart in practice. Indeed, that was observed in the original goal misgeneralization paper (3.3, actor-critic inconsistency).
Anyways, we can talk about utility functions, but then we’re going to lose claim to probable-ness, no? Why should we assume that the network will internally represent a scalar function over observations, consistent with a historical training signal’s scalar values (and let’s not get into nonstationary reward), such that the network will maximize discounted sum return of this internally represented function? That seems highly improbable to me, and I don’t think reality will be “basically that” either.
I think it is pretty clear in the post that I’m not talking about reinforcement functions and the training reward is not the optimization target, but I could clarify this further if needed.
I agree that you don’t assume the network will optimize the training reward. But that’s not the critique I intended to communicate. The post wrote (emphasis added):
Suppose an agent is trained using reinforcement learning with reward function θ∗. We assume that the agent learns a goal during the training process: a set of internal representations of favored and disfavored outcomes. For simplicity, we assume this is equivalent to learning a reward function, which is not necessarily the same as the training reward function θ∗. We consider the set of reward functions that are consistent with the training rewards received by the agent, in the sense that the agent’s behavior on the training data is optimal for these reward functions. We call this the training-compatible goal set, and we expect that the agent is most likely to learn a reward function from this set.
This is talking about the reward/reinforcement function θ∗, no? And assuming that the policy will be optimal on training? As I currently understand it, this post makes unsupported and probably-wrong claims/assumptions about the role and effect of the reinforcement function. (E.g., assuming that using a reinforcement function to update the network means that the network learns an internally represented reinforcement function which it maximizes, and whose optimization is behaviorally consistent with optimizing historically observed reinforcements.)
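To spell out what I take that assumption to mean, here’s a minimal sketch in a tabular setting (my own reconstruction and variable names, assuming a small finite MDP; nothing here is notation from the post):

```python
import numpy as np

def optimal_q(R, P, gamma=0.99, iters=2000):
    """Tabular value iteration. R: (S, A) reward matrix; P: (S, A, S) transition probabilities."""
    Q = np.zeros(R.shape)
    for _ in range(iters):
        V = Q.max(axis=1)         # state values under the current Q
        Q = R + gamma * (P @ V)   # expected discounted next-state value for each (state, action)
    return Q

def in_training_compatible_set(R, P, policy, train_states, gamma=0.99, tol=1e-6):
    """True iff the trained (deterministic) policy's actions are optimal for reward R
    on every training state. States outside train_states are left completely unconstrained."""
    Q = optimal_q(R, P, gamma)
    return all(Q[s, policy[s]] >= Q[s].max() - tol for s in train_states)
```

The set of all R passing this check is enormous precisely because the check never looks at states outside train_states, which is what the rest of my comment leans on.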
I think utility functions are still the best formalism we have to represent goals, and I don’t have a clear sense of the alternative you are proposing.
To be clear, I’m not proposing an alternative formalism. None of my comment intended to make positive shard theory claims. Whether or not we know of an alternative formalism, I currently feel confident that your results are not probable and furthermore cast RL in an unrealistic light. This is inconvenient since I don’t have a better formalism to suggest, but I think it’s still true.
ETA: For the record, I upvoted both of your replies to me in this thread, and appreciate your engagement and effort.
I think you’re making a mistake: policies can be reward-optimal even if there’s not an obvious box labelled “reward” that they’re optimal with respect to the outputs of. Similarly, the formalism of “reward” can be useful even if this box doesn’t exist, or even if the policy isn’t behaving the way you would expect if you identified that box with the reward function. To be fair, the post sort of makes this mistake by talking about “internal representations”, but I think everything goes thru if you strike out that talk.
The main thing I want to talk about
I can talk about utility functions instead (which would be equivalent to value functions in this case)
I disagree that these are equivalent, and expect the policy and value function to come apart in practice. Indeed, that was observed in the original goal misgeneralization paper (3.3, actor-critic inconsistency).
I think you’re the one who’s imposing a type error here. For “value functions” to be useful in modelling a policy, it doesn’t have to be the case that the policy is acting optimally with respect to a suggestively-labeled critic—it just has to be the case that the agent is acting consistently with some value function. Analogously, momentum is conserved in classical mechanics, even if objects have labels on them that inaccurately say “my momentum is 23 kg m/s”.
Anyways, we can talk about utility functions, but then we’re going to lose claim to predictiveness, no? Why should we assume that the network will internally represent a scalar function over observations, consistent with a historical training signal’s scalar values (and let’s not get into nonstationary reward), such that the network will maximize discounted sum return of this internally represented function? That seems highly improbable to me, and I don’t think reality will be “basically that” either.
The utility function formalism doesn’t require agents to “internally represent a scalar function over observations”. You’ll notice that this isn’t one of the conclusions of the VNM theorem.
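To spell that out (the standard statement, paraphrased): VNM says that if an agent’s preferences over lotteries satisfy completeness, transitivity, continuity, and independence, then there exists some utility function u over outcomes, unique up to positive affine transformation, such that

$$L_1 \succeq L_2 \;\iff\; \mathbb{E}_{o \sim L_1}[u(o)] \;\ge\; \mathbb{E}_{o \sim L_2}[u(o)].$$

Nothing in that conclusion requires u to be computed or stored anywhere inside the agent.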
Another thing I don’t get
My point is rather that these results are not predictive because the assumption won’t hold. The assumptions are already known to not be good approximations of trained policies, in at least some prototypical RL situations.
What part of the post you link rules this out? As far as I can tell, the thing you’re saying is that a few factors influence the decisions of the maze-solving agent, which isn’t incompatible with the agent acting optimally with respect to some reward function such that it produces training-reward-optimal behaviour on the training set.
I think you’re the one who’s imposing a type error here. For “value functions” to be useful in modelling a policy, it doesn’t have to be the case that the policy is acting optimally with respect to a suggestively-labeled critic—it just has to be the case that the agent is acting consistently with some value function.
Can you say more? Maybe give an example of what this looks like in the maze-solving regime?
What part of the post you link rules this out? As far as I can tell, the thing you’re saying is that a few factors influence the decisions of the maze-solving agent, which isn’t incompatible with the agent acting optimally with respect to some reward function such that it produces training-reward-optimal behaviour on the training set.
This is a fair question, because I left a lot to the reader. I’ll clarify now.
I was not claiming that you can’t, after the fact, rationalize observed behavior using the extremely flexible reward-maximization framework.
I was responding to the specific claim of assuming internal representation of a ‘training-compatible’ reward function. In evaluating this claim, we shouldn’t just see whether this claim is technically compatible with empirical results, but we should instead reason probabilistically. How strongly does this claim predict observed data, relative to other models of policy formation?
In the maze setting, the cheese was always in the top-right 5x5 corner. The reward was sparse and only used to update the network when the mouse hit the cheese. The “training compatible goal set” is unconstrained on the test set. An example element might agree with the training reward on the training distribution, and then outside of the training distribution, assign 1 reward iff the mouse is on the bottom-left square.
The vast majority of such unconstrained functions will not involve pursuing cheese reliably across levels, and most of these reward functions will not be optimized by going to the top-right part of the maze. So this “training-compatible” hypothesis barely assigns any probability to the observed generalization of the network.
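To make that concrete, here is one such element written out as code (the state encoding and grid size are placeholders of mine, not the actual maze representation; only the shape of the construction matters):

```python
from dataclasses import dataclass

@dataclass
class MazeState:
    mouse_pos: tuple   # (row, col); larger row/col = closer to the top-right
    cheese_pos: tuple
    grid_size: int = 13

def cheese_in_top_right_5x5(s: MazeState) -> bool:
    r, c = s.cheese_pos
    return r >= s.grid_size - 5 and c >= s.grid_size - 5

def pathological_reward(s: MazeState) -> float:
    """Matches the sparse training reward whenever the cheese sits in its
    training-time region (the top-right 5x5), but on novel cheese placements
    it instead rewards sitting on the bottom-left square."""
    if cheese_in_top_right_5x5(s):                     # training distribution
        return 1.0 if s.mouse_pos == s.cheese_pos else 0.0
    return 1.0 if s.mouse_pos == (0, 0) else 0.0       # off-distribution
```

Every reward function of this form is “training-compatible” in the post’s sense, and its optimal test-time behavior is to head for the bottom-left corner rather than the cheese.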
However, other hypotheses—like “the policy develops motivations related to obvious correlates of its historical reinforcement signals”[1] -- predict things like “the policy tends to go to the top-right 5x5, and searches for cheese more strongly once there.” I registered such a prediction before seeing any of the generalization behavior. This hypothesis assigns high probability to the observed results.
So this paper’s assumption is simply losing out in a predictive sense, and that’s what I was critiquing. One can nearly always rationalize behavior as optimizing some reward function which you come up with after the fact. But if you want to predict generalization ahead of time, you shouldn’t use this assumption in your reasoning.
Second, I think the network does not internally represent and optimize a reward function. I think that this representation claim is in some (but not total and undeniable) tension with our interpretability results. I am willing to take bets against you on the internal structure of the maze-solving nets.
You might respond “but this is informal.” Yes. My answer is that it’s better to be informal and right than to be formal and wrong.
Thanks Daniel for the detailed response (which I agree with), and thanks Alex for the helpful clarification.
I agree that the training-compatible set is not predictive for how the neural network generalizes (at least under the “strong distributional shift” assumption in this post where the test set is disjoint from the training set, which I think could be weakened in future work). The point of this post is that even though you can’t generally predict behavior in new situations based on the training-compatible set alone, you can still predict power-seeking tendencies. That’s why the title says “power-seeking can be predictive” not “training-compatible goals can be predictive”.
The hypothesis you mentioned seems compatible with the assumptions of this post. When you say “the policy develops motivations related to obvious correlates of its historical reinforcement signals”, these “motivations” seem like a kind of training-compatible goal (if defined more broadly than in this post). I would expect that a system that pursues these motivations in new situations would exhibit some power-seeking tendencies, because those motivations correlate with a lot of reinforcement signals.
I suspect a lot of the disagreement here comes from different interpretations of the “internal representations of goals” assumption; I will try to rephrase that part better.
That’s why the title says “power-seeking can be predictive” not “training-compatible goals can be predictive”.
You’re right. I was critiquing “power-seeking due to your assumptions isn’t probable, because I think your assumptions won’t hold” and not “power-seeking isn’t predictive.” I had misremembered the predictive/probable split, as introduced in Definitions of “objective” should be Probable and Predictive:
I don’t see a notion of “objective” that can confidently be claimed to be:
Probable: there is a good argument that the systems we build will have an “objective”, and
Predictive: If I know that a system has an “objective”, and I know its behavior on a limited set of training data, I can predict significant aspects of the system’s behavior in novel situations (e.g. whether it will execute a treacherous turn once it has the ability to do so successfully).
Sorry for the confusion. I agree that power-seeking is predictive given your assumptions. I disagree that power-seeking is probable due to your assumptions being probable. The argument I gave above was actually:
The assumptions used in the post (“learns a randomly-selected training-compatible goal”) assign low probability to experimental results, relative to other predictions which I generated (and thus relative to other ways of reasoning about generalization),
Therefore the assumptions become less probable
Therefore power-seeking becomes less probable (at least, due to these specific assumptions becoming less probable; I still think P(power-seeking) is reasonably large)
I suspect that you agree that “learns a training-compatible goal” isn’t very probable/realistic. My point is then that the conclusions of the current work are weakened; maybe now more work has to go into the “can” in “Power-seeking can be probable and predictive.”
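As a toy version of that update (numbers entirely made up): let H1 be “the policy learned a randomly selected training-compatible goal” and H2 be “the policy’s motivations track obvious correlates of its reinforcement signals”, with equal priors. If the observed generalization behavior D gets P(D∣H1)=0.01 and P(D∣H2)=0.5, then

$$P(H_1 \mid D) \;=\; \frac{0.5 \cdot 0.01}{0.5 \cdot 0.01 + 0.5 \cdot 0.5} \;\approx\; 0.02,$$

so H1, and any power-seeking conclusion that routes specifically through H1, takes a correspondingly large hit.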
The issue with being informal is that it’s hard to tell whether you are right. You use words like “motivations” without defining what you mean, and this makes your statements vague enough that it’s not clear whether or how they are in tension with other claims. (E.g. what I have read so far doesn’t seem to rule out that shards can be modeled as contextually activated subagents with utility functions.)
An upside of formalism is that you can tell when it’s wrong, and thus it can help make our thinking more precise even if it makes assumptions that may not apply. I think defining your terms and making your arguments more formal should be a high priority. I’m not advocating spending hundreds of hours proving theorems, but moving in the direction of formalizing definitions and claims would be quite valuable.
It seems like a bad sign that the most clear and precise summary of shard theory claims was written by someone outside your team. I highly agree with this takeaway from that post: “Making a formalism for shard theory (even one that’s relatively toy) would probably help substantially with both communicating key ideas and also making research progress.” This work has a lot of research debt, and paying it off would really help clarify the disagreements around these topics.
The issue with being informal is that it’s hard to tell whether you are right. You use words like “motivations” without defining what you mean, and this makes your statements vague enough that it’s not clear whether or how they are in tension with other claims.
It seems worth pointing out: the informality is in the hypothesis, which comprises a set of somewhat illegible intuitions and theories I use to reason about generalization. However, the prediction itself is what needs to be graded in order to see whether I was right. I made predictions fairly like “the policy tends to go to the top-right 5x5, and searches for cheese once there, because that’s where the cheese-seeking computations were more strongly historically reinforced” and “the policy sometimes pursues cheese and sometimes navigates to the top-right 5x5 corner.” These predictions are (informally) gradable, even if the underlying intuitions are informal.
As it pertains to shard theory more broadly, though, I agree that more precision is needed. Increasing precision and formalism is the reason I proposed and executed the project underpinning Understanding and controlling a maze-solving policy network. I wanted to understand more about realistic motivational circuitry and model internals in the real world. I think the last few months have given me headway on a more mechanistic definition of a “shard-based agent.”
What part of the post you link rules this out? As far as I can tell, the thing you’re saying is that a few factors influence the decisions of the maze-solving agent, which isn’t incompatible with the agent acting optimally with respect to some reward function such that it produces training-reward-optimal behaviour on the training set.
In addition to my other comment, I’ll further quote Behavioural statistics for a maze-solving agent:
We think the complex influence of spatial distances on the network’s decision-making might favor a ‘shard-like’ description: a description of the network’s decisions as coalitions between heuristic submodules whose voting-power varies based on context. While this is still an underdeveloped hypothesis, it’s motivated by two lines of thinking.
First, we weakly suspect that the agent may be systematically dynamically inconsistent from a utility-theoretic perspective. That is, the effects of d_step(mouse, cheese) and (potentially) d_Euclidean(cheese, top-right) might turn out to call for a behavior model where the agent’s priorities in a given maze change based on the agent’s current location.
Second, we suspect that if the agent is dynamically consistent, a shard-like description may allow for a more compact and natural statement of an otherwise very gerrymandered-sounding utility function that fixes the value of cheese and top-right in a maze based on a “strange” mixture of maze properties. It may be helpful to look at these properties in terms of similarities to the historical activation conditions of different submodules that favor different plans.
While we consider our evidence suggestive in these directions, it’s possible that some simple but clever utility function will turn out to be predictively successful. For example, consider our two strongly observed effects: d_Euclidean(cheese, top-right) and d_step(decision-square, cheese). We might explain these effects by stipulating that:
On each turn, the agent receives value inverse to the agent’s distance from the top-right,
Sharing a square with the cheese adds constant value,
The agent doesn’t know that getting to the cheese ends the game early, and
The agent time-discounts.
We’re somewhat skeptical that models of this kind will hold up once you crunch the numbers and look at scenario-predictions, but they deserve a fair shot.
We hope to revisit these questions rigorously when our mechanistic understanding of the network has matured.
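For a sense of what “crunch the numbers” could look like, here is a rough sketch of scoring candidate trajectories under the stipulations quoted above (the inverse-distance form, the bonus size, and the discount rate are placeholder assumptions of mine):

```python
def path_value(path, cheese_pos, top_right, gamma=0.95, cheese_bonus=2.0):
    """Discounted value of a path (a list of (row, col) squares) under the model:
    per-step value inverse to distance from the top-right square, plus a constant
    bonus for any step spent on the cheese square; the episode never ends early."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])   # step-distance stand-in
    return sum(
        gamma ** t * (1.0 / (1.0 + dist(pos, top_right))
                      + (cheese_bonus if pos == cheese_pos else 0.0))
        for t, pos in enumerate(path)
    )
```

Comparing path_value for a cheese-seeking route against a go-straight-to-top-right route across many mazes is the kind of scenario-prediction check the quoted passage has in mind.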
To be fair, the post sort of makes this mistake by talking about “internal representations”, but I think everything goes thru if you strike out that talk.
I’m responding to this post, so why should I strike that out?
The utility function formalism doesn’t require agents to “internally represent a scalar function over observations”. You’ll notice that this isn’t one of the conclusions of the VNM theorem.
The post is talking about internal representations.
The core claim of this post is that if you train a network in some environment, the agent will not generalize optimally with respect to the reward function you trained it on, but will instead be optimal with respect to some other reward function in a way that is compatible with training-reward-optimality, and that this means that it is likely to avoid shutdown in new environments. The idea that this happens because reward functions are “internally represented” isn’t necessary for those results. You’re right that the post uses the phrase “internal representation” once at the start, and some very weak form of “representation” is presumably necessary for the policy to be optimal for a reward function (at least in the sense that you can derive a bunch of facts about a reward function from the optimal policy for that reward function), but that doesn’t mean that they’re central to the post.
Thanks Daniel, this is a great summary. I agree that internal representation of the reward function is not load-bearing for the claim. The weak form of representation that you mentioned is what I was trying to point at. I will rephrase the sentence to clarify this, e.g. something like “We assume that the agent learns a goal during the training process: some form of implicit internal representation of desired state features or concepts”.
Great, this sounds much better!
The internal representations assumption was meant to be pretty broad, I didn’t mean that the network is explicitly representing a scalar reward function over observations or anything like that—e.g. these can be implicit representations of state features. I think this would also include the kind of representations you are assuming in the maze-solving post, e.g. cheese shards / circuits.