So the reward function can’t be the policy’s objective – one cannot be pursuing something one has no direct access to.
One question I’ve been wondering about recently is what happens if you actually do give an agent access to its reward during training. (Analogy for humans: a little indicator in the corner of our visual field that lights up whenever we do something that increases the number or fitness of our descendants.) Unless the reward is dense and highly shaped, the agent still has to come up with plans to do well on difficult tasks; it can’t just delegate those decisions to the reward information. Yet its judgement about which things are promising will presumably be better-tuned because of this extra information (although eventually you’ll need to remove it in order for the agent to do well unsupervised).
On the other hand, adding reward to the agent’s observations also probably makes the agent more likely to tamper with the physical implementation of its reward, since it will be more likely to develop goals aimed at the reward itself, rather than just the things the reward is indicating. (Analogy for humans: because we didn’t have a concept of genetic fitness while evolving, it was hard for evolution to make us care about that directly. But if we’d had the indicator light, we might have developed motivations specifically directed towards it, and then later found out that the light was “actually” the output of some physical reward calculation.)
I’ve actually been thinking about the exact same thing recently! I have a post coming up soon about some of the concrete experiments I’d be excited about regarding inner alignment, which includes an entry on what happens when you give an RL agent access to its reward as part of its observation.
(Edit: I figured I would just publish the post now so you can take a look at it. You can find it here.)
How do you envision the reward indicator being computed? Is it some kind of approximate proxy (if so, what kind?) or magically accurate? Also, how do you deal with the problem that whether an action is a good idea depends on the agent’s policy, i.e. what it plans to do later? For example, whether joining a start-up increases my fitness depends a lot on what I do after I join it.
In the context of reinforcement learning, it’s literally just the reward provided by the environment, which is currently fed only to the optimiser, not to the agent. How to make those rewards good ones is a separate question being answered by research directions like reward modelling and IDA.
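For concreteness, here’s a minimal sketch of one way to expose that reward to the agent, written against a Gymnasium-style API. The wrapper name and the choice to append the previous step’s scalar reward to a flat observation vector are illustrative assumptions on my part, not part of any existing setup being described here:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class RewardInObsWrapper(gym.Wrapper):
    """Append the previous timestep's reward to the agent's observation.

    Illustrative sketch: assumes a flat Box observation space.
    """

    def __init__(self, env):
        super().__init__(env)
        assert isinstance(env.observation_space, spaces.Box)
        # Extend the observation space by one extra dimension for the reward.
        low = np.append(env.observation_space.low, -np.inf)
        high = np.append(env.observation_space.high, np.inf)
        self.observation_space = spaces.Box(low=low, high=high, dtype=np.float64)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        # No reward has been received yet at the start of an episode.
        return np.append(obs, 0.0), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # The same reward that goes to the optimiser is made visible to the agent.
        return np.append(obs, reward), reward, terminated, truncated, info
```

The training signal the optimiser receives is unchanged; the only difference is that the same number also shows up in what the policy gets to condition on.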
But the reward fed to the optimizer may only be known/computed at the end of each training episode, and it would be too late to show it to the agent at that point. Are you assuming that the reward is computed in a cumulative way, like a video game score, so it can be shown to the agent during the episode?
Yes, I’m assuming cumulatively-calculated reward. In general this is a fairly standard assumption: rewards being defined at every timestep is part of the definition of MDPs and POMDPs, and I don’t see much advantage in delaying computing them until the end of the episode. For agents like AlphaGo, observing these rewards obviously won’t be very helpful, since those rewards are all 0 until the last timestep. But in general I expect rewards to occur multiple times per episode when training advanced agents, especially as episodes get longer.
Hmm, I was surprised when it turned out that AlphaStar did not use any reward shaping and just got a reward at the end of each game/episode, and may have over-updated on that.
If you “expect rewards to occur multiple times per episode when training advanced agents” then sure, I understand your suggestion in light of that.
ETA: It occurs to me that your idea can be applied even if rewards are only available at the end of each episode. Just concatenate several episodes together into larger episodes during training, then within a single concatenated episode, the rewards from the earlier episodes can be shown to the agent during the later episodes.
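To sketch what that concatenation might look like, again against a Gymnasium-style API: the fixed number of inner episodes and the choice to expose only the most recently completed inner episode’s return are illustrative assumptions, not anything specified above.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class ConcatEpisodesWrapper(gym.Wrapper):
    """Roll several inner episodes into one longer training episode.

    The return of the most recently completed inner episode is appended to
    every observation, so a reward that only arrives at the end of an episode
    still becomes visible to the agent during later inner episodes.
    Illustrative sketch: assumes a flat Box observation space.
    """

    def __init__(self, env, num_inner_episodes=4):
        super().__init__(env)
        assert isinstance(env.observation_space, spaces.Box)
        self.num_inner_episodes = num_inner_episodes
        # Extend the observation space by one dimension for the last return.
        low = np.append(env.observation_space.low, -np.inf)
        high = np.append(env.observation_space.high, np.inf)
        self.observation_space = spaces.Box(low=low, high=high, dtype=np.float64)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self.episodes_done = 0
        self.current_return = 0.0
        self.last_return = 0.0  # return of the most recent completed inner episode
        return np.append(obs, self.last_return), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        self.current_return += reward
        if terminated or truncated:
            self.episodes_done += 1
            self.last_return = self.current_return
            self.current_return = 0.0
            if self.episodes_done < self.num_inner_episodes:
                # Start the next inner episode without ending the outer one.
                obs, info = self.env.reset()
                terminated, truncated = False, False
        return np.append(obs, self.last_return), reward, terminated, truncated, info
```

From the optimiser’s point of view each outer episode is just a longer episode, but by its second inner episode the agent has already seen a number summarising how its previous attempt was scored.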