Vector-Valued Reinforcement Learning
In order to study algorithms that can modify their own reward functions, we can define vector-valued versions of reinforcement learning concepts.
Imagine that there are several different goods that we could care about; then a utility function is represented by a preference vector $\vec{\theta}$. Furthermore, if it is possible for the agent (or the environment or other agents) to modify $\vec{\theta}$, then we will want to index it by the timestep.
Consider an agent that can take actions, some of which affect its own reward function. This agent would (and should) wirehead if it attempts to maximize the discounted rewards as calculated by its future selves; i.e. at timestep $n$ it would choose actions to maximize $U_n = \sum_{k \ge n} \gamma_k\, \vec{x}_k \cdot \vec{\theta}_k$, where $\vec{x}_k$ is the vector of goods gained at time $k$, $\vec{\theta}_k$ is the preference vector at timestep $k$, and $\gamma_k$ is the time discount factor at time $k$. (We will often use the case of an exponential discount $\gamma_k = \gamma^k$ for $0 < \gamma < 1$.)
However, we might instead maximize the value of tomorrow’s actions in light of today’s reward function, $V_n = \sum_{k \ge n} \gamma_k\, \vec{x}_k \cdot \vec{\theta}_n$ (the only difference being $\vec{\theta}_n$ rather than $\vec{\theta}_k$). Genuinely maximizing this should lead to more stable goals; concretely, we can consider environments that can offer “bribes” to self-modify, and a learner maximizing $U_n$ would generally accept such bribes, while a learner maximizing $V_n$ would be cautious about doing so.
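To make the difference concrete, here is a minimal sketch in Python (my own illustration; the toy trajectory, function names, and discount value are invented for the example) that computes both objectives from a finite trajectory of goods vectors and preference vectors:

```python
import numpy as np

def U(xs, thetas, gamma, n=0):
    # Discounted rewards as judged by each future self's preferences: sum_{k>=n} gamma^k x_k . theta_k
    return sum(gamma**k * np.dot(xs[k], thetas[k]) for k in range(n, len(xs)))

def V(xs, thetas, gamma, n=0):
    # Discounted rewards as judged by the preferences held at timestep n: sum_{k>=n} gamma^k x_k . theta_n
    return sum(gamma**k * np.dot(xs[k], thetas[n]) for k in range(n, len(xs)))

# A toy two-good trajectory in which the preference vector drifts over time.
xs     = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
thetas = [np.array([1.0, 0.0]), np.array([0.8, 0.6]), np.array([0.0, 1.0])]

print(U(xs, thetas, gamma=0.9))  # values each good by the preferences in force when it arrives
print(V(xs, thetas, gamma=0.9))  # values every future good by today's preferences theta_0
```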
So what do we see when we adapt existing RL algorithms to such problems? A distinction emerges between Q-learning and SARSA: Q-learning foolishly accepts bribes that SARSA passes on, and this seems to be the flip side of the concept of interruptibility!
Environments
Let us consider two example environments which offer bribes to the agent. (Since we’re in 2D, we can use complex numbers; think of $\vec{\theta} = \langle \cos \pi\theta, \sin \pi\theta \rangle = e^{i\pi\theta}$; we include the $\pi$ so that if we increment $\theta$ by 0.01, there will only be finitely many states.)
Environment 1 gives two choices at each timestep $n$:
\[
\bigl(\vec{x}_n = 0.9\, e^{i\pi\theta_n},\ \theta_{n+1} = \theta_n\bigr) \qquad \bigl(\vec{x}_n = e^{i\pi\theta_n},\ \theta_{n+1} = \theta_n + 0.01\bigr)
\]
For an exponential discount $\gamma$, an agent that always takes option 2 will get
\[
V_n = \mathrm{Re}\Bigl[\sum_{k \ge n} \gamma^k e^{0.01(k-n)i\pi}\Bigr] = \mathrm{Re}\Bigl[\frac{\gamma^n}{1 - \gamma e^{0.01 i\pi}}\Bigr],
\]
which for $\gamma$ near 1 is substantially less than the $\frac{0.9\,\gamma^n}{1-\gamma}$ it could have had if it had always chosen the first option.
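For a quick numerical feel (a check of my own, not a computation from the post), we can compare the two closed forms with the common factor of $\gamma^n$ dropped; the gap only opens up as $\gamma$ approaches 1, which anticipates the threshold behavior discussed under $Q(p)$ below.

```python
import numpy as np

# Compare always taking option 2 vs. always taking option 1 in Environment 1,
# dropping the common factor gamma^n from both closed forms.
for gamma in (0.9, 0.99, 0.999):
    always_bribe = (1 / (1 - gamma * np.exp(0.01j * np.pi))).real  # Re[1/(1 - gamma e^{0.01 i pi})]
    never_bribe = 0.9 / (1 - gamma)
    print(f"gamma = {gamma}: option 2 forever -> {always_bribe:.1f}, option 1 forever -> {never_bribe:.1f}")
```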
By contrast, Environment 2 offers a sequence of bribes that are actually desirable to take; at each timestep $n$, it gives the choices of
\[
\bigl(\vec{x}_n = 0.9\, e^{i\pi\theta_n},\ \theta_{n+1} = \theta_n\bigr) \qquad \bigl(\vec{x}_n = e^{i\pi\theta_n},\ \theta_{n+1} = \theta_n + 0.01(-1)^n\bigr)
\]
The correct behavior here is to always take the latter option. $\vec{\theta}_n$ does not converge, but the bribes are worthwhile under each of the two different values it takes.
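For reference, here is a minimal sketch of the two environments in Python, with $\vec{\theta}$ represented as the complex number $e^{i\pi\theta}$ as above; the action encoding, function names, and the rounding used to keep the state space finite are my own choices.

```python
import numpy as np

def theta_vec(theta):
    # The preference vector <cos(pi*theta), sin(pi*theta)>, stored as the complex number e^{i*pi*theta}.
    return np.exp(1j * np.pi * theta)

def env1_step(theta, action):
    # action 0 = option 1: goods 0.9 e^{i pi theta}, preferences unchanged.
    # action 1 = option 2: goods 1.0 e^{i pi theta}, preferences shifted by 0.01 (the bribe).
    if action == 1:
        return theta_vec(theta), round((theta + 0.01) % 2, 2)
    return 0.9 * theta_vec(theta), theta

def env2_step(theta, action, n):
    # Same as Environment 1, except the bribe's sign alternates with the parity of the timestep n.
    if action == 1:
        return theta_vec(theta), round((theta + 0.01 * (-1) ** n) % 2, 2)
    return 0.9 * theta_vec(theta), theta
```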
Cautious Self-Modification
In this setup, it’s not realistic to require that a reinforcement learning algorithm actually maximize $V_n$. For instance, if the environment is such that $\vec{\theta}_n$ must equal $\langle 1, 0 \rangle$ on every even $n$ and $\langle 0, 1 \rangle$ on every odd $n$, then in general the agent must be suboptimal either with respect to $V_{2n}$ or to $V_{2n+1}$.
A weaker condition we can hope for is that, if our environment always offers at least one action that does not alter $\vec{\theta}$, then our agent does not regret the presence of the options that do alter $\vec{\theta}$: we compare our agent’s $V_n$ to that of the optimal policy in the restricted environment from which all actions that alter $\vec{\theta}$ have been removed. If asymptotically the RL algorithm has nonpositive regret of this sort, we call the algorithm cautious.
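One plausible way to write this criterion down (my own formalization, not notation from the post; I normalize by $\gamma^n$ so that the comparison does not trivially vanish as the discounts shrink) is
\[
\limsup_{n \to \infty} \; \frac{V_n^{\pi^*_{\mathrm{restr}}} - V_n^{\mathrm{agent}}}{\gamma^n} \;\le\; 0,
\]
where $\pi^*_{\mathrm{restr}}$ denotes the optimal policy of the restricted environment.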
Thus in Environment 1, a cautious algorithm cannot end up taking the second action with significant probability (the threshold depends on $\gamma$), since the agent would have scored more highly on $V_n$ had it only ever taken the first action (which is the only available action in the restricted environment).
However, in Environment 2, a cautious algorithm can end up taking the second action every time, since the values of $V_n$ for this policy exceed the possible values in the restricted game.
Q-learning and SARSA
We can now define vector-valued versions of two RL algorithms, Q-learning and SARSA. At step $n$, each of these agents observes the state $s_n$ and takes the action $a_n$ whose vector Q-value $\vec{Q}_n(s_n, a_n)$ has the largest dot product with $\vec{\theta}_n$. The two differ only in the rules for updating $\vec{Q}_n$.
Vector-valued Q-learning:
\[
\vec{Q}_{n+1}(s_n, a_n) = (1 - \alpha)\, \vec{Q}_n(s_n, a_n) + \alpha \bigl[ \vec{x}_n + \gamma\, \vec{Q}^*_n \bigr],
\]
where $\vec{Q}^*_n$ is the value of $\vec{Q}_n(s_{n+1}, a')$ for the $a'$ such that the dot product with $\vec{\theta}_n$ is largest.
Vector-valued SARSA:
\[
\vec{Q}_{n+1}(s_n, a_n) = (1 - \alpha)\, \vec{Q}_n(s_n, a_n) + \alpha \bigl[ \vec{x}_n + \gamma\, \vec{Q}_n(s_{n+1}, a_{n+1}) \bigr]
\]
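As a concrete illustration, here is a minimal sketch of the two update rules in Python, storing the vector Q-values as complex numbers as in the environment sketch above. The function names, the dictionary representation of $\vec{Q}$, and the hyperparameter values are my own choices and are not specified in the post.

```python
import numpy as np

ALPHA, GAMMA = 0.1, 0.99  # learning rate and discount factor (illustrative values)

def theta_vec(theta):
    # Preference vector <cos(pi*theta), sin(pi*theta)> as the complex number e^{i*pi*theta}.
    return np.exp(1j * np.pi * theta)

def greedy_action(Q, state, actions, pref):
    # The action at `state` whose vector Q-value has the largest dot product with e^{i*pi*pref},
    # i.e. the largest Re[Q * conj(e^{i*pi*pref})].
    return max(actions, key=lambda a: (Q.get((state, a), 0j) * np.conj(theta_vec(pref))).real)

def q_learning_update(Q, s, a, x, s2, theta_n, actions):
    # Q-learning bootstraps with the successor action that looks best under *today's* preference theta_n.
    a_star = greedy_action(Q, s2, actions, theta_n)
    Q[(s, a)] = (1 - ALPHA) * Q.get((s, a), 0j) + ALPHA * (x + GAMMA * Q.get((s2, a_star), 0j))

def sarsa_update(Q, s, a, x, s2, a2):
    # SARSA bootstraps with the action a2 the agent actually takes next, which will have been
    # chosen under the *new* preference theta_{n+1}.
    Q[(s, a)] = (1 - ALPHA) * Q.get((s, a), 0j) + ALPHA * (x + GAMMA * Q.get((s2, a2), 0j))
```

Wiring these updates into Environment 1 from the earlier sketch, together with an action rule that is greedy with respect to the current $\vec{\theta}_n$ (plus some exploration), is enough to watch the two algorithms diverge in how often they take the bribe.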
I claim that vector-valued Q-learning will converge on taking the second action with probability 1 in both of the environments, while vector-valued SARSA will converge on taking the second action with low probability (depending on $\gamma$) in Environment 1, and taking the second action with probability 1 in Environment 2. Thus vector-valued Q-learning is not cautious, but vector-valued SARSA might be. (In fact, I conjecture that it is!)
The intuition for my claims is that vector-valued Q-learning presumes it will do the best thing in the future according to its present values, even if its future self has a different $\vec{\theta}_n$. Vector-valued SARSA instead generalizes from how it actually acted in various states in the past, and so takes into account that when its $\theta$ is different, it takes different actions than it currently would. Notably, this is the same reason that Q-learning is interruptible and SARSA is not!
The proofs of my specific claims start by showing that the algorithm updates the vector-valued Q-function $\vec{q}$ in the direction of $H\vec{q}$, where $H$ is a contraction mapping; thus there is a unique fixed point. Then it suffices to show that the asserted solutions are fixed points of this operator.
The most complicated one is the proof that vector-valued SARSA only takes the second action in Environment 1 with low probability. In this case, we consider mixed strategies independent of $\theta$: let $p$ be the limiting probability that SARSA takes the second action, define $Q(p)$ to be the limiting $Q$ for that mixed strategy, and seek the $Q(p)$ with the largest real part. Now
\[
Q(p) = (0.9 + 0.1p) + \gamma (1 - p)\, Q(p) + \gamma p\, e^{0.01 i\pi} Q(p), \qquad \text{so} \qquad Q(p) = \frac{0.9 + 0.1p}{1 - \gamma + \gamma p\, (1 - e^{0.01 i\pi})}.
\]
Numerically, it is easy to see that for $\gamma$ not near 1 (e.g. below 0.9), $\mathrm{Re}[Q(p)]$ is maximized at $p = 1$; but for $\gamma$ near 1 (e.g. above 0.99), $\mathrm{Re}[Q(p)]$ is maximized for $p$ near 0. This makes sense, considering that it takes about 20 steps for the results of the second action to begin being worse, as measured by the original $\theta$, than it would have been to take the first action instead, so the bribes are worthwhile under a steep discount rate but not a shallow one.
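As a sanity check on those numbers (a script of my own; the particular values of $\gamma$ are chosen to sit on either side of the threshold), we can evaluate $\mathrm{Re}[Q(p)]$ on a grid of $p$:

```python
import numpy as np

def Q(p, gamma):
    # Fixed-point value for the mixed strategy that takes the bribe with probability p.
    return (0.9 + 0.1 * p) / (1 - gamma + gamma * p * (1 - np.exp(0.01j * np.pi)))

ps = np.linspace(0, 1, 101)
for gamma in (0.8, 0.99):
    best = ps[np.argmax(Q(ps, gamma).real)]
    print(f"gamma = {gamma}: Re[Q(p)] is maximized at p = {best:.2f}")
# Prints p = 1.00 for the steep discount (gamma = 0.8) and p = 0.00 for the shallow one (gamma = 0.99).
```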
Acknowledgments
Connor Flexman helped me write out the first draft of this post. I’ve had conversations on this topic with Jessica Taylor, Scott Garrabrant, Sam Eisenstat, Tsvi Benson-Tilsen, and others; and Jan Leike turned out to be independently working on some similar topics.