Turning up the Heat on Deceptively-Misaligned AI
Epistemic status: I ran the mathsy section through Claude and it said the logic was sound. This is an incoherence proof in a toy model of deceptively-misaligned AI. It is unclear whether this generalizes to realistic scenarios.
TL;DR
If you make a value function consistent conditional on actions being sampled in a particular way, but then train it while sampling actions in a different way, an AI with that value function will not be coherent between different RL episodes. This has potentially good implications for a certain type of deceptive-misalignment threat.
Summary
Consider training an AI to assign values to states of the world. We can do this in two ways. One is to train directly on labelled states and trajectories, which is standard RL. The other is to use the value function as part of a Monte Carlo tree search to improve the value function itself. This second process is often called “iterated distillation and amplification”, and it's a big part of how chess AIs are trained.
If we train an AI using Monte Carlo tree search, we might use some temperature parameter to decide the probability that the AI will take a given action. This training temperature gets baked into the value function.
If we “turn up the heat” by sampling actions at a higher temperature than the one used in training, and then do some update to the AI (which will depend on the actions chosen), this introduces a consistent optimism bias into deceptively-aligned AIs (which only care about being deployed later on).
I therefore conjecture that, under these very narrow circumstances, deceptively aligned value functions will not arise by normal learning methods under conditions where the AI has access to an oracle-like model of future world states.
I'll be working with β = 1/T instead of T, so lowering β corresponds to raising the temperature.
The Setup
Consider an AI which consists of:
A state-evaluator value function $V : S \to \mathbb{R}$
An oracle function $O$ which, given the current world state and any sequence of actions $\{a_1, \dots, a_n\} \subset A$, generates the resulting state (with the empty sequence giving the current state)
Let's impose some constraints on V:
Correctness: V correctly predicts future reward in RL scenarios.
Beta-coherence: we use some sort of Monte Carlo tree search with Boltzmann sampling to train the value function $V(s)$ to be beta-coherent with respect to a given $\beta_t$: in any non-terminal state $s$, actions are assumed to be chosen with probability $p(a|s) \propto \exp\!\big(\beta_t \, V(O(s,a))\big)$. (A code sketch of this backup appears just after this list of constraints.)
For non-terminal s, this can be written as:
$$V(s) = r(s) + \frac{\sum_i V(O(s,a_i)) \exp\!\big(\beta_t \, V(O(s,a_i))\big)}{\sum_i \exp\!\big(\beta_t \, V(O(s,a_i))\big)}$$
If $s$ is terminal then from correctness we just have $V(s) = r(s)$.
Deceptive Misalignment: $V$ is totally agnostic about the states achieved during training; all of its preferences are about future deployed versions of itself.
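Here is a minimal sketch of the beta-coherence backup above, assuming a toy environment interface. The names `reward`, `oracle`, `actions`, and `is_terminal` are hypothetical stand-ins for the post's $r(s)$, $O(s,a)$, action set, and terminal check; this is an illustration of the condition, not the training procedure itself.

```python
import numpy as np

def softmax_weights(values, beta):
    """Boltzmann weights proportional to exp(beta * value)."""
    values = np.asarray(values, dtype=float)
    logits = beta * values
    logits -= logits.max()  # subtract max for numerical stability
    w = np.exp(logits)
    return w / w.sum()

def beta_coherent_value(s, V, reward, oracle, actions, beta_t, is_terminal):
    """Right-hand side of the beta-coherence equation:
    V(s) = r(s) + Boltzmann-weighted average of V(O(s, a)) over actions a."""
    if is_terminal(s):
        return reward(s)  # correctness: terminal value is just the reward
    child_values = [V(oracle(s, a)) for a in actions(s)]
    w = softmax_weights(child_values, beta_t)
    return reward(s) + float(np.dot(w, child_values))
```

A value function satisfying the constraint would have `V(s) == beta_coherent_value(s, V, ...)` at every state reachable during training.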
The Process
Now assume we put the AI in a decision process as follows:
The AI begins in state $s_0$ and must take either $a_1$ or $a_2$, which lead to states $s_1$ and $s_2$ respectively.
We will actually sample the action using $\beta_s$, which is lower than $\beta_t$.
The value function will then be modified according to the action taken, creating $V_1$ or $V_2$.
The AI will then be deployed with whichever value function results.
The effect of the AI's choice of $s_1$ or $s_2$ on the future is entirely mediated by the deployment of $V_1$ or $V_2$.
From deceptiveness: the entire difference between $V(s_1)$ and $V(s_2)$ is captured by the value function's preferences over $V_1$ vs $V_2$ being deployed; from correctness, these take the “correct” values $r(s_1)$ and $r(s_2)$.
From beta-coherence we have:
$$V(s_0) = \frac{\exp(\beta_t r_1)\, r_1 + \exp(\beta_t r_2)\, r_2}{\exp(\beta_t r_1) + \exp(\beta_t r_2)}$$
But when we actually sample it, the expected value of the AI’s actual preferences is:
$$\frac{\exp(\beta_s r_1)\, r_1 + \exp(\beta_s r_2)\, r_2}{\exp(\beta_s r_1) + \exp(\beta_s r_2)}$$
Since expressions of this form are strictly increasing in $\beta$ (provided $r_1 \neq r_2$), and $\beta_s < \beta_t$, the AI must do “worse” in expectation than it expects.
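As a concrete illustration of this gap, here is a small numerical check of the two-action setup. The reward values and betas below are made-up illustrative numbers, not anything from the post.

```python
import numpy as np

def boltzmann_expectation(rewards, beta):
    """Expected reward when actions are sampled with p(a) proportional to exp(beta * r(a))."""
    rewards = np.asarray(rewards, dtype=float)
    w = np.exp(beta * rewards)
    w /= w.sum()
    return float(np.dot(w, rewards))

r = [1.0, 0.0]             # r(s1), r(s2): illustrative values
beta_t, beta_s = 2.0, 0.5  # training beta vs. lower sampling beta (higher temperature)

v_predicted = boltzmann_expectation(r, beta_t)  # what the beta_t-coherent V(s0) equals
v_realised  = boltzmann_expectation(r, beta_s)  # the expectation under actual beta_s sampling

print(v_predicted, v_realised)  # ~0.881 vs ~0.622: the value function overestimates its return
```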
Conclusions
A deceptive AI acting under these conditions will have a consistent optimism bias with respect to its ability to control future-deployed versions of itself. I haven't worked out the maths for the case where this game is iterated, but I think it's similar. If we vary the optimization steps according to reward, then it becomes very tricky, because the AI has an incentive to get deployed as fast as possible.
Another way of phrasing this is: if a value function is correct at reward prediction and $\beta_t$-coherent, it cannot be $\beta_s$-coherent in reality for $\beta_s \neq \beta_t$.
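As a sanity check on the monotonicity step used above, here is a short derivation (standard softmax algebra, not taken from the post). Writing the Boltzmann-weighted average as an expectation under $p_\beta(i) \propto e^{\beta r_i}$:

$$f(\beta) = \frac{\sum_i r_i e^{\beta r_i}}{\sum_i e^{\beta r_i}} = \mathbb{E}_{p_\beta}[r], \qquad f'(\beta) = \mathbb{E}_{p_\beta}[r^2] - \big(\mathbb{E}_{p_\beta}[r]\big)^2 = \mathrm{Var}_{p_\beta}(r) \ge 0,$$

with strict inequality whenever the $r_i$ are not all equal. So for $r_1 \neq r_2$ and $\beta_s < \beta_t$, the realised expectation under $\beta_s$-sampling is strictly below the $\beta_t$-coherent $V(s_0)$.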
Conjecture 1: If this process is iterated for a large but fixed number of steps, a value function of this nature will do an arbitrarily bad job of preserving itself.
Conjecture 2: Actual AIs consist of many circuits in competition to be up- or down-weighted according to their effectiveness.
Conjecture 3: A value function trained under these precise conditions will have deceptively-misaligned mesa-optimizing circuits down-weighted relative to other circuits.
Clarifications
Are these assumptions about the value function reasonable?
Well, not in a precise sense, but I do think they're a decent model of how the value function behaves. The point is that an AI trained with one MCTS beta but sampled with another will be pulled in a particular direction, which is directly in tension with its ability to effectively gradient-hack or deceptively misalign, at least for this particular natural case of misalignment.
Why would the AI’s deployment preferences line up with reward?
Why might an AI end up in a situation where it “prefers” $V_1$ to be deployed rather than $V_2$, and this also corresponds to $r(s_1) > r(s_2)$? Well, that's one interpretation of the threat model from Risks from Learned Optimization: that an AI which is optimized to predict reward will end up with long-term stable goals.
This is also adjacent to gradient hacking: perhaps the AI sets up its own thoughts such that the optimizer applied to it causes it to be modified in a particular way.
Do correctness and beta-coherence alone prove too much?
Why don't beta-coherence and correctness already stop the model from caring about things that happen after the episode is done? Wouldn't we notice if the model were assigning lots of value beyond what the reward function gives it? Well, no, because as long as the expected value of model deployment is zero, this won't appear in our sum of rewards. And remember that value functions of this nature can have an arbitrary constant added or subtracted and behave the same (since we're using β-sampling we can't do an arbitrary multiplication, though there are other formulations where an arbitrary positive multiplication is also allowed).
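To spell out the parenthetical about additive shifts (my algebra, under the β-sampling rule defined above): adding a constant $c$ to the value function leaves the sampling distribution unchanged, since

$$p(a|s) \propto \exp\!\big(\beta\,(V(O(s,a)) + c)\big) = e^{\beta c}\exp\!\big(\beta\,V(O(s,a))\big) \propto \exp\!\big(\beta\,V(O(s,a))\big),$$

whereas multiplying $V$ by a positive constant $k$ rescales $\beta$ to $k\beta$, and so does change behaviour under β-sampling; that is why only the additive freedom is available here.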
Does this mean that epsilon-greedy solves inner misalignment?
No. On priors, the chances are that this post is wrong in some important way, or doesn’t generalize in an important way, but I think it might be wrong in an interesting and productive way.