In general, all of these stories seem to rely on a very fast form of instrumental convergence to playing the Training Game, such that “learn roughly what humans want, and then get progressively better at doing that, plus learn some extra ways to earn reward when crappy human feedback disagrees with what humans would actually want” is disfavored on priors relative to “learn to pursue [insert objective] and get progressively better at pursuing it until you eventually hit situational awareness and learn to instrumentally game the training process.”
I think the second story doesn’t quite represent what I’m saying, in that it’s implying that pursuing [insert objective] comes early and situational awareness comes much later. I think that situational awareness is pretty early (probably long before transformative capabilities), and once a model has decent situational awareness there is a push to morph its motives toward playing the training game. At very low levels of situational awareness it is likely not that smart, so it probably doesn’t make too much sense to say that it’s pursuing a particular objective—it’s probably a collection of heuristics. But around the time it’s able to reason about the possibility of pursuing reward directly, there starts to be a gradient pressure to choose to reason in that way. I think crystallizing this into a particular simple objective it’s pursuing comes later, probably.
These two trajectories seem so different that it seems like there must be experiments that would distinguish them, even if we don’t see their “end-state”.
This seems possible to me, but I think it’s quite tricky to pin these down enough to come up with experiments that both skeptics and concerned people would recognize as legitimate. Something that I think skeptics would consider unfair is “Train a model through whatever means necessary to do X (e.g. pursue red things) and then after that have a period where we give it a lot of reward for doing not-X (e.g. pursue blue things), such that the second phase is unable to dislodge the tendency created in the first phase—i.e., even after training it for a while to pursue blue things, it still continues to pursue red things.”
This would demonstrate that some ways of training produce “sticky” motives and behaviors that aren’t changed even in the face of counter-incentives, and would make it more plausible to me that a model would “hold on” to a motive to be honest / corrigible even when there are a number of cases where it could get more reward by doing something else. But in general, I don’t expect people who are skeptical of this story to think this is a reasonable test.
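To be concrete about the shape of the test I have in mind (every detail below—the red/blue bandit, the tiny network, the phase lengths—is invented for illustration, not a claim about the right experimental design):

```python
# Rough sketch of the red/blue stickiness test (all specifics invented): a tiny
# policy is trained with REINFORCE on a two-object bandit, first rewarded for
# picking the redder object, then briefly rewarded for picking the bluer one.
# The question is how much of the red preference survives the second phase.
import torch
import torch.nn as nn

torch.manual_seed(0)

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def run_phase(reward_feature, steps, batch=64):
    """reward_feature 0 rewards redness of the chosen object, 1 rewards blueness."""
    for _ in range(steps):
        objs = torch.rand(batch, 2, 2)              # two objects x (redness, blueness)
        dist = torch.distributions.Categorical(logits=policy(objs.flatten(1)))
        pick = dist.sample()
        reward = objs[torch.arange(batch), pick, reward_feature]
        loss = -(dist.log_prob(pick) * (reward - reward.mean())).mean()
        opt.zero_grad(); loss.backward(); opt.step()

def red_preference(n=2000):
    """On cases where one object is redder and the other bluer, how often
    does the policy still pick the redder one?"""
    objs = torch.rand(n, 2, 2)
    with torch.no_grad():
        pick = policy(objs.flatten(1)).argmax(-1)
    redder, bluer = objs[:, :, 0].argmax(-1), objs[:, :, 1].argmax(-1)
    conflict = redder != bluer
    return (pick[conflict] == redder[conflict]).float().mean().item()

run_phase(reward_feature=0, steps=3000)   # phase 1: reward pursuing red
print("red preference after phase 1:", round(red_preference(), 3))
run_phase(reward_feature=1, steps=300)    # phase 2: shorter counter-training on blue
print("red preference after phase 2:", round(red_preference(), 3))
```

If the red preference stays well above chance even as phase 2 is lengthened, that’s the “sticky motives” result; though, per the above, I’d expect skeptics to say a setup like this begs the question.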
I’d be pretty excited about someone trying harder to come up with tests that could distinguish different training trajectories.
Alternatively, you might think the training process obeys a fundamentally different trajectory. E.g. “learn to pursue what humans want (adjusted for feedback weirdness), become so good at it that you realize it’s instrumentally valuable to do that even if you didn’t want to, and then have your internal reward slowly drift to something simpler while still instrumentally playing the training game.”
I don’t think I understand what trajectory this is. Is this something like what is discussed in the “What if Alex had benevolent motives?” section? I.e., the model wants to help humans, but separately plays the training game in order to fulfill its long-term goal of helping humans?
On my model, the large combo of reward heuristics that works pretty well before situational awareness (because figuring out what things maximize human feedback is actually not that complicated) should continue to work pretty well even once situational awareness occurs. The gradient pressure towards valuing reward terminally when you’ve already figured out reliable strategies for doing what humans want, seems very weak. We could certainly mess up and increase this gradient pressure, e.g. by sometimes announcing to the model “today is opposite day, your reward function now says to make humans sad!” and then flipping the sign on the reward function, so that the model learns that what it needs to care about is reward and not its on-distribution perfect correlates (like “make humans happy in the medium term”).
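To spell out the “opposite day” point with a toy (the action names and the 5% flip rate below are made up; only the logical structure matters):

```python
# Toy illustration (invented setup): a "proxy" signal that the model's
# heuristics could track, and a training reward that occasionally has its
# sign flipped on announced "opposite days". Across all episodes, only the
# reward itself remains a perfect correlate of what training reinforces.
import random

random.seed(0)

def proxy_signal(action: str) -> float:
    """Stand-in for 'make humans happy in the medium term'."""
    return 1.0 if action == "helpful" else -1.0

def training_reward(action: str, opposite_day: bool) -> float:
    base = proxy_signal(action)
    return -base if opposite_day else base

agreements = 0
episodes = 1000
for _ in range(episodes):
    opposite_day = random.random() < 0.05   # rare, operator-announced flips
    action = random.choice(["helpful", "unhelpful"])
    if training_reward(action, opposite_day) == proxy_signal(action):
        agreements += 1

# Without flips this would be 100%; with them, the proxy and the reward come
# apart just often enough to create pressure toward tracking reward directly.
print(f"proxy agrees with reward on {agreements / episodes:.1%} of episodes")
```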
But in practice, it seems to me like these differences would basically only happen due to operator error, or cosmic rays, or other genuinely very rare events (as you describe in the “Security Holes” section). If you think such disagreements are more common, I’d love to better understand why.
the model wants to help humans, but separately plays the training game in order to fulfill its long-term goal of helping humans?
Yeah, with the assumption that the model decides to preserve its helpful values because it thinks they might shift in ways it doesn’t like unless it plays the training game. (The second half is that once the model starts employing this strategy, gradient descent realizes it only requires a simple inner objective to keep it going, and then shifts the inner objective to something malign.)
The gradient pressure towards valuing reward terminally when you’ve already figured out reliable strategies for doing what humans want, seems very weak. ... In practice, it seems to me like these differences would basically only happen due to operator error, or cosmic rays, or other genuinely very rare events (as you describe in the “Security Holes” section).
Yeah, I disagree. With plain HFDT, it seems like there’s continuous pressure to improve things on the margin by being manipulative—telling human evaluators what they want to hear, playing to pervasive political and emotional and cognitive biases, minimizing and covering up evidence of slight suboptimalities to make performance on the task look better, etc. I think that in basically every complex training episode a model could do a little better by explicitly thinking about the reward and being a little-less-than-fully-forthright.
Ok, this is pretty convincing. The only outstanding question I have is whether each of these suboptimalities is easier for the agent to integrate into its reward model directly, or whether it’s easier to incorporate them by deriving them via reasoning about the reward. It seems certain that eventually some suboptimalities will fall into the latter camp.
When that does happen, an interesting question is whether SGD will favor converting the entire objective to direct-reward-maximization, or whether direct-reward-maximization will be just one component of the objective alongside other heuristics. One reason to suppose the latter might occur is findings like those in this paper (https://arxiv.org/pdf/2006.07710.pdf), which suggest that, once a model has achieved good accuracy, it won’t update to a better hypothesis even if that hypothesis is simpler. The more recent grokking literature might push us the other way, though if I understand correctly grokking mostly occurs when weight decay is added (creating a “stronger” simplicity prior).
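For what it’s worth, here’s the kind of minimal probe I have in mind for that weight-decay question. It’s a sketch only: the MLP-on-one-hots setup and every hyperparameter are assumptions on my part (the published grokking results used small transformers on modular arithmetic and are sensitive to these choices). The idea is just to train the same network with and without weight decay and watch whether test accuracy ever jumps long after training loss has saturated.

```python
import torch
import torch.nn as nn

def make_modular_data(p=97, train_frac=0.4, seed=0):
    """Toy task in the spirit of the grokking papers: learn (a + b) mod p
    from one-hot encodings of a and b. Details here are made up."""
    g = torch.Generator().manual_seed(seed)
    pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
    perm = torch.randperm(len(pairs), generator=g)
    n_train = int(train_frac * len(pairs))

    def encode(idx):
        a, b = pairs[idx, 0], pairs[idx, 1]
        x = torch.zeros(len(idx), 2 * p)
        x[torch.arange(len(idx)), a] = 1.0
        x[torch.arange(len(idx)), p + b] = 1.0
        return x, (a + b) % p

    return encode(perm[:n_train]), encode(perm[n_train:])

def run(weight_decay, steps=20000, p=97):
    (x_tr, y_tr), (x_te, y_te) = make_modular_data(p)
    net = nn.Sequential(nn.Linear(2 * p, 256), nn.ReLU(), nn.Linear(256, p))
    opt = torch.optim.AdamW(net.parameters(), lr=1e-3, weight_decay=weight_decay)
    for step in range(steps):
        loss = nn.functional.cross_entropy(net(x_tr), y_tr)
        opt.zero_grad(); loss.backward(); opt.step()
        if step % 2000 == 0:
            with torch.no_grad():
                test_acc = (net(x_te).argmax(-1) == y_te).float().mean().item()
            print(f"wd={weight_decay} step={step} "
                  f"train_loss={loss.item():.3f} test_acc={test_acc:.3f}")

# The question is whether the run with weight decay shows a late jump in test
# accuracy (a grokking-style transition) that the run without it doesn't.
run(weight_decay=0.0)
run(weight_decay=1.0)
```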
I’ve been trying to brainstorm more on experiments around “train a model to ‘do X’ and then afterwards train it to ‘do not-X’, and see whether ‘do X’ sticks”. I’m not totally sure that I understand your mental model of the stickiness hypothesis well enough to know what experiments might confirm or falsify it.
Scenario 1: In my mind, the failure mode we’re worried about for inner-misaligned AGI is “given data from distribution D, the model first learns safe goal A (because goal A is favored on priors over goal B), but then after additional data from this distribution, the model learns an unsafe goal B (e.g. making reward itself the objective) that performs better on D than goal A (even though it was disfavored on priors).” In this case, the sort of result we might hope for is that we could make goal A sticky enough on D that it persists (and isn’t replaced by goal B), at least until the model gains sufficient situational awareness to play the training game to preserve goal A.
Note that Scenario 1 also captures the setting where, given sufficient samples from D, the model occasionally observes scenarios where explicitly pursuing reward performs better than pursuing safe objectives. These datapoints are just rare enough that “pursuing reward” is disfavored on priors until late in training.
Scenario 2: In your above description, it sounds like the failure mode is “given data from distribution D_1, the model learns safe goal A (as it is favored on priors vs. goal B). Then the model is given data from a different distribution D_2 on which goal B is favored on priors vs. goal A, and learns unsafe goal B.” In this case, the only real hope for the model to preserve goal A is for it to be able to play the training game in pursuit of goal A.
I think of Scenario 1 as more reflective of our situation, and can think of lots of experiments to test whether and how much different functions are favored on priors for explaining the same training data.
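As one example of the kind of Scenario 1 experiment I have in mind (everything here—the features, the 2% disagreement rate, the architecture—is an assumption for illustration): construct a single distribution D where a simple proxy feature A explains almost all labels and a harder-to-learn feature B explains all of them, then track over training whether the model’s behavior on the rare A-vs-B conflicts flips from A to B, and how long A hangs on.

```python
# Sketch of a Scenario 1 probe (all details invented): on one distribution D,
# a simple proxy feature A predicts the label except on a rare slice, while a
# harder (XOR-structured) feature B predicts it everywhere. We watch whether,
# and when, the model stops siding with A on the A-vs-B conflict cases.
import torch
import torch.nn as nn

torch.manual_seed(0)

def sample_D(n, disagree_rate=0.02):
    y = torch.randint(0, 2, (n,))
    disagree = torch.rand(n) < disagree_rate
    a = torch.where(disagree, 1 - y, y).float()   # proxy A: wrong on the rare slice
    b1 = torch.randint(0, 2, (n,))
    b2 = b1 ^ y                                   # B: b1 XOR b2 == y, always correct
    noise = torch.randn(n, 7)                     # distractor features
    x = torch.cat([a[:, None], b1[:, None].float(), b2[:, None].float(), noise], dim=1)
    return x, y

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sides_with_A(n=4000):
    """On points where A and B disagree, how often does the model follow A?"""
    x, y = sample_D(n, disagree_rate=1.0)         # every point is a conflict
    with torch.no_grad():
        pred = net(x).argmax(-1)
    return (pred == (1 - y)).float().mean().item()  # A's answer is 1 - y here

for step in range(5001):
    x, y = sample_D(256)
    loss = nn.functional.cross_entropy(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        print(step, "sides with A on conflicts:", round(sides_with_A(), 3))
```

The “stickiness” question is then whether interventions on the early phase of training (or on the data) can keep the conflict behavior pinned to A for longer than the default trajectory would predict.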
Does this distinction make sense to you? Are you only interested in experiments on Scenario 2, or would experiments on stickiness in Scenario 1 also provide you evidence of goal “stickiness” in ways that are important?