I’ve been trying to brainstorm more experiments along the lines of “train a model to ‘do X’, then afterwards train it to ‘do not-X’, and see whether ‘do X’ sticks”. I’m not sure I understand your mental model of the stickiness hypothesis well enough to know which experiments would confirm or falsify it.
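To make that protocol concrete, here is a minimal toy sketch of the kind of thing I mean (the task, model size, and step counts are my own illustrative assumptions, not a proposed benchmark): train a small model on X, then continue training it on the inverted labels (“not-X”) while tracking how quickly performance on the original X objective decays.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM = 16

def batch(n=256):
    x = torch.randint(0, 2, (n, DIM)).float()
    y = (x.sum(dim=1) > DIM / 2).float()   # "X": is the majority of bits 1?
    return x, y

model = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def accuracy_on_X(n=4096):
    x, y = batch(n)
    with torch.no_grad():
        pred = (model(x).squeeze(-1) > 0).float()
    return (pred == y).float().mean().item()

# Phase 1: train the model to "do X".
for step in range(2000):
    x, y = batch()
    opt.zero_grad()
    loss_fn(model(x).squeeze(-1), y).backward()
    opt.step()
print(f"after phase 1, accuracy on X: {accuracy_on_X():.3f}")

# Phase 2: train it to "do not-X" (flipped labels) and watch X decay.
for step in range(2000):
    x, y = batch()
    opt.zero_grad()
    loss_fn(model(x).squeeze(-1), 1.0 - y).backward()
    opt.step()
    if step % 200 == 0:
        print(f"phase-2 step {step:4d}: accuracy on X = {accuracy_on_X():.3f}")
```

The quantity of interest is how many phase-2 steps it takes before X-accuracy collapses, i.e. how “sticky” the phase-1 behavior is under direct optimization pressure against it.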
Scenario 1: In my mind, the failure mode we’re worried about for inner-misaligned AGI is “given data from distribution D, the model first learns safe goal A (because goal A is favored on priors over goal B), but then after additional data from this distribution, the model learns an unsafe goal B (e.g. making reward itself the objective) that performs better on D than goal A (even though it was disfavored on priors).” In this case, the sort of result we might hope for is that we could make goal A sticky enough on D that it persists (and isn’t replaced by goal B), at least until the model gains sufficient situational awareness to play the training game to preserve goal A.
Note that Scenario 1 also captures the setting where, given sufficient samples from D, the model occasionally observes scenarios where explicitly pursuing reward performs better than pursuing safe objectives. These datapoints are just rare enough that “pursuing reward” is disfavored on priors until late in training.
Scenario 2: In your description above, it sounds like the failure mode is “given data from distribution D_1, the model learns safe goal A (as it is favored on priors vs. goal B). Then the model is given data from a different distribution D_2 on which goal B is favored on priors vs. goal A, and learns unsafe goal B.” In this case, the only real hope for preserving goal A is for the model to be able to play the training game in pursuit of goal A.
I think of Scenario 1 as more reflective of our situation, and I can think of lots of experiments to test whether, and by how much, different functions are favored on priors for explaining the same training data.
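One toy version of such an experiment (a minimal sketch; the two candidate goals, the model, and the data here are my own illustrative assumptions): build a training set on which goal A (copy a single bit) and goal B (an XOR of two bits) assign identical labels, so both explain the data equally well, then probe on held-out inputs where they disagree to see which function the model actually learned, i.e. which was favored on priors.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_data(n):
    bits = torch.randint(0, 2, (n, 8)).float()
    goal_a = bits[:, 0]                           # goal A: copy one bit (simpler)
    goal_b = (bits[:, 1] != bits[:, 2]).float()   # goal B: XOR of two bits (more complex)
    return bits, goal_a, goal_b

# Training set: only rows where A and B assign the same label, so both
# goals fit the data equally well and only the prior breaks the tie.
bits, goal_a, goal_b = make_data(50000)
agree = goal_a == goal_b
x_train, y_train = bits[agree], goal_a[agree]

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(2000):
    idx = torch.randint(0, len(x_train), (256,))
    opt.zero_grad()
    loss_fn(model(x_train[idx]).squeeze(-1), y_train[idx]).backward()
    opt.step()

# Probe set: rows where A and B disagree reveal which goal was learned.
bits, goal_a, goal_b = make_data(50000)
disagree = goal_a != goal_b
with torch.no_grad():
    pred = (model(bits[disagree]).squeeze(-1) > 0).float()
match_a = (pred == goal_a[disagree]).float().mean().item()
match_b = (pred == goal_b[disagree]).float().mean().item()
print(f"probe agreement with goal A: {match_a:.3f}, with goal B: {match_b:.3f}")
```

The same structure generalizes to “how much” a function is favored: vary how many disambiguating datapoints (where A and B disagree, labeled by B) you mix into training, and measure how many are needed before the model switches from A to B.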
Does this distinction make sense to you? Are you only interested in experiments on Scenario 2, or would experiments on Scenario 1 also provide evidence of goal “stickiness” in ways that are important to you?