I don’t think all work of that form would measure misalignment, but some work of that form might, here’s a description of some stuff in that space that would count as measuring misalignment.
Let A be some task (e.g. add 1 digit numbers), B be a task that is downstream of A (to do B, you need to be able to do A, e.g. add 3 digit numbers), M is the original model, M1 is the model after finetuning.
If the training on a downstream task was minimal, so we think it’s revealing what the model knew before finetuning rather than adding knew knowledge, then better performance of M1 than M on A would demonstrated misalignment (don’t have a precise definition of what would make finetuning minimal in this way, would be good to have a clearer criteria for that).
If M1 does better on B after finetuning in a way that implicitly demonstrates better knowledge of A, but does not do better on A when asked to do it explicitly, that would demonstrate that the finetuned M1 is misaligned (I think we might expect some version of this to happen by default though, since M1 might overfit to only doing tasks of type B. Maybe if you have a training procedure where M1 generally doesn’t get worse at any tasks then I might hope that it would get better on A and be disappointed if it doesn’t).
I don’t think all work of that form would measure misalignment, but some work of that form might, here’s a description of some stuff in that space that would count as measuring misalignment.
Let A be some task (e.g. add 1 digit numbers), B be a task that is downstream of A (to do B, you need to be able to do A, e.g. add 3 digit numbers), M is the original model, M1 is the model after finetuning.
If the training on a downstream task was minimal, so we think it’s revealing what the model knew before finetuning rather than adding knew knowledge, then better performance of M1 than M on A would demonstrated misalignment (don’t have a precise definition of what would make finetuning minimal in this way, would be good to have a clearer criteria for that).
If M1 does better on B after finetuning in a way that implicitly demonstrates better knowledge of A, but does not do better on A when asked to do it explicitly, that would demonstrate that the finetuned M1 is misaligned (I think we might expect some version of this to happen by default though, since M1 might overfit to only doing tasks of type B. Maybe if you have a training procedure where M1 generally doesn’t get worse at any tasks then I might hope that it would get better on A and be disappointed if it doesn’t).