This is an interesting point, but it doesn’t undermine the case that deceptive alignment is unlikely. Suppose that a model doesn’t have the correct abstraction for the base goal, but its internal goal is the closest abstraction it has to the base goal. Because the model doesn’t understand the correct abstraction, it can’t instrumentally optimize for the correct abstraction rather than its flawed one, so it can’t be deceptively aligned. When it messes up due to having a flawed goal, the resulting gradient updates should push its abstraction closer to the correct one. The model’s goal will still point to that updated abstraction, and its alignment will improve. This should continue until its abstraction of the base goal is correct. For more details, see my comment here.