I think LLMs show some deceptive alignment, but it has a different nature. It does not arise from the LLM consciously trying to deceive the trainer, but from RLHF "aligning" only certain scenarios of the LLM's behaviour, which were not generalized enough to make that alignment more fundamental.