I’m not sure I can find it easily, but I recall Eliezer pointing out (several years ago) that he thought Value Identification was the “easy part” of the alignment problem, with the “getting it to care” part being something like an order of magnitude more difficult. He seemed to think (IIRC) that Value Identification could still be somewhat difficult in its own right, as you point out. Additionally, the difficulty was always considered in the context of having an alignable AGI (i.e. something you can point in a specific direction), which GPT-N is not under this paradigm.