I would instead say that ELK is one component of getting good human feedback. The more ambitious the ELK proposal is (e.g. requiring that the human actually understand something correctly before it counts as reported), the more it is trying to do all of the work needed to get good human feedback (which may involve a lot of subtle work), so that you only need a very simple wrapper around it (e.g. approval-directedness, RLHF, human-guided self-modification, or automated alignment research) to get good outcomes.