Thanks for the feedback and corrections! You’re right: I was definitely confusing IRL, which is one approach to value learning, with the value learning project as a whole. I think you’re also right that most of the “Outer alignment concerns” section doesn’t really apply to RLHF as that section is currently written, or at least it’s not immediately clear how it does. Here’s another attempt:
RLHF attempts to infer a reward function from human comparisons of task completions. But a reward function learned from these stated preferences might not be the “actual” reward function: even if we could perfectly predict the human preference ordering on the training set of task completions, it’s hard to guarantee that the learned reward model will generalize to all task completions. We also have to consider that the stated human preferences might be irrational; they could be intransitive or even cyclic, for instance. So it seems possible to me that a reward model learned from human feedback still has to account for human biases, just as a reward function learned through IRL does.
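To make the setup concrete, here’s a minimal sketch of the Bradley-Terry-style preference loss commonly used for RLHF reward modeling. The names (`RewardModel`, `preference_loss`), the use of pre-computed completion embeddings, and the shapes are all illustrative assumptions on my part, not any particular implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of Bradley-Terry-style preference learning over pairwise comparisons,
# the kind of reward modeling commonly used in RLHF. Names/shapes are hypothetical.

class RewardModel(nn.Module):
    """Maps a pre-computed embedding of a task completion to a scalar reward."""
    def __init__(self, embed_dim: int):
        super().__init__()
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, completion_embedding: torch.Tensor) -> torch.Tensor:
        return self.head(completion_embedding).squeeze(-1)

def preference_loss(model: RewardModel,
                    preferred: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of the stated comparisons under
    P(preferred > rejected) = sigmoid(r_preferred - r_rejected)."""
    return -F.logsigmoid(model(preferred) - model(rejected)).mean()

# Toy usage: 16 comparison pairs over 128-dim completion embeddings.
model = RewardModel(embed_dim=128)
preferred, rejected = torch.randn(16, 128), torch.randn(16, 128)
loss = preference_loss(model, preferred, rejected)
loss.backward()
```

One thing this makes visible: because the model assigns each completion a single scalar reward, it can only represent a total ordering of completions. If the human comparisons contain cycles, the loss just fits them as well as it can, which is one concrete way the gap between stated preferences and the “actual” reward function can show up.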
How’s that for a start?