Doing multiple rounds of DPO where you sample from the LLM to get comparison pairs seems totally possible and might be the best way to use DPO in many cases.
You can of course use DPO on data obtained from sources other than the LLM itself.
Interesting. I’m thinking that by “many cases” you mean cases where either manually annotating the data over multiple rounds is possible (cheap), or cases where the model is powerful enough to label the comparison pairs itself, giving something like a DPO version of RLAIF. That does sound more like RL.
“manually annotating the data over multiple rounds is possible (cheap)”

I intended this.
This is the same as in normal RLHF. In practice, the sample efficiency of DPO might be higher or lower than that of (e.g.) PPO-based RLHF, depending on the setting.
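To make the multi-round idea a bit more concrete, here is a rough sketch of the loop being discussed: each round, sample pairs of completions from the current model, label a preferred one (by hand or with a judge model, RLAIF-style), and take a DPO step. The sampling, labeling, and scoring functions below are dummy stand-ins I made up so the sketch runs end to end; they are not any particular library's API.

```python
"""Rough sketch of multi-round DPO with on-policy comparison pairs.
The sample/prefer/score functions are hypothetical placeholders; in practice
they would call the LLM, a human annotator or judge model, and the policy /
reference models respectively."""
import random
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO loss on summed sequence log-probabilities."""
    margin = (pi_chosen - pi_rejected) - (ref_chosen - ref_rejected)
    return -F.logsigmoid(beta * margin).mean()

# --- dummy stand-ins so the sketch runs (assumptions, not real APIs) -------
def sample_two_completions(prompt):              # would sample from the current LLM
    return f"{prompt} A", f"{prompt} B"

def prefer(prompt, a, b):                        # human label, or judge model (RLAIF-style)
    return (a, b) if random.random() < 0.5 else (b, a)

def seq_logprob(model_tag, prompt, completion):  # would score under policy / reference model
    return torch.tensor(-10.0 - random.random())
# ---------------------------------------------------------------------------

prompts = ["Explain DPO briefly.", "Summarize RLHF."]
for round_idx in range(3):                       # multiple rounds, resampling each time
    losses = []
    for p in prompts:
        a, b = sample_two_completions(p)         # on-policy samples from the current model
        chosen, rejected = prefer(p, a, b)       # preference label for this pair
        losses.append(dpo_loss(seq_logprob("policy", p, chosen),
                               seq_logprob("policy", p, rejected),
                               seq_logprob("ref", p, chosen),
                               seq_logprob("ref", p, rejected)))
    # In a real run you would backprop this loss and update the policy each
    # round; the reference model stays frozen (or is reset between rounds).
    print(f"round {round_idx}: mean DPO loss = {torch.stack(losses).mean():.3f}")
```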