Without some form of regularization, some forms of RL can lead to trajectories that have zero probability under the base distribution (e.g. because they break a correlation that holds with 100% reliability on the pretraining distribution). However, sampling cannot lead to trajectories with zero probability?
As stated, this claim is false for LMs, absent top-p sampling or floating-point rounding errors: every token has a logit greater than negative infinity, and thus a probability strictly greater than 0. So with enough samples, you’ll eventually find the RL trajectories.
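The floating-point caveat matters in practice. In exact arithmetic a finite logit always gives nonzero probability, but in float32 a sufficiently suppressed logit underflows to exactly 0 after the softmax. A minimal sketch (the logit values are illustrative, not from any particular model):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

# In exact arithmetic (approximated by float64 here), every finite logit
# yields a strictly positive probability.
p64 = softmax(np.array([10.0, -100.0], dtype=np.float64))
print(p64[1])  # tiny but strictly > 0

# In float32, exp(-110) is below the smallest subnormal (~1.4e-45),
# so the suppressed token gets probability exactly 0.
p32 = softmax(np.array([10.0, -100.0], dtype=np.float32))
print(p32[1])  # exactly 0.0
```

So "probability zero under sampling" can literally happen once the model pushes a logit far enough down, which is the rounding-error exception flagged above.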
This is obviously a super pedantic point: RL finds sentences with cross-entropy of 30+ nats wrt the base distribution all the time, while you’ll never run Best-of-exp(30) ≈ 1e13. And there’s an empirical question of how much performance you get versus how far your new policy drifts from the old one; e.g. if you look at Leo Gao’s recent RLHF paper, you’ll see that RL ends up further off-distribution than BoN at equal proxy reward.
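To make the "30 nats" comparison concrete: the standard analytic expression for the KL divergence of best-of-n sampling from the base policy (used in e.g. the overoptimization literature) is KL = log n − (n−1)/n, so matching 30 nats of RL optimization would take n = exp(30) ≈ 1e13 samples. A quick check, assuming that formula:

```python
import math

def bon_kl(n):
    # Analytic KL of best-of-n sampling from the base policy:
    # KL = log(n) - (n - 1)/n  (the expression commonly used for BoN).
    return math.log(n) - (n - 1) / n

n = math.exp(30)
print(f"n = {n:.2e}")        # about 1.07e13 samples
print(bon_kl(n))             # about 29 nats
```

I.e. even a physically implausible 1e13-sample BoN only applies ~29 nats, while RL routinely moves the policy that far.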
That being said, I do think you need to make more points than just “RL can result in incredibly implausible trajectories” in order to claim that BoN is safer than RL, since I claim that Best-of-exp(30) is not clearly safe either!
No, I’m not claiming that. What I am claiming is something more like: there are plausible ways in which applying 30 nats of optimization via RLHF leads to worse results than best-of-exp(30) sampling, because RLHF might find a different kind of solution that scores equally highly on reward.
Toy example: say we have two jointly Gaussian random variables X and Y that are positively correlated (but not perfectly). I could sample 1000 pairs and pick the one with the highest X-value. This would very likely also give me an unusually high Y-value (how high depends on the correlation). Or I could change the parameters of the distribution such that a single sample will typically have an X-value as high as the 99.9th percentile of the old distribution. In that case, the Y-value I typically get will depend a lot on how I changed the parameters. E.g. if I just shifted the X-component of the mean and nothing else, I won’t get higher Y-values at all.
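The toy example above can be simulated directly. Below is a sketch with an assumed correlation of 0.8: best-of-1000 selection on X drags Y up through the correlation, while shifting only the X-mean (to roughly the 99.9th percentile, ~3.09 sigma) leaves Y untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                              # assumed X-Y correlation
cov = [[1.0, rho], [rho, 1.0]]
n_trials, n_samples = 2000, 1000

# "Conditioning": draw 1000 (X, Y) pairs, keep the pair with the largest X.
xy = rng.multivariate_normal([0.0, 0.0], cov, size=(n_trials, n_samples))
best = xy[np.arange(n_trials), xy[:, :, 0].argmax(axis=1)]
print("best-of-1000 mean Y:", best[:, 1].mean())
# Y comes out high too: roughly rho * E[max of 1000 normals] ~ 2.6

# "Parameter update": shift only the X-mean so one sample typically
# matches the old 99.9th percentile of X (~3.09 sigma).
shifted = rng.multivariate_normal([3.09, 0.0], cov, size=n_trials)
print("mean-shifted mean Y:", shifted[:, 1].mean())
# Y stays near 0: the same selection pressure on X, but the X-Y
# correlation is simply bypassed.
```

Both interventions buy comparable "optimization" on X, but only conditioning preserves the correlation with Y, which is the asymmetry being claimed.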
I’m pretty unsure what kinds of parameter changes RLHF actually induces, I’m just saying that parameter updates can destroy correlations in a way that conditioning doesn’t. This is with the same amount of selection pressure on the proxy in both cases.
Cool, I don’t think we disagree here.