Yep, thanks, this makes sense. I still don’t have an intuitive picture of what differences to expect in the trained result, though.
Clarifying Concrete Algorithms
In the most basic possible shoggoth+face training setup, you sample CoT+summary, you score the summary somehow, and then you do this lots of times and throw away a worse-scoring fraction of these samples. Then you fine-tune the shoggoth on the CoT parts of these, and fine-tune the face on the summary parts. I’ll call this ‘basic training’.
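To pin down what I mean, here is a rough sketch of one round of basic training. The helpers (sample_cot, sample_summary, score, finetune_shoggoth, finetune_face) are placeholders I’m assuming for illustration, not anyone’s actual API:

```python
from typing import Callable, List, Tuple


def basic_training_round(
    sample_cot: Callable[[str], str],           # shoggoth: prompt -> CoT
    sample_summary: Callable[[str, str], str],  # face: (prompt, CoT) -> summary
    score: Callable[[str], float],              # scores a summary
    finetune_shoggoth: Callable[[List[Tuple[str, str]]], None],
    finetune_face: Callable[[List[Tuple[str, str]]], None],
    prompt: str,
    n_samples: int = 64,
    keep_frac: float = 0.5,
) -> None:
    """One round of 'basic training': sample, filter on summary score, fine-tune."""
    samples = []
    for _ in range(n_samples):
        cot = sample_cot(prompt)
        summary = sample_summary(prompt, cot)
        samples.append((cot, summary, score(summary)))

    # Throw away the worse-scoring fraction, based only on the summary score.
    samples.sort(key=lambda s: s[2], reverse=True)
    kept = samples[: max(1, int(len(samples) * keep_frac))]

    # Fine-tune each model on its own part of the surviving samples.
    finetune_shoggoth([(prompt, cot) for cot, _, _ in kept])
    finetune_face([(prompt + cot, summary) for cot, summary, _ in kept])
```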
The most basic implementation of your thing is similar, except it discards the worst samples in the more complex way you’ve defined, instead of just by global score. I’ll call this ‘your proposal’. (Although it ignores the contrastive part of your proposal.)
(For both basic training and your proposal, I’m going to pretend there is only one question/prompt; really we have across-prompt issues similar to the across-summary issues you’re trying to solve, IE, by discarding the globally worst-scoring CoT+summary we discard answers to hard questions, meaning we would never train to do better on those questions. So really we need to at least take multiple samples for each question and discard samples based on performance relative to a question. Easier to focus on one question to avoid describing too many algorithmic details.)
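(For instance, a minimal sketch of what I mean by per-question filtering; the helper name and data layout are just illustrative:

```python
from typing import Dict, List, Tuple


def keep_best_per_prompt(
    samples: Dict[str, List[Tuple[str, str, float]]],  # prompt -> [(cot, summary, score)]
    keep_frac: float = 0.5,
) -> Dict[str, List[Tuple[str, str, float]]]:
    """Discard the worse-scoring samples within each prompt, not globally,
    so that answers to hard questions aren't filtered out wholesale."""
    kept = {}
    for prompt, group in samples.items():
        group = sorted(group, key=lambda s: s[2], reverse=True)
        kept[prompt] = group[: max(1, int(len(group) * keep_frac))]
    return kept
```

)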
Your proposal splits into the CoT part (which you call the n part), and the summary part (which you call the k part).
Analyzing CoT Gradient
For the CoT part, basic training keeps the CoTs which resulted in the best summaries. Your proposal instead averages the top 3 summaries for each CoT. This de-noises somewhat, at the cost of taking more summary samples per CoT. I’m not sure how to think about the trade-off there. More summary samples per CoT means fewer CoT samples overall, so we’re doing a better job scoring the CoTs, but we get fewer scores overall, so we learn less. Maybe the gradient step-size can be increased a little because we’re more confident in the gradient steps being taken, but we still explore less of the CoT space.
Another variation would be to average all the summary samples for that CoT to score a CoT sample. This de-noises more, for the same price in number of samples. This way, we’re simply scoring CoTs based on how well they make the summaries perform, with several samples to de-noise the gradient.
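To make the two scoring rules concrete, here is a minimal sketch; the function name and the top_k=3 default are just my illustration of the ‘average the top 3’ rule versus the ‘average everything’ variation:

```python
from statistics import mean
from typing import List, Optional


def score_cot(summary_scores: List[float], top_k: Optional[int] = 3) -> float:
    """Score a CoT from the scores of the summaries sampled for it.

    top_k=3    -> the proposal: mean of the 3 best summary scores.
    top_k=None -> the variation above: mean of all summary scores.
    """
    ranked = sorted(summary_scores, reverse=True)
    if top_k is not None:
        ranked = ranked[:top_k]
    return mean(ranked)
```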
A possible argument for your proposal over this alternative is that we’re also training the summary to improve it, so the top 3 summaries are actually a better estimator of the kind of performance we care about than the total average.
This argument, if valid, similarly recommends your proposal over basic training, because basic training is also effectively looking at the average summary score per CoT. (It’s also looking at the top, but since it only samples one summary per CoT, the two are the same.)
Analyzing Summary Gradient
For the summary-gradient part, you take the highest-scoring summary from the best CoT (as defined previously), rather than the highest-scoring summary overall. This could help train the summary to be good for the type of CoT we’re steering towards, rather than training it to be good for high-variance CoTs which happen to produce one outlier great summary. As with the previous section, this helps steer towards policies which will be better after learning, rather than just what’s best currently.
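Concretely, the selection rule I’m describing would look something like this (again, names and data layout are my own placeholders):

```python
from typing import Dict, List, Tuple


def pick_summary_target(
    summaries_per_cot: Dict[str, List[Tuple[str, float]]],  # cot -> [(summary, score)]
    cot_scores: Dict[str, float],  # cot -> score from the CoT-scoring rule above
) -> Tuple[str, str]:
    """Return the (cot, summary) pair to fine-tune the face on.

    Basic training would take the single highest-scoring summary anywhere;
    this instead takes the best summary *of the best CoT*, so the face is
    trained against the kind of CoT the shoggoth is being steered toward.
    """
    best_cot = max(cot_scores, key=cot_scores.get)
    best_summary, _ = max(summaries_per_cot[best_cot], key=lambda s: s[1])
    return best_cot, best_summary
```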
Conclusion
I’m curious to hear whether you think I’m missing any big factors here. I don’t have a lot of practical experience with this type of machine learning, so my guesses about what’s important vs unimportant here are not very informed.
My uninformed guess is that the trade-off between denoising the gradient vs getting more samples favors getting more samples, so the denoising argument for your proposal isn’t a great one. The main argument in favor, then, is the way your proposal trains the face to respond well to how the shoggoth is expected to behave after training, and trains the shoggoth to respond well to how the face is expected to behave after training—whereas the basic proposal instead trains the face to perform well for the current shoggoth and vice versa.
I’m not sure how big of a deal this is. If the gradient steps are small and the training is iterated a lot of times, maybe it’s not a very big deal at all?
IE, for the CoT gradient, the mistake made by the basic proposal is that it will sometimes discard a good CoT (and gradient towards a worse one) based on random poor performance from face. However, maybe face’s performance will improve fast enough that this won’t matter too much in the end?
Similarly, for the summary gradient, the problem with the basic proposal is that it’ll sometimes update based on an anomalously good summary of a poor CoT, meaning it is learning to cope with bad CoT behavior which it won’t have to deal with very often post-training. But maybe it won’t matter too much, if the bad CoT behavior is eliminated during training anyway?
Overall, it seems plausible to me that these problems with the basic proposal would work against what Daniel wants, and your proposal would help push towards what Daniel wants. However, I’m very uncertain, especially about the size of the effect.