Good question, I shouldn’t have assumed it would be clear what I meant.
Say that you have, for example, n = 3 and k = 10. This gives you 30 total answers.
You have some choices about how to handle this, depending on your assumptions about best answer versus average answer, and on whether the model is mostly getting the answer right or mostly getting it wrong.
In this example, let’s say we take the top 3 of the 10 k answers for each of the n, and compare the average of those top 3. That gives you a ranking among the n. You can then do some form of contrastive learning which rewards the best n in contrast to the worst n.
To get a contrastive pair of the k answers, you simply choose the best and worst of the set of 10 k corresponding to the best n. Why choose both from the best n’s set instead of the global best and worst from the set of 30? Because k is dependent on n, there’s a limit to how good k can be if n is flawed. You want to train the k model to do well in the cases where the n model does well. It’s not the goal to have the k model do well when the n model does poorly, since that would put disproportionate responsibility and optimization pressure on k.
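To make the scheme concrete, here is a minimal runnable sketch. The functions sample_cot, sample_summary, and score are hypothetical stand-ins for the n model, the k model, and the scorer; none of these names come from the discussion above.

```python
import random

# Hypothetical stand-ins for the n model (CoT), the k model (answers),
# and the scorer; these dummies exist only so the sketch runs.
def sample_cot(prompt): return f"cot-{random.random():.3f}"
def sample_summary(prompt, cot): return f"summary-{random.random():.3f}"
def score(summary): return float(summary.split("-")[1])

def rank_and_select(prompt, n=3, k=10, top=3):
    """Rank n CoTs by the average of their top-`top` answer scores,
    then pick the contrastive pairs described above."""
    scored = []
    for _ in range(n):
        cot = sample_cot(prompt)
        summaries = [sample_summary(prompt, cot) for _ in range(k)]
        top_scores = sorted((score(s) for s in summaries), reverse=True)[:top]
        scored.append((sum(top_scores) / len(top_scores), cot, summaries))
    scored.sort(key=lambda t: t[0], reverse=True)

    # Contrastive pair of n: best CoT vs. worst CoT by top-3 average.
    best_cot, worst_cot = scored[0][1], scored[-1][1]
    # Contrastive pair of k: best and worst answer, both taken from the
    # best CoT's set of 10, per the argument above.
    best_set = sorted(scored[0][2], key=score, reverse=True)
    best_summary, worst_summary = best_set[0], best_set[-1]
    return (best_cot, worst_cot), (best_summary, worst_summary)
```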
Yep, thanks, this makes sense. I still don’t have an intuitive picture of what differences to expect in the trained result, though.
Clarifying Concrete Algorithms
In the most basic possible shoggoth+face training setup, you sample CoT+summary, you score the summary somehow, and then you do this lots of times and throw away a worse-scoring fraction of these samples. Then you fine-tune the shoggoth on the CoT parts of these, and fine-tune the face on the summary parts. I’ll call this ‘basic training’.
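A sketch of that loop, reusing the hypothetical sampling and scoring stand-ins from above, plus hypothetical finetune_shoggoth/finetune_face functions:

```python
# Sketch of 'basic training': sample CoT+summary pairs, keep the
# best-scoring fraction, fine-tune each model on its own part.
# finetune_shoggoth and finetune_face are hypothetical stand-ins.

def basic_training_step(prompt, num_samples=64, keep_fraction=0.25):
    samples = []
    for _ in range(num_samples):
        cot = sample_cot(prompt)
        summary = sample_summary(prompt, cot)
        samples.append((score(summary), cot, summary))
    samples.sort(key=lambda t: t[0], reverse=True)
    kept = samples[: int(num_samples * keep_fraction)]  # discard the rest

    finetune_shoggoth([cot for _, cot, _ in kept])
    finetune_face([summary for _, _, summary in kept])
```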
The most basic implementation of your idea is similar, except it discards the worst samples in the more complex way you’ve defined, instead of just by global score. I’ll call this ‘your proposal’. (Although it ignores the contrastive part of your proposal.)
(For both basic training and your proposal, I’m going to pretend there is only one question/prompt; really we have across-prompt issues similar to the across-summary issues you’re trying to solve, IE, by discarding the globally worst-scoring CoT+summary we discard answers to hard questions, meaning we would never train to do better on those questions. So really we need to at least take multiple samples for each question and discard samples based on performance relative to a question. Easier to focus on one question to avoid describing too many algorithmic details.)
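For concreteness, per-question discarding might look something like this (a sketch; the samples_by_prompt grouping is my assumption, not part of either proposal):

```python
# Sketch: discard samples relative to their own question rather than
# globally, so answers to hard questions are still kept and trained on.
# samples_by_prompt maps each prompt to its list of (score, cot, summary).

def select_per_question(samples_by_prompt, keep_fraction=0.25):
    kept = []
    for prompt, samples in samples_by_prompt.items():
        ranked = sorted(samples, key=lambda t: t[0], reverse=True)
        kept.extend(ranked[: max(1, int(len(ranked) * keep_fraction))])
    return kept
```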
Your proposal splits into the CoT part (which you call the n part), and the summary part (which you call the k part).
Analyzing CoT Gradient
For the CoT part, basic training keeps the CoTs which resulted in the best summaries. Your proposal instead averages the top 3 summaries for each CoT. This de-noises somewhat, at the cost of taking more summary samples per CoT. I’m not sure how to think about the trade-off there. More summary samples per CoT means fewer CoT samples overall, so we’re doing a better job scoring the CoTs, but we get fewer scores overall, so we learn less. Maybe the gradient step-size can be increased a little because we’re more confident in the gradient steps being taken, but we still explore less of the CoT space.
Another variation would be to average all the summary samples for that CoT to score a CoT sample. This de-noises more, for the same price in number of samples. This way, we’re simply scoring CoTs based on how well they make the summaries perform, with several samples to de-noise the gradient.
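The two scoring rules, side by side (a sketch):

```python
# Two ways to score a CoT from its k summary scores: the top-3 average
# from your proposal vs. averaging all k (the variation just described).

def score_cot_top(summary_scores, top=3):
    best = sorted(summary_scores, reverse=True)[:top]
    return sum(best) / len(best)

def score_cot_mean(summary_scores):
    return sum(summary_scores) / len(summary_scores)
```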
A possible argument for your proposal over this alternative is that we’re also training the summary model to improve, so the top 3 summaries are actually a better estimator of the kind of performance we care about than the total average.
This argument, if valid, similarly recommends your proposal over basic training, because basic training is also looking at the average summary score per CoT. (It’s also looking at the top score, but since it only samples one summary per CoT, these are the same.)
Analyzing Summary Gradient
For the summary-gradient part, you take the highest-scoring summary from the best CoT (as defined previously), rather than the highest-scoring summary overall. This could help train the summary to be good for the type of CoT we’re steering towards, rather than training it to be good for high-variance CoTs which produce one outlier great summary. As with the previous section, this helps steer towards policies which will be better after learning, rather than just what’s best currently.
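In code, the difference between the two selection rules is small (a sketch, reusing the best-first ranked (cot_score, cot, summaries) list from the earlier sketch):

```python
# `scored` is the best-first ranked list of (cot_score, cot, summaries)
# built in rank_and_select above.

def summary_target_proposal(scored):
    # Highest-scoring summary from the best CoT.
    return max(scored[0][2], key=score)

def summary_target_basic(scored):
    # Highest-scoring summary overall: may be an outlier from a poor CoT.
    return max((s for _, _, summaries in scored for s in summaries), key=score)
```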
Conclusion
I’m curious to hear whether you think I’m missing any big factors here. I don’t have a lot of practical experience with this type of machine learning, so my guesses about what’s important vs unimportant here are not very informed.
My uninformed guess is that the trade-off between denoising the gradient and getting more samples favors getting more samples, so the denoising argument for your proposal isn’t a great one. The main argument in favor, then, is the way your proposal trains the face to respond well to how the shoggoth is expected to behave after training, and trains the shoggoth to respond well to how the face is expected to behave after training, whereas the basic proposal instead trains the face to perform well for the current shoggoth and vice versa.
I’m not sure how big of a deal this is. If the gradient steps are small and the training is iterated a lot of times, maybe it’s not a very big deal at all?
IE, for the CoT gradient, the mistake made by the basic proposal is that it will sometimes discard a good CoT (and gradient towards a worse one) based on random poor performance from face. However, maybe face’s performance will improve fast enough that this won’t matter too much in the end?
Similarly, for the summary gradient, the problem with the basic proposal is that it’ll sometimes update based on an anomalously good summary of a poor CoT, meaning it is learning to cope with bad CoT behavior which it won’t have to deal with very often post-training. But maybe it won’t matter too much, if the bad CoT behavior is eliminated during training anyway?
Overall, it seems plausible to me that these problems with the basic proposal would work against what Daniel wants, and your proposal would help push towards what Daniel wants. However, I’m very uncertain, especially about the size of the effect.