English-language math proof, it is not clear how to detect correctness,
Well, the final answer is easy to evaluate. And like in rStar-Math, you can have a reward model that checks whether each step is likely to be critical to a correct answer, then assigns an implied value to the step.
summarizing a book
I think tasks outside math and code might be hard. But summarizing a book is actually easy. You just ask “how easy is it to reconstruct the book given the summary?” So it’s an unsupervised compression-decompression task.
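Concretely, I imagine the reward looking something like this (a rough sketch; `log_prob` is a hypothetical helper standing in for a frozen language model’s log-likelihood, not a real API):

```python
# Sketch: score a summary by how much it helps a fixed language model
# reconstruct the original book. `log_prob(text, context)` is a hypothetical
# helper returning the model's total log-likelihood of `text` given `context`.

def reconstruction_reward(book: str, summary: str, log_prob) -> float:
    # How predictable is the book with no help at all?
    baseline = log_prob(book, context="")
    # How predictable is the book when the summary is provided as context?
    conditioned = log_prob(book, context=summary)
    # A good summary makes the book easier to reconstruct; the gain in
    # log-likelihood is the unsupervised "decompression" reward.
    return conditioned - baseline
```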
Another interesting domain is “building a simulator”. This is an expensive thing to generate solutions for, but it is easy to verify that the simulator predicts the thing you are simulating. I can see this being an expensive but valuable domain for this paradigm. This would include fusion reactors and robotics (which OAI is once again hiring for!).
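The verification side could be as simple as scoring a candidate simulator against held-out real measurements (a toy sketch under my own assumptions; `candidate_sim` and the data format are made up for illustration):

```python
# Sketch: a generated simulator is expensive to produce, but cheap to score
# against real, held-out trajectories it has never seen.

def simulator_reward(candidate_sim, held_out_runs) -> float:
    # held_out_runs: list of (initial_state, observed_trajectory) pairs,
    # where each trajectory is a list of measured values.
    total_error = 0.0
    for initial_state, observed in held_out_runs:
        predicted = candidate_sim(initial_state, steps=len(observed))
        total_error += sum((p - o) ** 2 for p, o in zip(predicted, observed))
    # Lower prediction error on data the generator never saw -> higher reward.
    return -total_error
```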
When doing RL, it is usually very important to have non-gameable reward mechanisms
I don’t see them doing this explicitly yet, but setting up an independent, and even adversarial, reward model would help, or at least I expect it would.
Well, the final answer is easy to evaluate. And like in rStar-Math, you can have a reward model that checks whether each step is likely to be critical to a correct answer, then assigns an implied value to the step.
Why is the final answer easy to evaluate? Let’s say we generate the problem “number of distinct solutions to x^3+y^3+xyz=0 modulo 17^17” or something. How do you know what the right answer is?
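To make that concrete: brute force settles the question for a tiny modulus, but it scales as m^3, so it tells you nothing about 17^17 (a quick illustrative sketch, not a serious approach):

```python
# Brute-force count of solutions to x^3 + y^3 + x*y*z ≡ 0 (mod m).
# Fine for m = 17 (17**3 ≈ 5e3 triples), hopeless for m = 17**17
# (roughly 8e20, i.e. about 5e62 triples to check).

def count_solutions(m: int) -> int:
    count = 0
    for x in range(m):
        for y in range(m):
            for z in range(m):
                if (x**3 + y**3 + x * y * z) % m == 0:
                    count += 1
    return count

print(count_solutions(17))        # feasible
# count_solutions(17**17)         # not remotely feasible by brute force
```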
I agree that you can do this in a supervised way (a human puts in the right answer). Is that what you mean?
What about if the task is “prove that every integer can be written as the sum of at most 1000 different 11-th powers”? You can check such a proof in Lean, but how do you check it in English?
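For contrast, “checkable in Lean” means the statement itself can be written down formally and the kernel will verify any purported proof mechanically; one possible (unproven) way to state it, just as a sketch:

```lean
import Mathlib

-- Sketch of the statement only; `sorry` marks the missing proof that
-- Lean's kernel would otherwise check step by step.
theorem sum_of_at_most_1000_distinct_11th_powers (n : ℤ) :
    ∃ s : Finset ℤ, s.card ≤ 1000 ∧ s.sum (fun k => k ^ 11) = n := by
  sorry
```

An English proof of the same claim has no analogous mechanical checker, which is exactly the problem.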
And like in rStar-Math, you can have a reward model that checks whether each step is likely to be critical to a correct answer, then assigns an implied value to the step.
My question is where the external feedback comes from. “Likely to be critical to a correct answer” according to whom? A model? Because then you don’t get the recursive self-improvement past what that model knows. You need an external source of feedback somewhere in the training loop.
I agree that you can do this in a supervised way (a human puts in the right answer). Is that what you mean?
I’m not 100% sure, but you could have a look at Math-Shepherd for an example. I haven’t read the whole thing yet; I imagine it works back from a known solution.
“Likely to be critical to a correct answer” according to whom?
Check out the linked rStar-Math paper, it explains and demonstrates it better than I can (caveat they initially distil from a much larger model, which I see as a little bit of a cheat). tl;dr: yes, a model, and a tree of possible solutions. Given a tree with values on the leaves, they can look at which nodes seem to have causal power.

A separate approach is to teach a model to supervise using human process-supervision data, then ask it to be the judge. This paper also cheats a little by distilling, but I think the method makes sense.
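For concreteness, here’s a toy version of the tree idea above as I understand it (my own illustration, not the paper’s actual algorithm): leaves get a 1/0 from the final-answer check, and each intermediate step inherits the fraction of rollouts through it that end up correct; those values can then serve as training targets for the process reward model.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    step: str                              # the reasoning step at this node
    children: list["Node"] = field(default_factory=list)
    correct: bool | None = None            # set on leaves by the final-answer check

def leaf_outcomes(node: Node) -> list[bool]:
    # Collect the 1/0 outcomes of every rollout that passes through this node.
    if not node.children:
        return [bool(node.correct)]
    return [o for child in node.children for o in leaf_outcomes(child)]

def implied_value(node: Node) -> float:
    # Fraction of rollouts through this step that reach a correct final answer.
    outcomes = leaf_outcomes(node)
    return sum(outcomes) / len(outcomes)

# Tiny example tree: two rollouts continue from "step 2a", one from "step 2b".
tree = Node("step 1", children=[
    Node("step 2a", children=[Node("answer 42", correct=True),
                              Node("answer 41", correct=False)]),
    Node("step 2b", children=[Node("answer 17", correct=False)]),
])
print(implied_value(tree.children[0]))     # 0.5 -> "step 2a" half-leads to correct answers
print(implied_value(tree))                 # ~0.33 for the root step
```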
(caveat they initially distil from a much larger model, which I see as a little bit of a cheat)
Another little bit of a cheat is that they only train Qwen2.5-Math-7B according to the procedure described. In contrast, for the other three models (smaller than Qwen2.5-Math-7B), they instead use the fine-tuned Qwen2.5-Math-7B to generate the training data to bootstrap round 4. (Basically, they distill from DeepSeek in round 1 and then they distill from fine-tuned Qwen in round 4.)
They justify:
Due to limited GPU resources, we performed 4 rounds of self-evolution exclusively on Qwen2.5-Math-7B, yielding 4 evolved policy SLMs (Table 3) and 4 PPMs (Table 4). For the other 3 policy LLMs, we fine-tune them using step-by-step verified trajectories generated from Qwen2.5-Math-7B’s 4th round. The final PPM from this round is then used as the reward model for the 3 policy SLMs.
TBH I’m not sure how this helps them save on GPU resources. Is it somehow cheaper to generate a lot of big/long rollouts with Qwen2.5-Math-7B-r4 than to do so three times with [smaller model]-r3?
It doesn’t make sense to me either, but it does seem to invalidate the “bootstrapping” results for the other 3 models. Maybe it’s because they could batch all reward model requests into one instance.
When MS doesn’t have enough compute to do their evals, the rest of us may struggle!