Great experiments!

Questions:

To clarify: when distilling, did you also train the model to predict the final answer, or only the CoT? I assume the latter, but it wasn’t 100% clear from the text.
What’s the variance on the pass@1 performance (e.g. via bootstrap sampling)? You say the performance of “chunked paraphrasing” is worse, but the degradation looks fairly minor, and without knowing the variance it’s hard to tell whether this is a meaningful difference.
“We find that the pass@1 results are worse, especially for small models.” Do you mean large models?
Notes:
I’m somewhat confused why you claim that paraphrasing the whole CoT (i.e. non-chunked) breaks the causal link between syntax(t=0) and semantics(t=1) if you then train on the CoT. I’ll try to paraphrase (ha!) what’s going on in my own words:
Let x be the original CoT and f(x) the paraphrase, and let y be the correct final answer. Then f(x) might still contain the same structure as x, i.e. the mutual information between x and f(x) is high. If you now trained a model to predict $y | f(x)$, I would guess it will learn to pick up on any steganographic structure that is present and predictive of y.
If you trained a model to only predict the CoT, i.e. f(x, t=1) | f(x, t=0), I would still guess the model will learn to pick up on the structure if it is present and helpful. For example, if previously the model always put an equation after outputting an enumerated list, and now it does so after a bullet-point list because f converted all enumerations into bullet points.
So this seems to hinge on f choosing incoherent syntax (e.g. what you observe in the chunk-paraphrase setting), but it’s not clear to me that this is the behavior we should expect from the non-chunked paraphraser.
Both (the final answers are usually very short compared to the CoT)
The 1-sigma uncertainty on the mean is sigma = sqrt(acc * (1 - acc) / n) ≈ 1%. But I think the results are slightly more significant than this suggests: I evaluate all models on the exact same set of questions, and given that chunk-paraphrasing is worse across experiments on multiple models, I would guess that the same effect would also show up on a much larger eval.
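For concreteness, the closed-form standard error and a bootstrap estimate agree on simulated data. The accuracy and eval size below are placeholders for illustration, not the actual numbers from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder numbers: ~50% accuracy on n = 2000 questions
# (illustrative only, not the eval size used in the post).
acc, n = 0.5, 2000

# Closed-form 1-sigma uncertainty on the mean of Bernoulli outcomes.
sigma = np.sqrt(acc * (1 - acc) / n)
print(f"analytic sigma ~ {sigma:.4f}")  # ~0.0112, i.e. ~1%

# Bootstrap estimate on simulated per-question correctness.
outcomes = rng.random(n) < acc
boot_means = [rng.choice(outcomes, size=n, replace=True).mean()
              for _ in range(2000)]
print(f"bootstrap sigma ~ {np.std(boot_means):.4f}")
```

The two estimates match closely here because per-question outcomes are independent Bernoulli draws; the bootstrap mainly buys you something when questions are correlated or the metric is more complex than a mean.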
Good catch, I think I wrote that based on earlier results. I’ll edit the post.
On your notes:
If paraphrasing was a bijection between semantically-equivalent sentences, then I agree you could learn the structure in the bijection space. Is that what you were pointing at? My hope is that paraphrasing removes the mutual information through standardization / randomization (where what is random is not the map but the actual samples, because I am sampling at temperature 1). For example, if enumerated list ⇒ equation and bulleted list ⇒ something else, then swapping enumerated and bulleted lists would not help, but having the model always use a bulleted list, or use a bulleted list a random 50% of the time, does help to break the enumerated list ⇔ equation link.
I agree that it’s unclear where paraphrasing is on the [bijection]-[no syntax left unrandomized] spectrum.
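The bijection-vs-randomization distinction can be checked on a toy model. This is a sketch with an assumed single binary syntax feature (list style) that perfectly encodes one bit predictive of y, not the actual paraphraser:

```python
import numpy as np
from collections import Counter

def mutual_information(pairs):
    # Empirical (plug-in) mutual information in bits between the
    # two coordinates of (syntax, y) samples.
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    return sum(c / n * np.log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
               for (x, y), c in p_xy.items())

rng = np.random.default_rng(0)

# Toy encoded reasoning: the list style leaks one bit that predicts y.
syntax = rng.choice(["enum", "bullet"], size=5000)
y = np.where(syntax == "enum", "equation", "other")

# Bijective "paraphrase": deterministically swap the two styles.
swapped = np.where(syntax == "enum", "bullet", "enum")

# Randomized standardization: pick a style independently of the input.
randomized = rng.choice(["enum", "bullet"], size=5000)

print(mutual_information(list(zip(syntax, y))))      # ~1 bit
print(mutual_information(list(zip(swapped, y))))     # ~1 bit: link survives
print(mutual_information(list(zip(randomized, y))))  # ~0: link broken
```

The deterministic swap leaves the mutual information untouched (a downstream model can re-learn the relabeled code), while sampling the style independently of the input drives it to zero, which is the standardization/randomization hope described above.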
If paraphrasing was a bijection between semantically-equivalent sentences, then I agree you could learn the structure in the bijection space. Is that what you were pointing at?
Yes exactly. I further claim that non-chunk paraphrasing seems more likely to be closer to the bijection end of the spectrum than chunk paraphrasing, though I agree that it could still be a many-to-one mapping (i.e. standardization).
I am especially concerned about this when training on the final answer.
Maybe my question just boils down to: how do you distinguish between “paraphrasing didn’t break the syntactic <> semantic link” and “the model isn’t relying on that link / the link doesn’t exist in the first place”?
I agree this is a concern. My view here is that if you look at the paraphrases, a decent amount of standardization seems to have happened. Empirically, it also looks like T=1 sampling of the paraphraser introduces random variation in the syntax. So I would be very surprised if a decent chunk of the syntactic link (e.g. at least 20% of it) was not broken. Therefore I think it’s reasonable to bound the performance due to encoded reasoning by ~5x the gap between the original distillation and the paraphrased distillation, which would be ~7% (in absolute pass@1). That still seems not very frightening to me given that the no-CoT vs CoT gap is >50%.
This is sensitive to the size of the gap. If future work finds a ~5% gap on a new model, I’d be very concerned because it would make it more likely that the majority of the reasoning is encoded.
(Edit: this is a rough sketch of the argument; there are complications around diminishing returns on reasoning: maybe only a very small amount of visible reasoning is responsible for the majority of the no-CoT vs CoT gap. But it’s unclear how much this undermines my argument, because what I care about most is the model’s inability to leak anything about what it is reasoning about.)
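The bound above is just arithmetic; a sketch with illustrative numbers (the ~1.4% gap is a back-of-the-envelope value chosen so that 5x gives the ~7% figure in the discussion):

```python
# Assumption: paraphrasing destroys at least a fraction f of whatever
# encoded-reasoning channel exists, so the observed accuracy gap g between
# original and paraphrased distillation satisfies g >= f * encoded_total.
f = 0.20    # conservative fraction of the syntactic link that was broken
g = 0.014   # illustrative ~1.4% absolute pass@1 gap

encoded_total_bound = g / f          # upper bound on encoded reasoning
no_cot_vs_cot_gap = 0.50             # the no-CoT vs CoT gap is >50%

print(f"encoded reasoning <= {encoded_total_bound:.0%}")   # <= 7%
print(encoded_total_bound < no_cot_vs_cot_gap)             # True
```

This also shows the sensitivity mentioned above: at g = 0.05 the same arithmetic gives a 25% bound, which would account for a much larger share of the no-CoT vs CoT gap.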