But if given the choice between “nice-sounding but false” and “bad-sounding but true,” it seems possible that users’ companies would, in principle, prefer true reasoning to false reasoning, perhaps especially because true reasoning makes it easier to spot issues when working with LLMs. E.g. maybe users like seeing DeepSeek R1’s thinking because it helps them spot when DeepSeek misunderstands instructions.
This definitely aligns with my own experience so far.
On the day Claude 3.7 Sonnet was announced, I happened to be in the middle of a frustrating struggle with o3-mini at work: it could almost do what I needed it to do, yet it frequently failed at one seemingly easy aspect of the task, and I could find no way to fix the problem.
So I tried Claude 3.7 Sonnet, and quickly figured out what the issue was: o3-mini wasn’t giving itself enough room to execute the right algorithm for the part it was failing at, even with OpenAI’s “reasoning_effort” param set to “high.”[1]
Claude 3.7 Sonnet could do this part of the task if, and only if, I gave it enough room. This was immediately obvious from reading CoTs and playing around with maximum CoT lengths. After I determined how many Claude-tokens were necessary, I later checked that number against the number of reasoning tokens reported for o3-mini by the OpenAI API, and inferred that o3-mini must not have been writing enough text, even though I still couldn’t see whatever text it did write.
In this particular case, granular control over CoT length would have sufficed even without visible CoT. If OpenAI had provided a max token length param, I could have tuned this param by trial and error like I did with Claude.
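Had such a parameter existed, the trial-and-error tuning described above could even be automated, e.g. with a binary search over the thinking-token budget, assuming success is monotone in the budget. A minimal sketch (the `run_task` callback and the token numbers are hypothetical stand-ins for real API calls, not any provider’s actual interface):

```python
def minimal_sufficient_budget(run_task, lo=256, hi=32768):
    """Binary-search the smallest thinking-token budget at which the
    task succeeds, assuming success is monotone in the budget.

    run_task(budget) -> bool is a caller-supplied function that runs
    the task once with the given max-CoT-length and reports success.
    """
    if not run_task(hi):
        raise RuntimeError("task fails even at the maximum budget")
    while lo < hi:
        mid = (lo + hi) // 2
        if run_task(mid):
            hi = mid  # mid tokens suffice; try smaller
        else:
            lo = mid + 1  # mid tokens are not enough
    return lo

# Stand-in for a real API call: pretend the model needs ~6000 thinking
# tokens to execute the right algorithm.
fake_run_task = lambda budget: budget >= 6000

print(minimal_sufficient_budget(fake_run_task))  # → 6000
```

Of course, as noted above, this only helps once you have already guessed that length is the problem.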
Even then, though, I would have had to guess that length was the issue in the first place.
And in the general case, if I can’t see the CoT, then I’m shooting in the dark. Iterating on a prompt (or anything else) goes a lot quicker when you can actually see the full consequences of your changes!
In short: from an end user’s perspective, CoT visibility is a capabilities improvement.
I ended up just switching to 3.7 Sonnet for the task discussed above – not because it was “smarter” as a model in any way I knew about, but simply because the associated API made it so much easier to construct prompts that would effectively leverage its intelligence for my purposes.
This strikes me as a very encouraging sign for the CoT-monitoring alignment story.
Even if you have to pay an “alignment tax” on benchmarks to keep the CoT legible rather than accepting “neuralese,” that does not mean you will come out behind when people try to use your model to get things done in real life. (The “real alignment tax” is likely more like an alignment surplus, favoring legible CoT rather than penalizing it.)
One might argue that eventually, when the model is strongly superhuman, this surplus will go away because the human user will no longer have valuable insights about the CoT: the model will simply “figure out the most effective kinds of thoughts to have” on its own, in every case.
But there is path dependency here: if the most capable models (in a practical sense) are legible CoT models while we are still approaching this superhuman limit (and not there yet), then the first model for which legible CoT is no longer necessary will likely still have legible CoT (because this will be the “standard best practice” and there will be no reason to deviate from it until after we’ve crossed this particular threshold, and it won’t be obvious we’ve crossed it except in hindsight). So we would get a shot at alignment-via-CoT-monitoring on a “strongly superhuman” model at least once, before there were any other “strongly superhuman” models in existence with designs less amenable to this approach.
If I had been using a “non-reasoning” model, I would have forced it to do things the “right way” by imposing a structure on the output. E.g. I might ask it for a JSON object with a property that’s an array having one element per loop iteration, where the attributes of the array elements express precisely what needs to be “thought about” in each iteration.
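A minimal sketch of this structured-output technique follows. The prompt wording, the field names (`iterations`, `answer`, etc.), and the `parse_and_check` helper are all hypothetical illustrations, not any particular API:

```python
import json

def build_prompt(items):
    """Ask a (hypothetical) non-reasoning model to 'show its work' by
    emitting exactly one JSON array element per loop iteration."""
    return (
        "Process the items below one at a time. Respond with ONLY a JSON "
        "object of the form:\n"
        '{"iterations": [{"item": ..., "state": ..., "decision": ...}, ...],\n'
        ' "answer": ...}\n'
        'with exactly one element of "iterations" per input item, in order.\n'
        f"Items: {json.dumps(items)}"
    )

def parse_and_check(raw, items):
    """Validate that the model really produced one step per item,
    i.e. that the imposed structure forced the intended 'reasoning'."""
    obj = json.loads(raw)
    if len(obj["iterations"]) != len(items):
        raise ValueError("model skipped or merged loop iterations")
    return obj["answer"]

# Usage with a fake model reply (no API call made here):
fake_reply = json.dumps({
    "iterations": [{"item": 1, "state": 1, "decision": "add"},
                   {"item": 2, "state": 3, "decision": "add"}],
    "answer": 3,
})
print(parse_and_check(fake_reply, [1, 2]))  # → 3
```

The point of the per-iteration schema is that the model cannot skip ahead to the answer without serializing a step for each element, which is exactly the control that is lost with reasoning models.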
Such techniques can be very powerful with “non-reasoning” models, but they don’t work well with reasoning models, because they get interpreted as constraining the “output” rather than the “reasoning”; by the time the model reaches the section whose structure has been helpfully constrained by the user, it’s already done a bunch of mostly uncontrollable “reasoning,” which may well have sent it down a bad path (and which, even in the best case, will waste tokens on correct serialized reasoning whose conceptual content will be repeated all over again in the verbose structured output).
This is one way that reasoning models feel like a partial step backwards to me. The implicit premise is that the model can just figure out on its own how to structure its CoT, and if it were much smarter than me perhaps that would be true – but of course in practice the model does “the wrong sort of CoT” by default fairly often, and with reasoning models I just have to accept the default behavior and “take the hit” when it’s wrong.
This frustrating UX seems like an obvious consequence of DeepSeek-style RL on outcomes. It’s not obvious to me what kind of training recipe would be needed to fix it, but I have to imagine this will get less awkward in the near future (unless labs are so tunnel-visioned by reasoning-friendly benchmarks right now that they don’t prioritize glaring real-use problems like this one).
Indeed, these are some reasons for optimism. I really do think that if we act now, we can create and cement an industry-standard best practice of keeping CoTs pure (and also showing them to the user, modulo a few legitimate exceptions, unlike what OpenAI currently does), that this could persist for months or even years, possibly up to around the time of AGI, and that this would be pretty awesome for humanity if it happened.
Do you have a sense of what I, as a researcher, could do?
I sense that having users/companies want faithful CoT is very important. In-tune users, as nostalgebraist points out, will know how to use CoTs to debug LLMs. But I’m not sure whether such users represent only 1% of the user base, in which case big labs just won’t care. Maybe we need to try and educate more users about this. Maybe reach out to people who tweet about LLM best use cases to highlight this?
Since ’23, my answer to that question would have been “well, the first step is for researchers like you to produce [basically exactly the paper OpenAI just produced].”
So that’s done. Nice. There are lots of follow-up experiments that can be done.
I don’t think trying to shift the market/consumers as a whole is very tractable.
But talking to your friends at the companies, getting their buy-in, seems valuable.