Edit: Let me know if you or someone you know is interested in working on this sort of research. I work at Open Phil and we’re interested in getting a better sense of how much demand for funding in this area there is.
Good post. I agree with the headline that faithful CoT is a promising path forward for interpreting and supervising LLMs.
TL;DR: I’d suggest researchers focus more narrowly on a subset of unfaithful chain-of-thought: steganographic/encoded reasoning that the model is using to perform hidden serial reasoning, as studied in Roger et al. IMO this type of unfaithfulness is more important, more neglected, and more tractable to fix than the sort of post-hoc reasoning studied in Turpin et al., or the hidden parallel reasoning studied in Pfau et al.
I think it’s worth distinguishing between three kinds of unfaithful chain of thought that can occur when you ask a model a question and give it time to (I’m slightly adapting this trichotomy from Lanham et al.):
Epiphenomenal (typically post-hoc) reasoning: This is when the process of writing a CoT explanation is not actually helping the model produce its answer. This can be okay, if (as OP mentions), the model is just accurately explaining the reasoning process that it followed to produce its answer. But it can also lead to explanations that are inaccurate, as Turpin et al. demonstrate; these explanations leave out important reasons for a decision and/or make false predictions about how the model would behave on other inputs. For more on whether models’ explanations make accurate predictions about their counterfactual behaviors, see here.
Hidden parallelized reasoning: This is when a model does a bunch of reasoning while generating a chain of thought, but it doesn’t encode any of its intermediate calculations/conclusions in the tokens it produces. It just uses the fact that it has lots of tokens to think with to do a large number of parallelized calculations. See Pfau et al. for a demonstration of what this looks like; you can make models that get a question wrong if forced to answer immediately, but get it right if you ask them “[Question]?...........................Give your answer now:”.
Something like hidden parallelized reasoning may emerge naturally from pre-training, as studied in Wu et al. They show that larger models do more “pre-caching”, i.e. computing a feature that is useless for predicting the literal next token, but will be useful for predicting some future next token down the line.
Steganography/encoded serial reasoning: This is when a model uses the words it generates in its chain of thought to accomplish as task that it could not do in a single forward pass, but it encodes meaning in its word choice that is opaque to a typical overseer. This is demonstrated in Roger et al., which takes a model that is too dumb to learn a QA task without CoT but can learn the task with CoT, and then teaches it to perform the task using steganographic CoTs.
I think steganography is the most concerning form of unfaithful chain-of-thought of these three, but it’s also the easiest to study and mitigate. That is to say, I think it’s more important, more neglected, and more tractable than the other two.
Importance/scariness of steg vs. other forms of unfaithfulness: I think steganographic reasoning in chains of thought is both reasonably likely to emerge naturally and potentially quite dangerous, especially compared to epiphenomenal and hidden parallelized reasoning.
Why likely? It seems useful for multi-step reasoning to arbitrarily assign information-laden connotations to pairs of words that are synonymous to humans, especially under a financial pressure to use reduce the number of tokens used. Also, OpenAI performed no public tests for steganography on o1 and this OpenAI RS says he’d be totally happy with steganography, so I get the sense that OpenAI at least isn’t prioritizing avoiding steg highly.
Why dangerous? Steganographic CoTs allow models to perform long chains of serial reasoning outside human oversight, and to send messages to other AIs (esp. copies of themselves).
Carefully analyze their inputs to distinguish real chances to defect from synthetic honeypots, or distinguish insecure containment environments (that could be escaped from) from secure ones.
Subtly collude with other copies of themselves in order to coordinate on e.g. sabotaging AI developers’ cybersecurity, alignment research, or AI training code.
Meanwhile, epiphenomenal and hidden parallelized reasoning don’t allow for very long and complex trains of thought, or for hidden communication with other AIs.
Neglectedness: Almost all work on CoT faithfulness (e.g. this) focuses entirely on unfaithful post-hoc reasoning and ignores steganography entirely. I conjecture that this is because long, RL-trained CoTs are new with o1, and there hasn’t been clear evidence of steganographic reasoning occurring organically yet. That said, I’d guess o1 is already steganographic, on this basis of this sort of thing.
Tractability:Roger et al. find that “paraphrasing is an effective defense against all steganography schemes we’ve tried” and Radhakrishnan et al. find that “Question Decomposition Improves the Faithfulness of Model-Generated Reasoning”. I am pretty skeptical that these basic defenses can resist optimization pressures against them, and I’d bet that further research will overturn Roger et al.’s blue-team victory. But I’d like to see the SOTA on defenses pushed further; I think that defenses can probably reduce steganography somewhat and make it incrementally harder for future AIs to defect.
I think there are a lot of exciting follow-up directions for future research here; finding cases where steg emerges organically, testing out different paraphrasing approaches for reducing steg, finding ways of decomposing tasks into subproblems to reduce how much context/background info any given copy of a model has (and then preventing the copies from steganographically communicating).
(I’d be interested in hearing about any good work in this area that I haven’t mentioned in this comment, if anyone is aware of some)
Edit: Let me know if you or someone you know is interested in working on this sort of research. I work at Open Phil and we’re interested in getting a better sense of how much demand for funding in this area there is.
Good post. I agree with the headline that faithful CoT is a promising path forward for interpreting and supervising LLMs.
TL;DR: I’d suggest researchers focus more narrowly on a subset of unfaithful chain-of-thought: steganographic/encoded reasoning that the model is using to perform hidden serial reasoning, as studied in Roger et al. IMO this type of unfaithfulness is more important, more neglected, and more tractable to fix than the sort of post-hoc reasoning studied in Turpin et al., or the hidden parallel reasoning studied in Pfau et al.
I think it’s worth distinguishing between three kinds of unfaithful chain of thought that can occur when you ask a model a question and give it time to (I’m slightly adapting this trichotomy from Lanham et al.):
Epiphenomenal (typically post-hoc) reasoning: This is when the process of writing a CoT explanation is not actually helping the model produce its answer. This can be okay, if (as OP mentions), the model is just accurately explaining the reasoning process that it followed to produce its answer. But it can also lead to explanations that are inaccurate, as Turpin et al. demonstrate; these explanations leave out important reasons for a decision and/or make false predictions about how the model would behave on other inputs. For more on whether models’ explanations make accurate predictions about their counterfactual behaviors, see here.
Hidden parallelized reasoning: This is when a model does a bunch of reasoning while generating a chain of thought, but it doesn’t encode any of its intermediate calculations/conclusions in the tokens it produces. It just uses the fact that it has lots of tokens to think with to do a large number of parallelized calculations. See Pfau et al. for a demonstration of what this looks like; you can make models that get a question wrong if forced to answer immediately, but get it right if you ask them “[Question]?...........................Give your answer now:”.
Something like hidden parallelized reasoning may emerge naturally from pre-training, as studied in Wu et al. They show that larger models do more “pre-caching”, i.e. computing a feature that is useless for predicting the literal next token, but will be useful for predicting some future next token down the line.
Steganography/encoded serial reasoning: This is when a model uses the words it generates in its chain of thought to accomplish as task that it could not do in a single forward pass, but it encodes meaning in its word choice that is opaque to a typical overseer. This is demonstrated in Roger et al., which takes a model that is too dumb to learn a QA task without CoT but can learn the task with CoT, and then teaches it to perform the task using steganographic CoTs.
I think steganography is the most concerning form of unfaithful chain-of-thought of these three, but it’s also the easiest to study and mitigate. That is to say, I think it’s more important, more neglected, and more tractable than the other two.
Importance/scariness of steg vs. other forms of unfaithfulness: I think steganographic reasoning in chains of thought is both reasonably likely to emerge naturally and potentially quite dangerous, especially compared to epiphenomenal and hidden parallelized reasoning.
Why likely? It seems useful for multi-step reasoning to arbitrarily assign information-laden connotations to pairs of words that are synonymous to humans, especially under a financial pressure to use reduce the number of tokens used. Also, OpenAI performed no public tests for steganography on o1 and this OpenAI RS says he’d be totally happy with steganography, so I get the sense that OpenAI at least isn’t prioritizing avoiding steg highly.
Why dangerous? Steganographic CoTs allow models to perform long chains of serial reasoning outside human oversight, and to send messages to other AIs (esp. copies of themselves).
This could allow models to:
Generate complex plans for escaping safety measures and going rogue.
Carefully analyze their inputs to distinguish real chances to defect from synthetic honeypots, or distinguish insecure containment environments (that could be escaped from) from secure ones.
Subtly collude with other copies of themselves in order to coordinate on e.g. sabotaging AI developers’ cybersecurity, alignment research, or AI training code.
Meanwhile, epiphenomenal and hidden parallelized reasoning don’t allow for very long and complex trains of thought, or for hidden communication with other AIs.
Neglectedness: Almost all work on CoT faithfulness (e.g. this) focuses entirely on unfaithful post-hoc reasoning and ignores steganography entirely. I conjecture that this is because long, RL-trained CoTs are new with o1, and there hasn’t been clear evidence of steganographic reasoning occurring organically yet. That said, I’d guess o1 is already steganographic, on this basis of this sort of thing.
Tractability: Roger et al. find that “paraphrasing is an effective defense against all steganography schemes we’ve tried” and Radhakrishnan et al. find that “Question Decomposition Improves the Faithfulness of Model-Generated Reasoning”. I am pretty skeptical that these basic defenses can resist optimization pressures against them, and I’d bet that further research will overturn Roger et al.’s blue-team victory. But I’d like to see the SOTA on defenses pushed further; I think that defenses can probably reduce steganography somewhat and make it incrementally harder for future AIs to defect.
I think there are a lot of exciting follow-up directions for future research here; finding cases where steg emerges organically, testing out different paraphrasing approaches for reducing steg, finding ways of decomposing tasks into subproblems to reduce how much context/background info any given copy of a model has (and then preventing the copies from steganographically communicating).
(I’d be interested in hearing about any good work in this area that I haven’t mentioned in this comment, if anyone is aware of some)