I believe o1-type models that are trained to effectively reason out loud may actually be better for AI safety than the alternative. However, this is conditional on their CoTs being faithful, even after optimization by RL. I believe that scenario is entirely possible, though not necessarily something that will happen by default. See "the case for CoT unfaithfulness is overstated", my response with specific ideas for ensuring faithfulness, and Daniel Kokotajlo’s doc along similar lines.
There seems to be quite a lot of low-hanging fruit here! I’m optimistic that highly-faithful CoTs can demonstrate enough momentum and short-term benefits to win out over uninterpretable neuralese, but it could easily go the other way. I think way more people should be working on methods to make CoT faithfulness robust to optimization (and how to minimize the safety tax of such methods).
It depends on what the alternative is. Here’s a hierarchy:
1. Pretrained models with a bit of RLHF, and lots of scaffolding / bureaucracy / language model programming.
2. Pretrained models trained to do long chains of thought, but with some techniques in place (e.g. paraphraser, shoggoth-face; see some of my docs, and the sketch after this list) that try to preserve faithfulness.
3. Pretrained models trained to do long chains of thought, but without those techniques, such that the CoT evolves into some sort of special hyperoptimized lingo.
4. As above except with recurrent models / neuralese, i.e. no token bottleneck.
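To make the paraphraser idea concrete, here is a minimal sketch of what such a defense might look like; `reasoning_step` and `paraphrase` are hypothetical stand-ins for whatever models are actually used, and this illustrates the structure rather than anyone's actual implementation: the policy only ever conditions on paraphrased versions of its own CoT, so subtle token-level encodings shouldn't survive from one step to the next.

```python
# Illustrative sketch only (hypothetical helpers): a paraphraser defense for CoT.
from typing import Callable, List

def run_cot_with_paraphraser(
    prompt: str,
    reasoning_step: Callable[[str, List[str]], str],  # returns the next CoT step
    paraphrase: Callable[[str], str],                 # rewrites a step in fresh wording
    max_steps: int = 16,
) -> List[str]:
    visible_cot: List[str] = []
    for _ in range(max_steps):
        step = reasoning_step(prompt, visible_cot)
        if step.strip() == "<DONE>":
            break
        # Only the paraphrased step is kept, so any steganographic signal carried by
        # the exact wording is (hopefully) destroyed before the next step sees it.
        visible_cot.append(paraphrase(step))
    return visible_cot
```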
Alas, my prediction is that while 1>2>3>4 from a safety perspective, 1<2<3<4 from a capabilities perspective. I predict that the corporations will gleefully progress down this chain, rationalizing why things are fine and safe even as things get substantially less safe due to the changes.
Anyhow I totally agree on the urgency, tractability, and importance of faithful CoT research. I think that if we can do enough of that research fast enough, we’ll be able to ‘hold the line’ at stage 2 for some time, possibly long enough to reach AGI.
The good news I’ll share is that some of the most important insights from the safety/alignment work done on LLMs do transfer over pretty well to a lot of plausible AGI architectures. So while there’s a little safety loss each time you go from 1 to 4, a lot of the theoretical ways to achieve alignment of these new systems remain intact. The danger is that the implementation difficulty pushes the safety tax too high, which is a pretty real concern.
Specifically, the insights I’m talking about are the controllability of AI with data, combined with the fact that the RL feedback these systems get is way denser than the feedback humans got from evolution, which significantly changes how instrumental convergence plays out.
I think that IF we could solve alignment for level 2 systems, many of the lessons (maybe most? maybe, if we’re lucky, all?) would transfer to level 4 systems. However, time is running out to do that before the industry moves on beyond level 2.
Let me explain.
Faithful CoT is not a way to make your AI agent share your values, or truly strive towards the goals and principles you dictate in the Spec. Instead, it’s a way to read your AI’s mind, so that you can monitor its thoughts and notice when your alignment techniques failed and its values/goals/principles/etc. ended up different from what you hoped.
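As a purely illustrative sketch of what that monitoring might look like, assuming a hypothetical `monitor_judgement` call to a separate trusted model: nothing here fixes the agent's values, it just surfaces evidence that training missed the target.

```python
# Illustrative sketch: faithful CoT as a monitoring channel, not an alignment method.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Alert:
    step_index: int
    excerpt: str
    reason: str

def monitor_cot(
    cot_steps: List[str],
    spec_summary: str,
    monitor_judgement: Callable[[str, str], str],  # "" if fine, else a short reason
) -> List[Alert]:
    alerts: List[Alert] = []
    for i, step in enumerate(cot_steps):
        reason = monitor_judgement(step, spec_summary)
        if reason:
            alerts.append(Alert(step_index=i, excerpt=step[:200], reason=reason))
    return alerts  # escalate to humans / pause the agent if non-empty
```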
So faithful CoT is not by itself a solution, but it’s a property which allows us to iterate towards a solution.
Which is great! But we better start iterating fast because time is running out. When the corporations move on to e.g. level 3 and 4 systems, they will be loath to spend much compute tinkering with legacy level 2 systems, much less train new advanced level 2 systems.
I mostly just agree with your comment here, and IMO the parts I don’t quite get aren’t worth disagreeing about too much, since they’re more a reframing than an actual disagreement.
Do you have thoughts on how much it helps that autointerp now seems to be roughly human-level on some metrics and can be applied cheaply (e.g. https://transluce.org/neuron-descriptions)? Perhaps this gives us another ‘defensive line’ even past stage 2 (in that example, at the granularity of autointerp applied to the activations of all the MLP neurons inside an LLM).
Later edit: Depending on research progress (especially w.r.t. cost effectiveness), other levels of ‘granularity’ might also become available (fully automated) soon for monitoring, e.g. sparse (SAE) feature circuits for various dangerous/undesirable capabilities, as demonstrated in Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models.
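A minimal sketch of how that kind of ‘defensive line’ might be wired up, with the autointerp descriptions, the watchlist of undesirable concepts, and the threshold all hypothetical placeholders (this is not any particular tool’s API):

```python
# Illustrative sketch: flag strongly-active units (neurons or SAE features) whose
# automated descriptions match a watchlist of undesirable concepts.
from typing import Dict, List, Tuple

def flag_suspicious_units(
    activations: Dict[int, float],   # unit id -> max activation on this transcript
    descriptions: Dict[int, str],    # unit id -> autointerp description
    watchlist: List[str],            # e.g. ["deception", "sandbagging"]
    threshold: float = 5.0,
) -> List[Tuple[int, str]]:
    flagged: List[Tuple[int, str]] = []
    for unit, activation in activations.items():
        description = descriptions.get(unit, "").lower()
        if activation > threshold and any(term in description for term in watchlist):
            flagged.append((unit, description))
    return flagged  # hand these to a human or a stronger automated auditor
```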
I don’t have much to say about the human-level-on-some-metrics thing. evhub’s mechinterp tech tree was good for laying out the big picture, if I recall correctly. And yeah I think interpretability is another sort of ‘saving throw’ or ‘line of defense’ that will hopefully be ready in time.
There’s a regularization problem to solve for 3.9 and 4 (where 3.9 = “unintelligible CoT”), and it’s not obvious to me that glee will be enough to solve it.
I’m not sure how o1 works in detail, but for example, backtracking (which o1 seems to use) makes heavy use of the pretrained distribution to decide on best next moves. So, at the very least, it’s not easy to do away with the native understanding of language. While it’s true that there is some amount of data that will enable large divergences from the pretrained distribution—and I could imagine mathematical proof generation eventually reaching this point, for example—more ambitious goals inherently come with less data, and it’s not obvious to me that there will be enough data in alignment-critical applications to cause such a large divergence.
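To illustrate the picture being described (purely hypothetical; o1’s actual mechanism is not public), a backtracking search of this kind might score candidate next steps under the pretrained distribution and back up when nothing looks plausible, which is why the native understanding of language is hard to discard:

```python
# Illustrative sketch only: backtracking guided by the pretrained distribution.
from typing import Callable, List

def backtracking_reasoner(
    prompt: str,
    propose: Callable[[str, List[str]], List[str]],                 # candidate next steps
    logprob_under_pretrained: Callable[[str, List[str], str], float],
    min_logprob: float = -50.0,
    budget: int = 200,
) -> List[str]:
    trace: List[str] = []
    for _ in range(budget):
        candidates = propose(prompt, trace)
        scored = sorted(
            ((logprob_under_pretrained(prompt, trace, c), c) for c in candidates),
            reverse=True,
        )
        if not scored or scored[0][0] < min_logprob:
            if not trace:
                break        # nothing plausible even at the root; give up
            trace.pop()      # backtrack: the pretrained prior says this branch looks bad
            continue
        trace.append(scored[0][1])
    return trace
```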
There’s an alternative version of language invention where the model invents a better language for (e.g.) maths, then uses that for more ambitious projects, but that language is probably quite intelligible!
When I imagine models inventing a language, what I picture is something like Shinichi Mochizuki’s Inter-universal Teichmüller theory, invented for his purported proof of the abc conjecture. It is clearly something like mathematical English, and you could say it is “quite intelligible” compared to “neuralese”, but in the end it is not very intelligible.
Mathematical reasoning might be specifically conducive to language invention because our ability to automatically verify reasoning means that we can potentially get lots of training data. The reason I expect the invented language to be “intelligible” is that it is coupled (albeit with some slack) to automatic verification.
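A sketch of why verification yields abundant data and keeps the invented notation loosely intelligible: only candidates that a checker accepts become training examples, so whatever “language” the model drifts toward still has to stay parseable by the verifier. `sample_candidate_proof` and `formal_verifier` are hypothetical stand-ins (e.g. for a Lean-style proof checker).

```python
# Illustrative sketch: verification-coupled data collection for proof generation.
from typing import Callable, List, Tuple

def collect_verified_data(
    problems: List[str],
    sample_candidate_proof: Callable[[str], str],
    formal_verifier: Callable[[str, str], bool],
    samples_per_problem: int = 32,
) -> List[Tuple[str, str]]:
    dataset: List[Tuple[str, str]] = []
    for problem in problems:
        for _ in range(samples_per_problem):
            proof = sample_candidate_proof(problem)
            if formal_verifier(problem, proof):
                dataset.append((problem, proof))  # verified pairs feed the next training round
                break
    return dataset
```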