o1 is a bad idea
This post comes a bit late with respect to the news cycle, but I argued in a recent interview that o1 is an unfortunate twist on LLM technologies, making them particularly unsafe compared to what we might otherwise have expected:
The basic argument is that the technology behind o1 doubles down on a reinforcement learning paradigm, which puts us closer to the world where we have to get the value specification exactly right in order to avert catastrophic outcomes.
Additionally, this technology takes us further from interpretability. If you ask GPT4 to produce a chain-of-thought (with prompts such as “reason step-by-step to arrive at an answer”), you know that in some sense, the natural-language reasoning you see in the output is how it arrived at the answer.[1] This is not true of systems like o1. The o1 training rewards any pattern which results in better answers. This can work by improving the semantic reasoning which the chain-of-thought apparently implements, but it can also work by promoting subtle styles of self-prompting. In principle, o1 can learn a new internal language which helps it achieve high reward.
You can tell the RL is done properly when the models cease to speak English in their chain of thought
- Andrej Karpathy
A loss of this type of (very weak) interpretability would be quite unfortunate from a practical safety perspective. Technology like o1 moves us in the wrong direction.
Informal Alignment
The basic technology currently seems to have the property that it is “doing basically what it looks like it is doing” in some sense. (Not a very strong sense, but at least, some sense.) For example, when you ask ChatGPT to help you do your taxes, it is basically trying to help you do your taxes.
This is a very valuable property for AI safety! It lets us try approaches like Cognitive Emulation.
In some sense, the Agent Foundations program at MIRI sees the problem as: human values are currently an informal object. We can only get meaningful guarantees for formal systems. So, we need to work on formalizing concepts like human values. Only then will we be able to get formal safety guarantees.
Unfortunately, fully formalizing human values appears to be very difficult. Human values touch upon basically all of the human world, which is to say, basically all informal concepts. So it seems like this route would need to “finish philosophy” by making an essentially complete bridge between formal and informal. (This is, arguably, what approaches such as Natural Abstractions are attempting.)
Approaches similar to Cognitive Emulation lay out an alternative path. Formalizing informal concepts seems hard, but it turns out that LLMs “basically succeed” at importing all of the informal human concepts into a computer. GPT4 does not engage in the sorts of naive misinterpretations which were discussed in the early days of AI safety. If you ask it for a plan to manufacture paperclips, it doesn’t think the best plan would involve converting all the matter in the solar system into paperclips. If you ask for a plan to eliminate cancer, it doesn’t think the extermination of all biological life would count as a success.
We know this comes with caveats; phenomena such as adversarial examples show that the concept-borders created by modern machine learning are deeply inhuman in some ways. The computerized versions of human commonsense concepts are not robust to optimization. We don’t want to naively optimize these rough mimics of human values.
Nonetheless, these “human concepts” seem robust enough to get a lot of useful work out of AI systems, without automatically losing sight of ethical implications such as the preservation of life. This might not be the sort of strong safety guarantee we would like, but it’s not nothing. We should be thinking about ways to preserve these desirable properties going forward. Systems such as o1 threaten this.
- ^
Yes, this is a fairly weak sense. There is a lot of computation under the hood in the big neural network, and we don’t know exactly what’s going on there. However, we also know “in some sense” that the computation there is relatively weak. We also know it hasn’t been trained specifically to cleverly self-prompt into giving a better answer (unlike o1); it “basically” interprets its own chain-of-thought as natural language, the same way it interprets human input.
So, to the extent that the chain-of-thought helps produce a better answer in the end, we can conclude that this is “basically” improved due to the actual semantic reasoning which the chain-of-thought apparently implements. This reasoning can fail for systems like o1.
I believe o1-type models that are trained to effectively reason out loud may actually be better for AI safety than the alternative. However, this is conditional on their CoTs being faithful, even after optimization by RL. I believe that scenario is entirely possible, though not necessarily something that will happen by default. See the case for CoT unfaithfulness is overstated, my response with specific ideas for ensuring faithfulness, and Daniel Kokotajlo’s doc along similar lines.
There seems to be quite a lot of low-hanging fruit here! I’m optimistic that highly-faithful CoTs can demonstrate enough momentum and short-term benefits to win out over uninterpretable neuralese, but it could easily go the other way. I think way more people should be working on methods to make CoT faithfulness robust to optimization (and how to minimize the safety tax of such methods).
It depends on what the alternative is. Here’s a hierarchy:
1. Pretrained models with a bit of RLHF, and lots of scaffolding / bureaucracy / language model programming.
2. Pretrained models trained to do long chains of thought, but with some techniques in place (e.g. paraphraser, shoggoth-face, see some of my docs) that try to preserve faithfulness.
3. Pretrained models trained to do long chains of thought, but without those techniques, such that the CoT evolves into some sort of special hyperoptimized lingo
4. As above except with recurrent models / neuralese, i.e. no token bottleneck.
Alas, my prediction is that while 1>2>3>4 from a safety perspective, 1<2<3<4 from a capabilities perspective. I predict that the corporations will gleefully progress down this chain, rationalizing why things are fine and safe even as things get substantially less safe due to the changes.
Anyhow I totally agree on the urgency, tractability, and importance of faithful CoT research. I think that if we can do enough of that research fast enough, we’ll be able to ‘hold the line’ at stage 2 for some time, possibly long enough to reach AGI.
The good news I’ll share is that some of the most important insights about the safety/alignment work done on LLMs do transfer over pretty well to a lot of plausible AGI architectures, so while there’s a little safety loss each time you go from 1 to 4, a lot of the theoretical ways to achieve alignment of these new systems remain intact, though the danger here is that the implementation difficulty pushes the safety tax too high, which is a pretty real concern.
Specifically, the insights I’m talking about are the controllability of AI with data, combined with their feedback on RL being way denser than human RL from evolution, meaning that instrumental convergence is affected significantly.
I think that IF we could solve alignment for level 2 systems, many of the lessons (maybe most? Maybe if we are lucky all?) would transfer to level 4 systems. However, time is running out to do that before the industry moves on beyond level 2.
Let me explain.
Faithful CoT is not a way to make your AI agent share your values, or truly strive towards the goals and principles you dictate in the Spec. Instead, it’s a way to read your AI’s mind, so that you can monitor its thoughts and notice when your alignment techniques failed and its values/goals/principles/etc. ended up different from what you hoped.
So faithful CoT is not by itself a solution, but it’s a property which allows us to iterate towards a solution.
Which is great! But we better start iterating fast because time is running out. When the corporations move on to e.g. level 3 and 4 systems, they will be loathe to spend much compute tinkering with legacy level 2 systems, much less train new advanced level 2 systems.
I mostly just agree with your comment here, and IMO the things I don’t exactly get aren’t worth disagreeing too much about, since it’s more in the domain of a reframing than an actual disagreement.
Do you have thoughts on how much it helps that autointerp seems to now be roughly human-level on some metrics and can be applied cheaply (e.g. https://transluce.org/neuron-descriptions), so perhaps we might have another ‘defensive line’ even past stage 2 (e.g. in the case of https://transluce.org/neuron-descriptions, corresponding to the level of ‘granularity’ of autointerp applied to the activations of all the MLP neurons inside an LLM)?
Later edit: Depending on research progress (especially w.r.t. cost effectiveness), other levels of ‘granularity’ might also become available (fully automated) soon for monitoring, e.g. sparse (SAE) feature circuits (of various dangerous/undesirable capabilities), as demo-ed in Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models.
I don’t have much to say about the human-level-on-some-metrics thing. evhub’s mechinterp tech tree was good for laying out the big picture, if I recall correctly. And yeah I think interpretability is another sort of ‘saving throw’ or ‘line of defense’ that will hopefully be ready in time.
There’s a regularization problem to solve for 3.9 and 4, and it’s not obvious to me that glee will be enough to solve it (3.9 = “unintelligible CoT”).
I’m not sure how o1 works in detail, but for example, backtracking (which o1 seems to use) makes heavy use of the pretrained distribution to decide on best next moves. So, at the very least, it’s not easy to do away with the native understanding of language. While it’s true that there is some amount of data that will enable large divergences from the pretrained distribution—and I could imagine mathematical proof generation eventually reaching this point, for example—more ambitious goals inherently come with less data, and it’s not obvious to me that there will be enough data in alignment-critical applications to cause such a large divergence.
There’s an alternative version of language invention where the model invents a better language for (e.g.) maths then uses that for more ambitious projects, but that language is probably quite intelligible!
When I imagine models inventing a language my imagination is something like Shinichi Mochizuki’s Inter-universal Teichmüller theory invented for his supposed proof of abc conjecture. It is clearly something like mathematical English and you could say it is “quite intelligible” compared to “neuralese”, but at the end, it is not very intelligible.
Mathematical reasoning might be specifically conducive to language invention because our ability to automatically verify reasoning means that we can potentially get lots of training data. The reason I expect the invented language to be “intelligible” is that it is coupled (albeit with some slack) to automatic verification.
I’m somewhat surprised by this paragraph. I thought the MIRI position was that they did not in fact predict AIs behaving like this, and the behavior of GPT4 was not an update at all for them. See this comment by Eliezer. I mostly bought that MIRI in fact never worried about AIs going rouge based on naive misinterpretations, so I’m surprised to see Abram saying the opposite now.
Abram, did you disagree about this with others at MIRI, so the behavior of GPT4 was an update for you but not for them, or do you think they are misremembering/misconstructing their earlier thoughts on this matter, or is there a subtle distinction here that I’m missing?
“Misinterpretation” is somewhat ambiguous. It either means not correctly interpreting the intent of an instruction (and therefore also not acting on that intent) or correctly understanding the intent of the instruction while still acting on a different interpretation. The latter is presumably what the outcome pump was assumed to do. LLMs can apparently both understand and act on instructions pretty well. The latter was not at all clear in the past.
This is bad, but perhaps there is a silver lining.
If internal communication within the scaffold appears to be in plain English, it will tempt humans to assume the meaning coincides precisely with the semantic content of the message.
If the chain of thought contains seemingly nonsensical content, it will be impossible to make this assumption.
It seems likely that process supervision was used for o1. I’d be curious to what extent it addresses the concerns here, if a supervision model assesses that each reasoning step is correct, relevant, and human-understandable. Even with process supervision, o1 might give a final answer that essentially ignores the process or uses some self-prompting. But process supervision also feels helpful, especially when the supervising model is more human-like, similar to pre-o1 models.
Process supervision seems like a plausible o1 training approach but I think it would conflict with this:
I think it might just be outcome-based RL, training the CoT to maximize the probability of correct answers or maximize human preference reward model scores or minimize next-token entropy.
It can be both, of course. Start with process supervision but combine it with… something else. It’s hard to learn how to reason from scratch, but it’s also clearly not doing pure strict imitation learning, because the transcripts & summaries are just way too weird to be any kind of straightforward imitation learning of expert transcripts (or even ones collected from users or the wild).
Wouldn’t that conflict with the quote? (Though maybe they’re not doing what they’ve implied in the quote)
I like the intuition behind this argument, which I don’t remember seeing spelled out anywhere else before.
I wonder how much hope one should derive from the fact that, intuitively, RL seems like it should be relatively slow at building new capabilities from scratch / significantly changing model internals, so there might be some way to buy some safety from also monitoring internals (both for dangerous capabilities already existant after pretraining, and for potentially new ones [slowly] built through RL fine-tuning). Related passage with an at least somewhat similar intuition, from https://www.lesswrong.com/posts/FbSAuJfCxizZGpcHc/interpreting-the-learning-of-deceit#How_to_Catch_an_LLM_in_the_Act_of_Repurposing_Deceitful_Behaviors (the post also discusses how one might go about monitoring for dangerous capabilities already existant after pretraining):
I agree with this analysis. I mean, I’m not certain further optimization will erode the interpretability of the generated CoT, its possible the fact its pretrained to use human natural language pushes it in a stable equilibrium, but I don’t think so, there are ways the CoT can become less interpretable in a step-wise fashion.
But this is the way its going, seems inevitable to me. Just scaling up models and then training them on English language internet text, is clearly less efficient (from a “build AGI” perspective, and from a profit-perspective) than training them to do the specific tasks that the users of the technology want. So thats the way its going.
And once you’re training the models this way, the tether between human-understandable concepts and the CoT will be completely destroyed. If they stay together, it will just be because its kind of a stable initial condition.
I agree with your technical points, but I don’t think that we could particularly expect the other path. Safety properties of LLMs seem to be desirable from extremely safety-pilled point of view, not from perspective of average capabilities researcher and RL seems to be The Answer to many learning problems.
I appreciated this! I didn’t know much about o1, and this gives me a much better understanding of how it’s different. My brain finds Abram very trustworthy for some reason.