Automated / strongly-augmented safety research.
Bogdan Ionut Cirstea
Gemini 2.0 Flash Thinking is claimed to ‘transparently show its thought process’ (in contrast to o1, which only shows a summary): https://x.com/denny_zhou/status/1869815229078745152. This might be at least somewhat helpful for studying how faithful the chains of thought are (e.g., whether they are steganographic).
A fairer comparison would probably be to actually try hard at building the kind of scaffold which could productively use ~$10k in inference costs. I suspect the resulting agent would probably not do much better than with $100 of inference, but it seems hard to be confident. And it seems harder still to be confident about what will happen even just 3 years from now, given that pretraining compute will probably grow about 10x/year and that there might be stronger pushes towards automated ML.
A related announcement, explicitly targeting ‘building an epistemically sound research agent @elicitorg that can use unlimited test-time compute while keeping reasoning transparent & verifiable’: https://x.com/stuhlmueller/status/1869080354658890009.
The toy task reminded me of the ‘Hidden side objectives’ subsection in section ‘B.1.2 OBFUSCATED SCHEMING REASONING’ of Towards evaluations-based safety cases for AI scheming.
For the SOTA on SWE-bench Verified as of 16-12-2024: ‘it was around $5k for a total run.. around 8M tokens for a single swebench-problem.’
‘That means, around three months, it is possible to achieve performance comparable to current state-of-the-art LLMs using a model with half the parameter size.’
If this trend continues, combined with (better / more extensible) inference scaling laws, it could soon make LM agents much more competitive on many AI R&D capabilities, including at much longer-horizon tasks. E.g., see Figure 11 from RE-Bench: Evaluating frontier AI R&D capabilities of language model agents against human experts.
Also related: Before smart AI, there will be many mediocre or specialized AIs.
Densing Law of LLMs
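The trend quoted above can be sketched numerically. A minimal, hedged illustration (the ~3.3-month halving time and the 70B starting point are assumptions for illustration, not figures taken from the paper):

```python
# Sketch of the Densing Law trend quoted above: if the parameter count
# needed to match a fixed capability level halves roughly every ~3.3 months
# (assumed, roughly matching the quoted ~3 months), the "equivalent" model
# size shrinks quickly.

HALVING_MONTHS = 3.3  # assumed halving time of required parameter count

def equivalent_params(initial_params_b: float, months: float) -> float:
    """Billions of parameters needed, `months` later, to match the
    capability of an `initial_params_b`-billion-parameter model."""
    return initial_params_b * 0.5 ** (months / HALVING_MONTHS)

# A hypothetical 70B-parameter capability level, projected forward:
for m in (0.0, 3.3, 6.6, 9.9, 13.2):
    print(f"after {m:4.1f} months: ~{equivalent_params(70, m):5.1f}B params")
```

Under these assumed numbers, the same capability level would fit in roughly a quarter of the parameters after about half a year, which is the kind of compounding that (together with inference scaling) the comment above is pointing at.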
The kind of instrumental reasoning required for alignment faking seems relevant, including through n-hop latent reasoning; see e.g. section ‘B.1.3 HIDDEN SCHEMING REASONING’ of Towards evaluations-based safety cases for AI scheming. I wouldn’t be too surprised if models could currently bypass this through shortcuts, but a mix of careful data filtering and unlearning of memorized facts about deceptive learning, as suggested in https://www.lesswrong.com/posts/9AbYkAy8s9LvB7dT5/the-case-for-unlearning-that-removes-information-from-llm#Information_you_should_probably_remove_from_the_weights, could force models to externalize their reasoning (which should be much easier to monitor than latent reasoning) if they were to try to alignment-fake. Steganography would be another threat model here, though, as discussed e.g. in section ‘B.1.2 OBFUSCATED SCHEMING REASONING’ of the same paper.
I don’t dispute that transformers can memorize shortcuts. I do dispute their ability to perform latent (opaque) multi-hop reasoning robustly. And I think this should be (very) non-controversial; e.g. Mor Geva has many papers on this topic.
I’m pointing out that transformers seem really bad at internal multi-hop reasoning: currently they can’t even do 2-hop reasoning robustly, 3-hop seems out of the question right now, and scaling doesn’t seem to help much either (see e.g. Figures 2 and 3 in Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?, and note how much more robust and scalable CoT reasoning is). So ‘chain-of-thought but internalized to the model will take over’ seems very unlikely with transformers, and even less likely if basic mitigations like unlearning (e.g. of memorized facts about deceptive learning, as suggested in https://www.lesswrong.com/posts/9AbYkAy8s9LvB7dT5/the-case-for-unlearning-that-removes-information-from-llm#Information_you_should_probably_remove_from_the_weights, to mitigate the possibility of latent scheming) were applied.
Steganography is a separate threat model, but even there I’d interpret current evidence (e.g. Preventing Language Models From Hiding Their Reasoning) as mostly positive (as in, even relatively simple mitigations like paraphrasing seem to go very far).
‘I do indeed predict that we will see chain-of-thought become less faithful as model capabilities increase, and that other ways of doing the same thing as chain-of-thought but internalized to the model will take over.’
This prediction seems largely falsified as long as transformers remain the dominant architecture, and especially if we deliberately add optimization pressures towards externalized reasoning and against internal, latent reasoning; see e.g. Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts? and LLMs Do Not Think Step-by-step In Implicit Reasoning.
Thanks for this post!
This looks much worse than I thought it would, both in terms of funding underdeployment, and in terms of overfocusing on evals.
Claude 3.5 Sonnet (New), commenting on the limited scalability of RNNs when prompted with ‘comment on what this would imply for the scalability of RNNs, referring to (parts of) the post’ and fed https://epoch.ai/blog/data-movement-bottlenecks-scaling-past-1e28-flop (relevant to opaque reasoning, out-of-context reasoning, scheming):
‘Based on the article’s discussion of data movement bottlenecks, RNNs (Recurrent Neural Networks) would likely face even more severe scaling challenges than Transformers for several reasons:
Sequential Nature: The article mentions pipeline parallelism and discusses how sequential dependencies affect scaling. From the text:
Pipeline parallelism: Different GPUs handle different sequential blocks, or layers, of a Transformer. The output of one GPU is fed to the next as an input sequentially until an input makes it through all of the blocks of the model.
RNNs have even stronger sequential dependencies than Transformers, as each token’s processing depends directly on the previous token’s hidden state. This would make them much harder to parallelize effectively.
Latency Wall Impact: The article notes that:
The basic reason behind the latency wall is that a bigger model typically requires more gradient steps during training, and if these have to be done in a fixed time window, this means each gradient step must take a shorter time as we scale the models we’re training.
RNNs would hit this latency wall even harder because their sequential processing means they can’t take advantage of some of the parallelization strategies discussed in the article, particularly tensor parallelism which requires:
after every one or two matrix multiplications (depending on the exact implementation) the GPUs need to synchronize results with each other.
Batch Size Scaling: The article suggests that one potential solution to scaling limits is:
If we can find a way to aggressively scale batch sizes together with model sizes, we can push out the latency wall.
However, RNNs are typically trained with truncated backpropagation through time, which limits how much you can effectively increase the batch size without losing long-range dependencies. This would make it harder to apply this particular solution to RNNs.
This analysis suggests that while Transformers are approaching scaling limits around 2e28 FLOP, RNNs would likely hit prohibitive scaling bottlenecks at significantly lower compute levels due to their inherently sequential nature and limited parallelization options.’
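The latency-wall arithmetic in the quoted analysis can be made concrete with a toy calculation. All numbers below are assumptions for illustration only; the Epoch post derives its own estimates:

```python
# Toy sketch of the latency wall: in a fixed training window, more gradient
# steps means less wall-clock time per step, until the budget drops below
# a hardware/architecture latency floor. Both constants are assumed.

TRAIN_WINDOW_S = 90 * 24 * 3600  # assumed fixed ~3-month training window (s)
LATENCY_FLOOR_S = 0.5            # assumed minimum wall-clock time per gradient
                                 # step; higher for more sequential architectures

def per_step_budget(num_steps: int) -> float:
    """Wall-clock seconds available per gradient step in the fixed window."""
    return TRAIN_WINDOW_S / num_steps

# Bigger models typically need more gradient steps, so the per-step budget
# shrinks until it crosses the floor and training no longer fits the window.
for steps in (10**6, 10**7, 10**8):
    budget = per_step_budget(steps)
    status = "ok" if budget >= LATENCY_FLOOR_S else "hits latency wall"
    print(f"{steps:.0e} steps -> {budget:.3f} s/step ({status})")
```

The quoted claim about RNNs then amounts to saying their stronger sequential dependencies raise the effective per-step latency floor (fewer parallelization options), so they cross the wall at lower step counts, i.e. at lower total compute.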
LLMs Do Not Think Step-by-step In Implicit Reasoning
If this generalizes, OpenAI’s Orion, rumored to be trained on synthetic data produced by O1, might see significant gains not just in STEM domains, but more broadly—from O1 Replication Journey—Part 2: Surpassing O1-preview through Simple Distillation, Big Progress or Bitter Lesson?:
‘this study reveals how simple distillation from O1’s API, combined with supervised fine-tuning, can achieve superior performance on complex mathematical reasoning tasks. Through extensive experiments, we show that a base model fine-tuned on simply tens of thousands of samples O1-distilled long-thought chains outperforms O1-preview on the American Invitational Mathematics Examination (AIME) with minimal technical complexity. Moreover, our investigation extends beyond mathematical reasoning to explore the generalization capabilities of O1-distilled models across diverse tasks: hallucination, safety and open-domain QA. Notably, despite training only on mathematical problem-solving data, our models demonstrated strong generalization to open-ended QA tasks and became significantly less susceptible to sycophancy after fine-tuning.’
QwQ-32B-Preview was released open-weights, and seems comparable to o1-preview. Unless they’re gaming the benchmarks, I find it both pretty impressive and quite shocking that a 32B model can achieve this level of performance. Seems like great news vs. opaque (e.g. in-one-forward-pass) reasoning. Less good with respect to proliferation (there don’t seem to be any [deep] algorithmic secrets), misuse and short timelines.
Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?
The above numbers suggest that (as long as sample efficiency doesn’t significantly improve) the world will always have enough compute to produce at least 23 million token-equivalents per second from any model that the world can afford to train (end-to-end, chinchilla-style). Notably, these are many more token-equivalents per second than we currently have human-AI-researcher-seconds per second. (And the AIs would have the further advantage of having much faster serial speeds.)
So once an AI system trained end-to-end can produce similarly much value per token as a human researcher can produce per second, AI research will be more than fully automated. This means that, when AI first contributes more to AI research than humans do, the average research progress produced by 1 token of output will be significantly less than an average human AI researcher produces in a second of thinking.
There’s probably a very similarly-shaped argument to be made based on the difference in cost per token: because LLMs are much cheaper per token, the first time an LLM is as cost-efficient at producing AI research as a human researcher, it should be using many more tokens in its outputs (‘the average research progress produced by 1 token of output will be significantly less than an average human AI researcher produces in 1 token of output’). This, similarly, should be helpful because ‘the token-by-token output of a single AI system should be quite easy for humans to supervise and monitor for danger’.
This framing might be more relevant from the POV of economic incentives to automate AI research (and I’m particularly interested in the analogous incentives to/feasibility of automating AI safety research).
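The cost-parity version of the argument can be illustrated with placeholder numbers. Both prices below are assumptions, not measured figures:

```python
# Toy version of the cost-parity argument above: at equal cost-efficiency,
# one researcher-hour of budget buys many LLM tokens, so each token only
# needs to carry a tiny fraction of an hour's research value.

HUMAN_COST_PER_HOUR = 100.0   # assumed fully-loaded researcher cost, $/hour
LLM_COST_PER_MTOKEN = 10.0    # assumed output price, $ per million tokens

def tokens_per_human_hour() -> float:
    """LLM output tokens purchasable for one researcher-hour of budget."""
    return HUMAN_COST_PER_HOUR / LLM_COST_PER_MTOKEN * 1_000_000

tokens = tokens_per_human_hour()
print(f"{tokens:,.0f} tokens per researcher-hour of budget")
# Required research value per token, in researcher-hours:
print(f"required value per token: {1 / tokens:.1e} researcher-hours")
```

Under these assumed prices, cost parity only requires each output token to carry on the order of a ten-millionth of a researcher-hour of research value, which is why the first cost-competitive systems should be extremely token-hungry (and correspondingly easy to monitor token-by-token).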
I’m very uncertain and feel somewhat out of my depth on this. I do have quite some hope, though, from arguments like those in https://aiprospects.substack.com/p/paretotopian-goal-alignment.
I’ve had similar thoughts previously: https://www.lesswrong.com/posts/wr2SxQuRvcXeDBbNZ/bogdan-ionut-cirstea-s-shortform?commentId=rSDHH4emZsATe6ckF.