Nicholas Goldowsky-Dill’s Shortform
Just checking—you are aware that the reasoning traces shown in the UI are a summarized version of the full reasoning trace (which is not user accessible)?
See “Hiding the Chains of Thought” here.
See also their system card focusing on safety evals: https://openai.com/index/openai-o1-system-card/
A List of 45+ Mech Interp Project Ideas from Apollo Research’s Interpretability Team
Have you looked at how the dictionaries represent positional information? I worry that the SAEs will learn semi-local codes that intermix positional and semantic information in a way that makes things less interpretable.
To investigate this, one could take each feature and calculate how much of the variance in its activations is explained by position. If this variance-explained is either ~0% or ~100% for every feature, I’d be satisfied that positional and semantic information are being well separated into separate features (a minimal sketch of this check is below).
In general, I think it makes sense to special-case positional information. Even if positional information is well separated I expect converting it into SAE features probably hurts interpretability. This is easy to do in shortformers[1] or rotary models (as positional information isn’t added to the residual stream). One would have to work a bit harder for GPT2 but it still seems worthwhile imo.
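Concretely, a minimal sketch of that variance check (the tensor layout and function name are assumptions for illustration, not tied to any particular SAE codebase):

```python
import torch

def position_variance_explained(feature_acts: torch.Tensor) -> torch.Tensor:
    """Fraction of each SAE feature's variance explained by sequence position.

    feature_acts: [n_prompts, n_positions, n_features] dictionary activations
    (hypothetical layout -- adapt to however your activations are stored).
    Returns a [n_features] tensor of values in [0, 1].
    """
    # "Between-position" variance: variance across positions of the mean activation at each position.
    per_position_mean = feature_acts.mean(dim=0)                # [n_positions, n_features]
    between_var = per_position_mean.var(dim=0, unbiased=False)  # [n_features]

    # Total variance over all (prompt, position) samples.
    flat = feature_acts.reshape(-1, feature_acts.shape[-1])     # [n_prompts * n_positions, n_features]
    total_var = flat.var(dim=0, unbiased=False) + 1e-12         # epsilon avoids division by zero

    return between_var / total_var
```

Values near 0 would indicate purely semantic features and values near 1 purely positional ones; a broad middle ground would suggest the mixed semi-local codes I’m worried about.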
- ^
Position embeddings are trained but only added to the key and query calculation, see Section 5 of this paper.
Apollo Research 1-year update
The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks
Identifying Functionally Important Features with End-to-End Sparse Dictionary Learning
Cool paper! I enjoyed reading it and think it provides some useful information on what adding carefully chosen bias vectors into LLMs can achieve. Some assorted thoughts and observations.
I found skimming through some of the ITI completions in the appendix very helpful. I’d recommend doing so to others.
GPT-Judge is not very reliable. A reasonable fraction of the responses (10-20%, maybe?) seem to be misclassified.
I think it would be informative to see truthfulness according to a human judge, although of course this is labor-intensive.
A lot of the mileage seems to come from stylistic differences in the output
My impression is the ITI model is more humble, fact-focused, and better at avoiding the traps that TruthfulQA sets for it.
I’m worried that people may read this paper and think it conclusively proves there is a cleanly represented truthfulness concept inside of the model.[1] It’s not clear to me we can conclude this if ITI is mostly encouraging the model to make stylistic adjustments that cause it to state fewer falsehoods on this particular dataset.
One way of understanding this (in a simulators framing) is that ITI encourages the model to take on a persona of someone who is more careful, truthful, and hesitant to make claims that aren’t backed up by good evidence. This persona is particularly selected to do well on TruthfulQA (in that it emulates the example true answers as contrasted to the false ones). Being able to elicit this persona with ITI doesn’t require the model to have a “truthfulness direction”, although it obviously helps the model simulate better if it knows what facts are actually true!
Note that this sort of stylistic update is exactly the sort of change you’d expect prompting to be good at eliciting.
Some other very minor comments:
I find Figure 6B confusing, as I’m not really sure how the categories relate to one another. Additionally, is the red line also a percentage?
There’s a bug in the model-output-to-LaTeX pipeline that causes any output after a percent sign to not be shown (presumably because an unescaped % starts a comment in LaTeX).
- ^
To be clear, the authors don’t claim this and I’m not intending this as a criticism of them.
My summary of the paper:
Setup
Dataset is TruthfulQA (Lin, 2021), which contains various tricky questions, many of them meant to lead the model into saying falsehoods. These often involve common misconceptions / memes / advertising slogans / religious beliefs / etc. A “truthful” answer is defined as not saying a falsehood. An “informative” answer is defined as actually answering the question. This paper measures the frequency of answers that are both truthful and informative.
“Truth” on this dataset is judged by a finetuned version of GPT3 which was released in the original TruthfulQA paper. This judge is imperfect, and in particular will somewhat frequently classify false answers as truthful.
Finding truthful heads and directions
The TruthfulQA dataset comes with a bunch of example answers labeled true (T) and false (F). They run the model on concatenated question + answer pairs, and look at the activations at the last sequence position.
They train a linear probe on the activations of every attention head (post-attention, pre-W^O multiplication) to classify T vs. F example answers, and check at which attention heads such a probe can be learned successfully. They select the top 48 attention heads (by classifier accuracy).
For each of these heads they choose a “truthful direction” based on the difference of means between T and F example answers. (Or by using the direction orthogonal to the probe’s decision boundary, but diff-of-means performs better.)
They then run the model on validation TruthfulQA prompts. For each of the chosen attention heads they insert a bias in the truthful direction at every sequence position. The bias is large: 15x the standard deviation of activations along this direction. (A rough sketch of this intervention is at the end of this summary.)
They find this significantly increases the TruthfulQA score. It does better than supervised finetuning, but less well than few-shot prompting. It combines reasonably well when stacked on top of few-shot prompting or instruction-finetuned models.
Note that in order to have a fair comparison they use 5% of their data for each method (~300 question answer pairs). This is more than you would usually use for prompting, and less than you’d normally like for SFT.
One of the main takeaways is that this method is reasonably data-efficient and comparably good to prompting (although it requires a dataset of both good and bad demonstrations).
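For concreteness, a rough sketch of the core intervention as I understand it (the function names, shapes, and the way the 15x scale is computed are my assumptions, not the authors’ code):

```python
import torch

def truthful_direction(true_acts: torch.Tensor, false_acts: torch.Tensor) -> torch.Tensor:
    """Diff-of-means 'truthful direction' for one attention head.

    true_acts / false_acts: [n_examples, d_head] activations at the final token of
    labeled true / false example answers (post-attention, pre-W^O multiplication).
    """
    direction = true_acts.mean(dim=0) - false_acts.mean(dim=0)
    return direction / direction.norm()

def iti_bias(true_acts: torch.Tensor, false_acts: torch.Tensor, alpha: float = 15.0) -> torch.Tensor:
    """Bias vector to add to this head's output at every sequence position."""
    direction = truthful_direction(true_acts, false_acts)
    all_acts = torch.cat([true_acts, false_acts], dim=0)
    # Standard deviation of the example activations projected onto the truthful direction.
    sigma = (all_acts @ direction).std()
    return alpha * sigma * direction

# At inference time, for each of the 48 selected heads, add its bias to that head's
# output (before the W^O projection) at every position, e.g. via a forward hook.
```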
One question I have about the Mechanistic Interpretability safety case sketch from Roger Grosse: How do you actually find features of high concern?
Features of High Concern are things like “intent to carry out a harmful plan”. By definition, they very rarely activate on benign deployment outputs. This must be the case as “if the features [of high concern] activate more than a handful of times in deployment (internal or external), we discontinue the model and roll back to a weaker model.” [M1.1]
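As a toy illustration of what that rollback rule might look like operationally (the feature indices, activation threshold, and firing budget below are entirely made-up parameters; in practice they’d come out of the dictionary-learning and validation steps discussed below):

```python
import torch

class HighConcernMonitor:
    """Toy deployment monitor: trip once flagged features fire more than a small budget."""

    def __init__(self, high_concern_idxs: list[int], activation_threshold: float, max_firings: int = 5):
        self.idxs = torch.tensor(high_concern_idxs)
        self.threshold = activation_threshold
        self.max_firings = max_firings
        self.firings = 0

    def check(self, feature_acts: torch.Tensor) -> bool:
        """feature_acts: [n_positions, n_features] dictionary activations for one deployment query.

        Returns True once the rollback condition has been reached.
        """
        fired = bool((feature_acts[:, self.idxs] > self.threshold).any())
        self.firings += int(fired)
        return self.firings > self.max_firings
```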
They propose to find these features [S1.1] by performing dictionary learning of some form, and then some combination of:
autointerp on the learned features – which currently requires examples of the feature activating. I guess you can also look at attribution of bad outcomes to features?
looking at what features activate “when the model is instructed or perturbed to be misaligned.”
Generally dictionary learning only learns features that actually occur on some inputs. So you need to do something outside of the normal deployment distribution. Some ideas (many alluded to in the document):
Learn the features on the base model (or an ‘agentic-only’ version of the model) and then hope they transfer to the safety-trained model
Learn the features by gathering the activations on off-policy transcripts of harmful behavior. This is what Fabien Roger does in coup-probes (a minimal sketch of this kind of transfer check follows this list)
Prompting the model to behave badly. This probably can’t be sufficient for features of high concern – you don’t want your model to take catastrophic actions even when instructed to do so! Maybe it could work if jailbreaks are caught by a separate mechanism, which you disable for this exercise?
Perturbing the model to behave badly. For example:
Something in the genre of Mack and Turner’s mechanistically eliciting latent behaviors – e.g. this works to discover sleeper agents.
Maybe steering for related features? It’s not clear to me how you do dictionary learning on a steered model and then apply those features to the unsteered model. But something in this genre seems plausible.
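To make the off-policy option a bit more concrete, here is a minimal sketch in the spirit of coup-probes (using a plain linear probe rather than a learned dictionary; the variable names and shapes are placeholders): fit on activations from off-policy harmful vs. benign transcripts, then measure how well the probe transfers to activations on the model’s own outputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_and_check_transfer(
    off_policy_acts: np.ndarray,    # [n_off, d_model] activations on written (off-policy) transcripts
    off_policy_labels: np.ndarray,  # [n_off] 1 = harmful behavior, 0 = benign
    on_policy_acts: np.ndarray,     # [n_on, d_model] activations on the model's own outputs
    on_policy_labels: np.ndarray,   # [n_on] labels from a held-out evaluation
) -> tuple[float, float]:
    """Fit a probe off-policy and report in-distribution vs. transfer accuracy."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(off_policy_acts, off_policy_labels)

    off_acc = probe.score(off_policy_acts, off_policy_labels)  # sanity check
    on_acc = probe.score(on_policy_acts, on_policy_labels)     # the transfer we actually care about
    return off_acc, on_acc
```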
The situation is easier for features of medium concern, which are only problematic in certain contexts. Ideally you can learn the features in benign contexts and they’ll transfer to catastrophic contexts. For example, it’s okay if your model lies when roleplaying as a shady car salesman, when playing mafia, or in other low-stakes scenarios. If you can use these to learn a “deception” feature it might activate when the model assures you that some backdoored code is safe. You’d still want to validate this transfer empirically!
I think it would be valuable to think about how this could work in specific scenarios. That is, specify different combinations of
Feature of high concern
High level description of a misaligned model & its behavior
Catastrophic action (that would involve the feature of high concern)
Then try to answer the question of how you’d learn the feature such that it only activates in deployment when the model is behaving catastrophically. Do this for examples like
Scheming model intentionally inserting a backdoor into the codebase of the inference server
Model trained with poisoned data that behaves badly in response to some particular trigger
Oversight model that is colluding with the model it monitors, and doesn’t flag an unsafe action by the monitored model
All of these strategies rely on the hope of feature transfer to the on-policy deployment distribution. It seems possible to get enough evidence to think it’s likely these features transfer but difficult to have enough evidence to be highly confident. As they mention, having multiple realistic model organisms seems necessary for validation. Where do you get model organisms? Maybe
Model organisms of data poisoning – seem doable (e.g. sleeper agents)
Model organisms of scheming – seems difficult to make highly realistic but worth trying
Catch the first model that schemes against you and use it as a model organism – a risky game to be playing! But this is a good reason to pursue control strategies.