PhD student at UCL DARK working on RL, OOD robustness and safety. Interested in self-improvement.
RobertKirk
A quick technical question: In the comparison to fine-tuning results in Section 6 where you stack CAA with fine-tuning, do you find a new steering vector after each fine-tune, or are you using the same steering vector for all fine-tuned models? My guess is you’re doing the former as it’s likely to be more performant, but I’d be interested to see what happens if you try to do the latter.
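In case it's unclear what I mean by the two options: my mental model of the CAA vector is just the mean difference in residual-stream activations over contrastive prompt pairs, so the two options would look roughly like the sketch below (get_residual_activations, finetuned_models, pairs and the layer index are all placeholders I'm assuming, not your actual code):

```python
# Rough sketch of the two options; get_residual_activations, finetuned_models,
# pairs and layer=13 are placeholders/assumptions, not the paper's actual code.
import torch

def caa_steering_vector(model, contrast_pairs, layer):
    """Mean difference of activations between positive and negative prompts."""
    diffs = []
    for pos_prompt, neg_prompt in contrast_pairs:
        a_pos = get_residual_activations(model, pos_prompt, layer)
        a_neg = get_residual_activations(model, neg_prompt, layer)
        diffs.append(a_pos - a_neg)
    return torch.stack(diffs).mean(dim=0)

# Former (my guess): re-derive the vector for each fine-tuned model.
per_model_vectors = {name: caa_steering_vector(m, pairs, layer=13)
                     for name, m in finetuned_models.items()}

# Latter: derive once (e.g. on the base model) and reuse across fine-tunes.
shared_vector = caa_steering_vector(base_model, pairs, layer=13)
```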
I think a point of no return exists if you only use small LRs. I think if you can use any LR (or any LR schedule) then you can definitely jump out of the loss basin. You could imagine just choosing a really large LR to basically reset to a random init and then starting again.
I do think that if you want to utilise the pretrained model effectively, you likely want to stay in the same loss basin during fine-tuning.
To check I understand this: is another way of saying it that, in the scaling experiments, there’s effectively a confounding variable, namely model performance on the task:
Improving zero-shot performance decreases deletion-CoT-faithfulness
The model will get the answer right with no CoT, so adding a prefix of correct reasoning is unlikely to change the output, hence a decrease in faithfulness
Model scale is correlated with model performance.
So the scaling experiments show model scale is correlated with less faithfulness, but probably via the correlation with model performance.
If you had a way of measuring the faithfulness conditioned on a given performance for a given model scale then you could measure how scaling up changes faithfulness. Maybe for a given model size you can plot performance vs faithfulness (with each dataset being a point on this plot), measure the correlation for that plot and then use that as a performance-conditioned faithfulness metric? Or there’s likely some causally correct way of measuring the correlation between model scale and faithfulness while removing the confounder of model performance.
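To be concrete, the "causally correct" version I'm imagining is something like a partial correlation: regress zero-shot accuracy out of both scale and faithfulness, then correlate the residuals. A minimal sketch, assuming you have per-(model, dataset) arrays of scale, accuracy and faithfulness (names are mine, not from the paper):

```python
# Minimal sketch of a performance-conditioned scale/faithfulness correlation.
# `scale`, `accuracy`, `faithfulness` are assumed 1D arrays, one entry per
# (model, dataset) pair.
import numpy as np

def residualise(y, x):
    """Residuals of y after a least-squares fit on x (with an intercept)."""
    X = np.stack([np.ones_like(x), x], axis=1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

def performance_conditioned_corr(scale, accuracy, faithfulness):
    r_scale = residualise(np.log(scale), accuracy)   # log parameter count
    r_faith = residualise(faithfulness, accuracy)
    return np.corrcoef(r_scale, r_faith)[0, 1]       # partial correlation
```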
If you train on infinite data, I assume you’d not see a delay between training and testing, but you’d expect a non-monotonic accuracy curve that looks kind of like the test accuracy curve in the finite-data regime? So I assume infinite data is also cheating?
I’ve been using “delayed generalisation”, which I think is more precise than “grokking”, places the emphasis on the delay rather than the speed of the transition, and is a short phrase.
I think the hyperlink for “conv nets without residual streams” is wrong? It’s https://www.westernunion.com/web/global-service/track-transfer for me
This feels kind of like a semantic disagreement to me. To ground it, it’s probably worth considering whether further research on the CCS-style work I posted would also be useful for self-driving cars (or other applications). I think that would depend on whether the work improves the robustness of the contrastive probing regardless of what is being probed for (which would be generically useful), or whether it improves the probing specifically for truthfulness in systems that have a conception of truthfulness, possibly by improving the constraints or adding additional constraints (less useful for other systems). I think both would be good, but I’m uncertain which would be more useful to pursue if one is motivated by reducing x-risk from misalignment.
I think that “don’t kill humans” can’t chain into itself because there’s not a real reason for its action-bids to systematically lead to future scenarios where it again influences logits and gets further reinforced, whereas “drink juice” does have this property.
I’m trying to understand why the juice shard has this property. Which of these (if any) is the explanation for this:
Bigger juice shards will bid on actions which will lead to juice multiple times over time, as they push the agent towards juice from quite far away (both temporally and spatially), and hence will be strongly reinforced when the reward comes, even though it’s only a single reinforcement event (actually getting the juice).
Juice will be acquired more with stronger juice shards, leading to a kind of virtuous cycle, assuming that getting juice always yields positive reward (or positive advantage/reinforcement, to avoid zero-point issues)
The first seems at least plausibly to also apply to “avoid moldy food”, if it requires multiple steps of planning to avoid moldy food (throwing out moldy food, buying fresh ingredients and then cooking them, etc.)
The second does seem to be more specific to juice than mold, but it seems to me that’s because getting juice is rare, and is something we can get better and better at, whereas avoiding moldy food is something that’s fairly easy to learn, and past that there’s not much reinforcement to happen. If that’s the case, then I kind of see that as being covered by the rare-states explanation in my previous comment, or maybe an extension of that to “rare states and skills in which improvement leads to more reward”.
Having just read tailcalled’s comment, I think that is in some sense another way of phrasing what I was trying to say, where rare (but not too rare) states are likely to mean that policy-caused variance is high on those decisions. Probably policy-caused variance is more fundamental/closer as an explanation to what’s actually happening in the learning process, but maybe states of a certain rarity which are high-reward/reinforcement are one possible environmental feature that produces policy-caused variance.
So I struggle to think of an example of a tool that is good for finding insidious misaligning bugs but not others.
One example: a tool designed to detect whether a model is being truthful (correctly representing what it knows about the world to the user), perhaps based on specific properties of truthfulness (for example see https://www.alignmentforum.org/posts/L4anhrxjv8j2yRKKp/how-discovering-latent-knowledge-in-language-models-without), doesn’t seem like it would be useful for improving the reliability of self-driving cars, since self-driving cars likely aren’t misaligned in the sense that they could drive perfectly safely but choose not to; rather, they’re just unable to drive perfectly safely because some of their internal (learned) systems aren’t sufficiently robust.
Not Paul, but some possibilities why ARC’s work wouldn’t be relevant for self-driving cars:
The stuff Paul said about them aiming at understanding quite simple human values (don’t kill us all, maintain our decision-making power) rather than subtle things. It’s likely for self-driving cars we’re more concerned with high reliability and hence would need to be quite specific. E.g., maybe ARC’s approach could discern whether a car understands whether it’s driving on the road or not (seems like a fairly simple concept), but not whether it’s driving in a riskier way than humans in specific scenarios.
One of the problems that I think ARC is worried about is ontology identification, which seems like a meaningfully different problem for sub-human systems (whose ontologies are worse than ours, so in theory could be injected into ours) than for human-level or super-human systems (where that may not hold). Hence focusing on the super-human case would look weird and possibly not helpful for the subhuman case, although it would be great if they could solve all the cases in full generality.
Maybe once it works ARC’s approach could inform empirical work which helps with self-driving cars, but if you were focused on actually doing the thing for cars you’d just aim directly at that, whereas ARC’s approach would be a very roundabout and needlessly complex and theoretical way of solving the problem (this may or may not actually be the case, maybe solving this for self-driving cars is actually fundamentally difficult in the same way as for ASI, but it seems less likely).
I found it useful to compare a shard that learns to pursue juice (positive value) to one that avoids eating mouldy food (prohibition), just so they’re on the same kind of framing/scale.
It feels like a possible difference between prohibitions and positive values is that positive values specify a relatively small portion of the state space that is good/desirable (there are not many states in which you’re drinking juice), and hence possibly activate less frequently, or only when such parts of the state space are accessible, whereas prohibitions specify a large part of the state space that is bad (but not so large that its complement becomes a small portion—there are perhaps many potential states where you eat mouldy food, but the complement of that set is still nowhere near as small as the set of states where you’re drinking juice). The first feels more suited to forming longer-term plans towards the small desirable part of the state space (cf. this definition of optimisation), whereas the second is less so. Shards that start doing optimisation like this are hence more likely to become agentic/self-reflective/meta-cognitive etc.
In effect, positive values are more likely/able to self-chain because they actually (kind of, implicitly) specify optimisation goals, and hence shards can optimise them, and hence grow and improve that optimisation power, whereas prohibitions specify a much larger desirable state set, and so don’t require or encourage optimisation as much.
As an implication of this, I could imagine that in most real-world settings “don’t kill humans” would act as you describe, but in environments where it’s very easy to accidentally kill humans, such that states where you don’t kill humans are actually very rare, then the “don’t kill humans” shard could chain into itself more, and hence become more sophisticated/agentic/reflective. Does that seem right to you?
Thanks for the answer! I feel uncertain whether that suggestion is an “alignment” paradigm/method though—either these formally specified goals don’t cover most of the things we care about, in which case this doesn’t seem that useful, or they do, in which case I’m pretty uncertain how we can formally specify them—that’s kind of the whole outer alignment problem. Also, there is still (weaker) pressure to produce outputs that look good to humans, if humans are searching over goals to find those that produce good outputs. I agree it’s further away, but that seems like it could also be a bad thing, if it makes it harder to pressure the models to actually do what we want in the first place.
I still don’t think you’ve proposed an alternative to “training a model with human feedback”. “maintaining some qualitative distance between the optimisation target for an AI model and the human “does this look good?” function” sounds nice, but how do we even do that? What else should we optimise the model for, or how should we make it aligned? If you think the solution is use AI-assisted humans as overseers, then that doesn’t seem to be a real difference with what Buck is saying. So even if he actually had written that he’s not aware of an alternative to “training a model with human/overseer feedback”, I don’t think you’ve refuted that point.
An existing example of something like the difference between amortised and direct optimisation is doing RLHF (w/o KL penalties to make the comparison exact) vs doing rejection sampling (RS) with a trained reward model. RLHF amortises the cost of directly finding good outputs according to the reward model, such that at evaluation the model can produce good outputs with a single generation, whereas RS requires no training on top of the reward model, but uses lots more compute at evaluation by generating and filtering with the RM. (This case doesn’t exactly match the description in the post as we’re using RL in the amortised optimisation rather than SL. This could be adjusted by gathering data with RS, and then doing supervised fine-tuning on that RS data, and seeing how that compares to RS).
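For concreteness, the RS side of that comparison is just best-of-n against the reward model, something like the sketch below (policy.sample and reward_model.score are stand-ins I'm assuming for whatever generation/scoring interface is being used, not a real API):

```python
# Sketch of rejection sampling (best-of-n) against a trained reward model.
# `policy.sample` and `reward_model.score` are assumed interfaces, not a real API.
def rejection_sample(prompt, policy, reward_model, n=64):
    candidates = [policy.sample(prompt) for _ in range(n)]
    scores = [reward_model.score(prompt, c) for c in candidates]
    best_idx = max(range(n), key=lambda i: scores[i])
    return candidates[best_idx]  # keep only the highest-reward completion
```

The amortised comparison I mention would then just be supervised fine-tuning on these best-of-n outputs.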
Given we have these two types of optimisation, I think two key things to consider are how each type of optimisation interacts with Goodhart’s Law, and how they both generalise (kind of analogous to outer/inner alignment, etc.):
The work on overoptimisation scaling laws in this setting shows that, at least on distribution, there does seem to be a meaningful difference in the over-optimisation behaviour between the two types of optimisation—as shown by the different functional forms for RS vs RLHF (I’ve written out the forms I remember below).
I think the generalisation point is most relevant when we consider that the optimisation process used (either in direct optimisation to find solutions, or in amortised optimisation to produce the dataset to amortise) may not generalise perfectly. In the setting above, this corresponds to the reward model not generalising perfectly. It would be interesting to see a similar investigation as the overoptimisation work but for generalisation properties—how does the generalisation of the RLHF policy relate to the generalisation of the RM, and similarly to the RS policy? Of course, over-optimisation and generalisation probably interact, so it may be difficult to disentangle whether poor performance under distribution shift is due to over-optimisation or misgeneralisation, unless we have a gold RM that also generalises perfectly.
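For reference, these are the functional forms I have in mind from that paper, written from memory (so treat them as my recollection rather than a quotation), with d defined as the square root of the KL divergence between the optimised policy and the initial policy:

```latex
% As I recall them (not a quotation): d := \sqrt{D_{\mathrm{KL}}(\pi \,\|\, \pi_{\mathrm{init}})}
R_{\mathrm{BoN}}(d) = d\,\bigl(\alpha_{\mathrm{BoN}} - \beta_{\mathrm{BoN}}\, d\bigr)
\qquad\text{vs.}\qquad
R_{\mathrm{RL}}(d) = d\,\bigl(\alpha_{\mathrm{RL}} - \beta_{\mathrm{RL}} \log d\bigr)
```

The relevant qualitative difference is just that the correction term is linear in d for best-of-n but logarithmic in d for RL.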
Instead, Aligned AI used its technology to automatically tease out the ambiguities of the original data.
Could you provide any technical details about how this works? Otherwise I don’t know what to take from this post.
Question: How do we train an agent which makes lots of diamonds, without also being able to robustly grade expected-diamond-production for every plan the agent might consider?
I thought you were about to answer this question in the ensuing text, but it didn’t feel to me like you gave an answer. You described the goal (values-child), but not how the mother would produce values-child rather than evaluation-child. How do you do this?
You might well expect that features just get ignored below some threshold and monosemantically represented above it, or it could be that you just always get a polysemantic morass in that limit
I guess the recent work on Polysemanticity and Capacity seems to suggest the latter case, especially in sparser settings, given the zone where multiple features are represented polysemantically, although I can’t remember if they investigate power-law feature frequencies or just uniform frequencies.
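(By power-law vs uniform frequencies I mean something like the following toy data generator, where per-feature activation probabilities either follow a Zipf-like decay or are all equal; the specific parameters are illustrative only, not taken from either paper:)

```python
# Toy generator for sparse features with power-law (Zipf-like) vs uniform
# activation frequencies; parameters are illustrative only.
import numpy as np

def sample_features(n_samples, n_features, base_sparsity=0.05, exponent=1.0,
                    power_law=True, seed=0):
    rng = np.random.default_rng(seed)
    if power_law:
        freqs = base_sparsity / np.arange(1, n_features + 1) ** exponent
    else:
        freqs = np.full(n_features, base_sparsity)
    active = rng.random((n_samples, n_features)) < freqs   # which features fire
    magnitudes = rng.random((n_samples, n_features))        # feature values when active
    return active * magnitudes
```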
were a little concerned about going down a rabbit hole given some of the discussion around whether the results replicated, which indicated some sensitivity to optimizer and learning rate.
My impression is that that discussion was more about whether the empirical results (i.e. do ResNets have linear mode connectivity?) held up, rather than whether the methodology used and present in the code base could be used to find whether linear mode connectivity is present between two models (up to permutation) for a given dataset. I imagine you could take the code and easily adapt it to check for LMC between two trained models pretty quickly (it’s something I’m considering trying to do as well, hence the code requests).
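Concretely, the check I have in mind (once the two models’ weights have been permutation-aligned) is just evaluating loss along the straight line between them and looking for a barrier; a rough sketch, where evaluate_loss and the models themselves are placeholders:

```python
# Rough sketch of the LMC check: loss along the linear path between two
# permutation-aligned checkpoints. `evaluate_loss(model, loader)` is a placeholder.
import copy
import torch

def interpolate_state_dicts(sd_a, sd_b, alpha):
    # Interpolate float tensors; copy anything else (e.g. integer buffers) from sd_a.
    return {k: (1 - alpha) * v + alpha * sd_b[k] if v.is_floating_point() else v
            for k, v in sd_a.items()}

def loss_barrier(model, sd_a, sd_b, loader, n_points=11):
    losses = []
    for alpha in torch.linspace(0, 1, n_points):
        m = copy.deepcopy(model)
        m.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha.item()))
        losses.append(evaluate_loss(m, loader))
    # Barrier height: worst interpolated loss minus the mean of the endpoints.
    return max(losses) - 0.5 * (losses[0] + losses[-1])
```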
I think (at least in our case) it might be simpler to get at this question, and I think the first thing I’d do to understand connectivity is ask “how much regularization do I need to move from one basin to the other?” So for instance suppose we regularized the weights to directly push them from one basin towards the other, how much regularization do we need to make the models actually hop?
That would definitely be interesting to see. I guess this is kind of presupposing that the models are in different basins (which I also believe, but which hasn’t yet been verified). I also think looking at basins and connectivity would be more interesting in the case where there was more noise, either from initialisation, inherently in the data, or by using a much lower batch size so that SGD is noisy. In this case it’s less likely that the same configuration results in the same basin, but if your interventions are robust to these kinds of noise then it’s a good sign.
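For what it’s worth, the version of your regularisation experiment I’d picture is adding a pull term towards the other model’s weights and sweeping its coefficient until the model hops; a sketch, with task_loss and the parameter pairing as placeholders I’m assuming:

```python
# Sketch of training with a pull towards the other basin's weights; sweep `lam`
# and see when the model hops. `task_loss` is a placeholder for the usual loss,
# `other_params` a list of the other model's parameters in matching order.
def regularised_loss(model, other_params, batch, lam):
    loss = task_loss(model, batch)
    pull = sum(((p - q.detach()) ** 2).sum()
               for p, q in zip(model.parameters(), other_params))
    return loss + lam * pull
```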
Good question! We haven’t tried that precise experiment, but have tried something quite similar. Specifically, we’ve got some preliminary results from a prune-and-grow strategy (holding sparsity fixed, pruning smallest-magnitude weights, enabling non-sparse weights) that does much better than a fixed sparsity strategy.
I’m not quite sure how to interpret these results in terms of the lottery ticket hypothesis though. What evidence would you find useful to test it?
That’s cool, looking forward to seeing more detail. I think these results don’t seem that related to the LTH (if I understand your explanation correctly), as LTH involves finding sparse subnetworks in dense ones. Possibly it only actually holds in models with many more parameters; I haven’t seen it investigated in models that aren’t overparametrised in a classical sense.
I think if iterative magnitude pruning (IMP) on these problems produced much sparser subnetworks that also maintained the monosemanticity levels, then that would suggest that sparsity doesn’t penalise monosemanticity (or polysemanticity) in this toy model, and also (much more speculatively) that the sparse well-performing subnetworks that IMP finds in other networks possibly also maintain their levels of poly/mono-semanticity. If we also think these networks are favoured towards poly or mono, then that hints at how the overall learning process is favoured towards poly or mono.
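For reference, the IMP procedure I mean is roughly the following loop: train, prune the smallest-magnitude surviving weights, rewind the rest to their initial values, and repeat. A sketch, where train, the model and the hyperparameters are placeholders:

```python
# Sketch of lottery-ticket-style iterative magnitude pruning (IMP).
# `train(model, masks)` is a placeholder that trains while keeping masked
# weights at zero; hyperparameters are illustrative.
import copy
import torch

def imp(model, train, rounds=10, prune_frac=0.2):
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train(model, masks)
        # Global magnitude threshold over the surviving (unmasked) weights.
        scores = torch.cat([(p * masks[n]).abs().flatten()
                            for n, p in model.named_parameters()])
        threshold = torch.quantile(scores[scores > 0], prune_frac)
        for n, p in model.named_parameters():
            masks[n] = masks[n] * (p.abs() > threshold).float()
            p.data = init_state[n] * masks[n]   # rewind survivors to init
    return masks
```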
This work looks super interesting, definitely keen to see more!
Will you open-source your code for running the experiments and producing plots? I’d definitely be keen to play around with it. (They already did here: https://github.com/adamjermyn/toy_model_interpretability I just missed it. Thanks! Although it would be useful to have the plotting code as well, if that’s easy to share?)

Note that we primarily study the regime where there are more features than embedding dimensions (i.e. the sparse feature layer is wider than the input) but where features are sufficiently sparse that the number of features present in any given sample is smaller than the embedding dimension. We think this is likely the relevant limit for e.g. language models, where there are a vast array of possible features but few are present in any given sample.
I agree that N (true feature dimension) > d (observed dimension), and that sparsity will be high, but I’m uncertain whether the other part of the regime (that you don’t mention here), that k (model latent dimension) > N, is likely to be true. Do you think that is likely to be the case? As an analogy, I think the intermediate feature dimensions in MLP layers in transformers (analogously k) are much lower dimension than the “true intrinsic dimension of features in natural language” (analogously N), even if it is larger than the input dimension (embedding dimension * num_tokens, analogously d). So I expect k < N, whereas in your regime k > N. Do you think you’d be able to find monosemantic networks for k < N? Did you try out this regime at all? (I don’t think I could find it in the paper.)
In the paper you say that you weakly believe that monosemantic and polysemantic network parametrisations are likely in different loss basins, given they’re implementing very different algorithms. I think (given the size of your networks) it should be easy to test for at least linear mode connectivity with something like git re-basin (https://github.com/samuela/git-re-basin). Have you tried doing that? I think there are also algorithms for finding non-linear (e.g. quadratic) mode connectivity, although I’m less familiar with them. If it is the case that they’re in different basins, I’d be curious to see whether there are just two basins (poly vs mono), or a basin for each level of monosemanticity, or if even within a level of polysemanticity there are multiple basins. If it’s one of the former cases, it’d be interesting to do something like the connectivity-based fine-tuning talked about here (https://openreview.net/forum?id=NZZoABNZECq, in effect optimise for a new parametrisation that is linearly disconnected from the previous one), and see if doing that from a polysemantic initialisation can produce a more monosemantic one, or if it just becomes polysemantic in a different way.
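(The weight-matching step itself is also simple enough to hack up for a one-hidden-layer model, roughly as below, using scipy’s linear_sum_assignment; the git re-basin repo does the proper coordinate-descent version over all layers, and biases would need permuting too. The shapes and names here are my assumptions about the setup.)

```python
# Rough one-hidden-layer sketch of weight matching: find the permutation of
# model B's hidden units that best matches model A, then apply it.
# W1 has shape (hidden, in), W2 has shape (out, hidden); biases omitted.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hidden_units(W1_a, W1_b, W2_a, W2_b):
    sim = W1_a @ W1_b.T + W2_a.T @ W2_b      # unit-to-unit similarity
    _, perm = linear_sum_assignment(-sim)    # maximise total similarity
    return W1_b[perm], W2_b[:, perm], perm   # permuted copy of model B
```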
You also mentioned your initial attempts at sparsity through a hard-coded initially sparse matrix failed; I’d be very curious to see whether a lottery ticket-style iterative magnitude pruning was able to produce sparse matrices from the high-latent-dimension monosemantic networks that are still monosemantic, or more broadly how the LTH interacts with polysemanticity—are lottery tickets less polysemantic, or more, or do they not really change the monosemanticity?
If my understanding of the bias decay method is correct, is a large initial part of training only reducing the bias (through weight decay) until certain neurons start firing? If that’s the case, could you calculate the maximum output in the latent dimension on the dataset at the start of training (say B), and then initialise the bias to be just below -B, so that you skip almost all of the portion of training that’s only moving the bias term? You could do this per-neuron or just maxing over neurons. Or is this portion of training relatively small compared to the rest of training, and the slower convergence is more due to fewer neurons getting gradients even when some of them are outputting higher than the bias?
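Concretely, the initialisation I’m suggesting would be something like the sketch below, where W is the encoder weight matrix at initialisation and X a (sample of the) dataset; both are assumptions about the setup rather than your actual code:

```python
# Sketch of the bias initialisation I'm suggesting: estimate the largest
# pre-bias activation each latent neuron sees, and start the bias just below
# its negative. `W` (encoder weights) and `X` (data batch) are assumed inputs.
import torch

def init_bias_below_max(W, X, margin=1e-3, per_neuron=True):
    pre_acts = X @ W.T                    # (n_samples, n_latent) pre-bias outputs
    if per_neuron:
        B = pre_acts.max(dim=0).values    # per-neuron max over the data
    else:
        B = pre_acts.max()                # single max over all neurons
    return -(B + margin)                  # initialise bias just below -B
```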
Thanks for writing the post, and it’s great to see that (at least implicitly) lots of the people doing mechanistic interpretability (MI) are talking to each other somewhat.
Some comments and questions:
I think “science of deep learning” would be a better term than “deep learning theory” for what you’re describing, given that I think all the phenomena you list aren’t yet theoretically grounded or explained in a mathematical way, and are rather robust empirical observations. Deep learning theory could be useful, especially if it had results concerning the internals of the network, but I think that’s a different genre of work to the science of DL work.
In your description of the relevance of the lottery ticket hypothesis (LTH), it feels like a bit of a non-sequitur to immediately discuss removing dangerous circuits at initialisation. I guess you think this is because lottery tickets are in some way about removing circuits at the beginning of training (although currently we only know how to find out which circuits by getting to the end of training)? I think the LTH potentially has broader relevance for MI, i.e.: if lottery tickets do exist and are of equal performance, then it’s possible they’d be easier to interpret (due to increased sparsity); or just understanding what the existence of lottery tickets means for what circuits are more likely to emerge during neural network training.
When you say “Automating Mechanistic Interpretability research”, do you mean automating (1) the task of interpreting a given network (automating MI), or automating (2) the research of building methods/understanding/etc. that enable us to better-interpret neural networks (automating MI Research)? I realise that a lot of current MI research, even if the ultimate goal is (2), is mostly currently doing (1) as a first step.
Most of the text in that section implies automating (1) to me, but “Eventually, we might also want to automate the process of deciding which interventions to perform on the model to improve AI safety” seems to lean more towards automating (2), which comes under the general approach of automating alignment research. Obviously it would be great to be able to do both of them, but automating (1) seems both much more tractable, and also probably necessary to enable scalable interpretability of large models, whereas (2) is potentially less necessary for MI research to be useful for AI safety.
How would you imagine doing this? I understand your hypothesis to be “If a model generalises as if it’s a mesa-optimiser, then it’s better-described as having simplicity bias”. Are you imagining training systems that are mesa-optimisers (perhaps explicitly using some kind of model-based RL/inference-time planning and search/MCTS), and then trying to see if they tend to learn simple cross-episode inner goals which would be implied by a stronger simplicity bias?