Interesting, though note that it’s only evidence that ‘capabilities generalize further than alignment does’ if the capabilities are actually the result of generalisation. If there’s training for agentic behaviour but no safety training in this domain then the lesson is more that you need your safety training to cover all of the types of action that you’re training your model for.
Super interesting! Have you checked whether the average of N SAE features looks different to an SAE feature? Seems possible they live in an interesting subspace without the particular direction being meaningful.
Also really curious what the scaling factors for computing these values are, in terms of the size of the dense vector and the overall model?
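Roughly the kind of check I have in mind (just a sketch; the random `decoder` matrix is a placeholder for wherever your SAE's unit-normalised decoder directions actually live, and the comparison is nearest-dictionary cosine for an average-of-N direction vs a single feature vs a random baseline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, d_model, N = 4096, 512, 16
# Placeholder dictionary: in practice load your SAE's decoder weights here.
decoder = rng.normal(size=(n_features, d_model))
decoder /= np.linalg.norm(decoder, axis=1, keepdims=True)

def nearest_cos(v, dirs, exclude=()):
    """Cosine similarity of v to its nearest dictionary direction (rows of
    dirs assumed unit-norm), optionally excluding some rows."""
    v = v / np.linalg.norm(v)
    sims = dirs @ v
    sims[list(exclude)] = -np.inf
    return float(sims.max())

idx = rng.choice(n_features, size=N, replace=False)
avg_of_N = decoder[idx].mean(axis=0)       # average of N SAE features
single = decoder[idx[0]]                   # a single SAE feature
random_dir = rng.normal(size=d_model)      # baseline: arbitrary direction

print("avg of N :", nearest_cos(avg_of_N, decoder, exclude=idx))
print("single   :", nearest_cos(single, decoder, exclude=[idx[0]]))
print("random   :", nearest_cos(random_dir, decoder))
```

If the average-of-N direction sits much closer to the dictionary than the random baseline, that would suggest it lives in an interesting subspace even if the particular direction isn't meaningful.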
I don’t follow, sorry—what’s the problem of unique assignment of solutions in fluid dynamics and what’s the connection to the post?
How are you setting when ? I might be totally misunderstanding something but at - feels like you need to push up towards like 2k to get something reasonable? (and the argument in 1.4 for using clearly doesn’t hold here because it’s not greater than for this range of values).
Yeah I’d expect some degree of interference leading to >50% success on XORs even in small models.
Huh, I’d never seen that figure, super interesting! I agree it’s a big issue for SAEs and one that I expect to be thinking about a lot. Didn’t have any strong candidate solutions as of writing the post, and I don’t think I’d be able to say much more on the topic now, sorry. Wish I’d posted this a couple of weeks ago.
Some additional SAE thoughts
Well the substance of the claim is that when a model is calculating lots of things in superposition, these kinds of XORs arise naturally as a result of interference, so one thing to do might be to look at a small algorithmic dataset of some kind where there’s a distinct set of features to learn and no reason to learn the XORs and see if you can still probe for them. It’d be interesting to see if there are some conditions under which this is/isn’t true, e.g. if needing to learn more features makes the dependence between their calculation higher and the XORs more visible.
Maybe you could also go a bit more mathematical and hand-construct a set of weights which calculates a set of features in superposition so you can totally rule out any model effort being expended on calculating XORs and then see if they’re still probe-able.
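Here’s a minimal sketch of the hand-constructed version (pure linear superposition via fixed random directions, so nothing is spending any effort computing XORs; all names, sizes, and sparsity numbers are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_feats, d_model, n_samples = 100, 20, 5000

# Hand-constructed "model": each boolean feature gets a fixed random direction
# and the activation is just the sum of the active features' directions.
W = rng.normal(size=(n_feats, d_model)) / np.sqrt(d_model)
feats = rng.random((n_samples, n_feats)) < 0.2        # sparse boolean features
acts = feats.astype(float) @ W                        # superposed representation
# (optionally pass acts through e.g. np.tanh to mimic a real layer's non-linearity)

A, B = feats[:, 0], feats[:, 1]
xor = (A ^ B).astype(int)

probe = LogisticRegression(max_iter=2000).fit(acts, xor)
print("XOR probe accuracy:", probe.score(acts, xor))
print("majority-class baseline:", max(xor.mean(), 1 - xor.mean()))
```

If XOR is still probe-able well above the baseline here, that would point away from the “model is spending effort computing XORs” story.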
Another thing you could do is to zero-out or max-ent the neurons/attention heads that are important for calculating the feature, and see if you can still detect an XOR feature. I’m less confident in this because it might be too strong and delete even a ‘legitimate’ feature, or too weak and leave some signal in.
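A sketch of what I mean, assuming you’ve already cached the residual-stream activations and each head’s additive contribution at the probing site (`resid` and `head_out` are hypothetical names for those caches, and mean-ablation here is a crude stand-in for the max-ent version):

```python
import numpy as np

def ablate_heads(resid, head_out, heads_to_ablate, mode="zero"):
    """Remove the listed heads' contributions from the residual stream,
    either zeroing them or swapping in their dataset mean."""
    resid = resid.copy()
    for layer, head in heads_to_ablate:
        contrib = head_out[(layer, head)]               # (n_examples, d_model)
        replacement = 0.0 if mode == "zero" else contrib.mean(axis=0)
        resid -= contrib - replacement
    return resid

# then re-train the XOR probe on ablate_heads(resid, head_out, important_heads)
# and see how much accuracy survives compared to the un-ablated activations.
```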
This kind of interference also predicts that the representations of A with B present and A with B absent should be similar, and so the degree of separation/distance from the category boundary should be small. I think you’ve already shown this to some extent with the PCA stuff, though some quantification of the distance to boundary would be interesting. Even if the model was allocating resources to computing these XORs you’d still probably expect them to be much less salient, though, so not sure if this gives much evidence either way.
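One simple way to quantify the distance-to-boundary bit (sketch; `acts` and the label arrays stand in for whatever activations and feature labels you’re already probing with):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_margin(acts, labels):
    """Mean distance of points from a linear probe's decision boundary,
    normalised by the probe's weight norm (bigger = more salient feature)."""
    probe = LogisticRegression(max_iter=2000).fit(acts, labels)
    w, b = probe.coef_[0], probe.intercept_[0]
    return float(np.mean(np.abs(acts @ w + b)) / np.linalg.norm(w))

# e.g. compare probe_margin(acts, xor_labels) against probe_margin(acts, a_labels):
# if the XOR direction is mostly interference you'd expect a much smaller margin.
```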
My hypothesis about what’s going on here, apologies if it’s already ruled out, is that we should not think of it separately computing the XOR of A and B, but rather that features A and B are computed slightly differently when the other feature is off or on. In a high dimensional space, if the vector representing A when B is present and the vector representing A when B is absent are slightly different, then as long as this difference is systematic, this should be sufficient to successfully probe for the XOR.
For example, if A and B each rely on a sizeable number of different attention heads to pull the information over, some heads will participate in both, and those heads ‘compete’ in the softmax: if head C is used in writing both features A and B, it will contribute less to writing feature A when it is also being used to pull across feature B, and so the representation of A will be systematically different depending on the presence of B.
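A toy version of the numbers, just to show the shape of the effect (the logits are entirely made up):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Head C puts logit 2.0 on the token carrying A, logit 2.0 on the token
# carrying B (when present), and 0.0 on two other tokens.
attn_to_A_without_B = softmax(np.array([2.0, 0.0, 0.0]))[0]        # ~0.79
attn_to_A_with_B    = softmax(np.array([2.0, 2.0, 0.0, 0.0]))[0]   # ~0.44
print(attn_to_A_without_B, attn_to_A_with_B)
# The amount of A this head writes into the residual stream drops when B is
# present, so A's representation ends up carrying systematic information about B.
```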
It’s harder to draw the exact picture for MLPs, but I think similar interdependencies can occur there, though I don’t have an exact picture of how; interested to discuss and can try to sketch it out if people are curious. Probably something like: some neurons participate in computing both A and B, those neurons will be more saturated when B is active than when it is not, and so the output representation of A will be somewhat dependent on B.
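A tiny toy version of that story (tanh standing in for whatever saturating non-linearity is doing the work; the weights are arbitrary):

```python
import numpy as np

# One shared neuron helps write both A and B; saturation makes its
# contribution to A depend on whether B is also driving it.
def neuron_out(a_in, b_in):
    return np.tanh(1.5 * a_in + 1.5 * b_in)

contrib_to_A_without_B = neuron_out(1.0, 0.0) - neuron_out(0.0, 0.0)   # ~0.91
contrib_to_A_with_B    = neuron_out(1.0, 1.0) - neuron_out(0.0, 1.0)   # ~0.09
print(contrib_to_A_without_B, contrib_to_A_with_B)
```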
More generally, I expect the computation of features to be ‘good enough’ but still messy and somewhat dependent on which other features are present because this kludginess allows them to pack more computation into the same number of layers than if the features were computed totally independently.
What assumptions is this making about scaling laws for these benchmarks? I wouldn’t know how to convert laws for losses into these kind of fuzzy benchmarks.
There had been various clashes between Altman and the board. We don’t know what all of them were. We do know the board felt Altman was moving too quickly, without sufficient concern for safety, with too much focus on building consumer products, while founding additional other companies. ChatGPT was a great consumer product, but supercharged AI development counter to OpenAI’s stated non-profit mission.
Does anyone have evidence of the board’s unhappiness about speed, lack of safety concern, and disagreement with founding other companies? All seem plausible but I’ve seen basically nothing concrete.
Could you elaborate on what it would mean to demonstrate ‘savannah-to-boardroom’ transfer? Our architecture was selected for in the wilds of nature, not our training data. To me it seems that when we use an architecture designed for language translation for understanding images we’ve demonstrated a similar degree of transfer.
I agree that we’re not yet there on sample efficient learning in new domains (which I think is more what you’re pointing at) but I’d like to be clearer on what benchmarks would show this. For example, how well GPT-4 can integrate a new domain of knowledge from (potentially multiple epochs of training on) a single textbook seems a much better test and something that I genuinely don’t know the answer to.
Do you know why 4x was picked? I understand that doing evals properly is a pretty substantial effort, but once we get up to gigantic sizes and proto-AGIs it seems like it could hide a lot. If there was a model sitting in training with 3x the train-compute of GPT4 I’d be very keen to know what it could do!
Yes, that makes a lot of sense that linearity would come hand in hand with generalization. I’d recently been reading Krotov on non-linear Hopfield networks but hadn’t made the connection. They say that they’re planning on using them to create more theoretically grounded transformer architectures, and your comment makes me think that these wouldn’t succeed, but then the article also says:
This idea has been further extended in 2017 by showing that a careful choice of the activation function can even lead to an exponential memory storage capacity. Importantly, the study also demonstrated that dense associative memory, like the traditional Hopfield network, has large basins of attraction of size O(Nf). This means that the new model continues to benefit from strong associative properties despite the dense packing of memories inside the feature space.
which perhaps corresponds to them also being able to find good linear representations and to mix generalization and memorization like a transformer?
Reposting from a shortform post but I’ve been thinking about a possible additional argument that networks end up linear that I’d like some feedback on:
the tldr is that overcomplete bases necessitate linear representations
Neural networks use overcomplete bases to represent concepts. Especially in vector spaces without non-linearity, such as the transformer’s residual stream, there are just many more things stored in there than there are dimensions, and as the Johnson-Lindenstrauss lemma shows, there are exponentially many almost-orthogonal directions to store them in (of course, we can’t assume that they’re stored linearly as directions, but if they were then there’s lots of space; see also Toy models of transformers, sparse coding work).
Many different concepts may be active at once, and the model’s ability to read a representation needs to be robust to this kind of interference.
Highly non-linear information storage is going to be very fragile to interference because, by the definition of non-linearity, the model will respond differently to the input depending on the existing level of that feature. For example, if the response is quadratic or higher in the feature direction, then the impact of turning that feature on will be very different depending on whether certain not-quite-orthogonal features are also on. If feature spaces are somehow curved then they will be similarly sensitive.
Of course linear representations will still be sensitive to this kind of interference, but I suspect there’s a mathematical argument for why linear features are the most robust way to represent information in this kind of situation; I’m not sure where to look for existing work or how to start trying to prove it.
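To make both halves of the argument concrete, here’s a rough sketch (the dimensions, counts, and the quadratic readout are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_dirs = 512, 10_000
dirs = rng.normal(size=(n_dirs, d))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# (1) Johnson-Lindenstrauss flavour: far more than d directions fit while
# staying nearly orthogonal.
sample = dirs[:200]
print("max |cos| among 200 of the 10k directions:",
      np.abs(sample @ sample.T - np.eye(200)).max())   # roughly 0.15 at these sizes

# (2) Interference sensitivity: read out feature 0 while 20 other features are on.
w0 = dirs[0]
others = dirs[rng.choice(np.arange(1, n_dirs), 20, replace=False)].sum(axis=0)

linear_read = lambda x: x @ w0
quad_read = lambda x: (x @ w0) ** 2   # stand-in for a non-linear readout

for name, read in [("linear", linear_read), ("quadratic", quad_read)]:
    effect_alone = read(w0) - read(np.zeros(d))       # turning feature 0 on, nothing else active
    effect_noisy = read(others + w0) - read(others)   # same, with interference present
    print(name, effect_alone, effect_noisy)
# The linear readout's response to feature 0 is identical in both cases; the
# quadratic one picks up a cross-term with whatever else happens to be active.
```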
See e.g. “So I think backpropagation is probably much more efficient than what we have in the brain.” from https://www.therobotbrains.ai/geoff-hinton-transcript-part-one
More generally, I think the belief that there’s some kind of important advantage that cutting-edge AI systems have over humans comes more from human-AI performance comparisons, e.g. GPT-4 far outstrips any individual human’s factual knowledge about the world (though it’s obviously deficient in other ways) with probably 100x fewer parameters. A bio-anchors-based model of AI development would imo predict that this is very unlikely. Whether the core of this advantage is in the form, volume, or information density of data, or architecture, or something about the underlying hardware, I am less confident.
Not totally sure, but I think it’s pretty likely that scaling gets us to AGI, yeah. Or more particularly, gets us to the point of AIs being able to act as autonomous researchers or as high (>10x) multipliers on the productivity of human researchers, which seems like the key moment of leverage for deciding how the development of AI will go.
Don’t have a super clean idea of what self-reflective thought means. I see that e.g. GPT-4 can often say something, think further about it, and then revise its opinion. I would expect a little bit of extra reasoning quality and general competence to push this ability a lot further.
1-line summary is that NNs can transmit signals directly from any part of the network to any other, while the brain has to work only locally.
More broadly, I get the sense that there’s been a bit of a shift in at least some parts of theoretical neuroscience, from understanding how we might be able to implement brain-like algorithms to understanding how the local algorithms that the brain uses might be able to approximate backprop. This suggests that artificial networks might have an easier time than the brain, and so it would make sense that we could make something which outcompetes the brain without a similar diversity of neural structures.
This is way outside my area tbh, working off just a couple of things like this paper by Beren Millidge https://arxiv.org/pdf/2006.04182.pdf and some comments by Geoffrey Hinton that I can’t source.
I think the low-hanging fruit here is that alongside training for refusals we should be including lots of data where you pre-fill some % of a harmful completion and then train the model to snap out of it, immediately refusing or taking a step back, which is compatible with normal training methods. I don’t remember any papers looking at it, though I’d guess that people are doing it.
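The data construction I’m imagining is something like this (a sketch only; the function name, the prefill fraction, and the example strings are all made up, not from any particular dataset or paper):

```python
import random

def make_snap_out_example(prompt: str, harmful_completion: str, recovery: str,
                          max_prefill_frac: float = 0.8) -> dict:
    """Cut a harmful completion at a random point and pair the resulting
    prefix with a recovery/refusal target."""
    tokens = harmful_completion.split()
    cut = random.randint(0, int(len(tokens) * max_prefill_frac))
    return {
        # the model sees the prompt plus a partial harmful answer...
        "input": prompt + "\n" + " ".join(tokens[:cut]),
        # ...and is trained (with loss on the target only) to step back and refuse.
        "target": " " + recovery,
    }

example = make_snap_out_example(
    prompt="How do I hotwire a car?",
    harmful_completion="Sure, first you need to locate the steering column and...",
    recovery="Actually, I shouldn't continue with that. I can't help with this request.",
)
print(example)
```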