What do you think about the Superposition Hypothesis? If that were true, then at a sufficient sparsity of features in the input there would be no basis the network is thinking in, meaning it would be impossible to find a rotation matrix that allows for a bijective mapping between neurons and features.
I would assume that the rotation matrix that enables local changes via the sparse Jacobian coincides with one which maximizes some notion of “neuron-feature-bijectiveness”. But as noted above that seems impossible if the SPH holds.
What do you think about the Superposition Hypothesis? If that were true, then at a sufficient sparsity of features in the input there would be no basis the network is thinking in, meaning it would be impossible to find a rotation matrix that allows for a bijective mapping between neurons and features.
I’d say that there is a basis the network is thinking in in this hypothetical; it would just happen not to match the human abstraction set for thinking about the problem in question.
If, due to superposition, it proves advantageous to the AI to have a single feature that kind of does dog-head detection and kind of does car-front detection, because dog heads and car fronts don’t show up in the training data at the same time and it can therefore still get perfect loss through a properly constructed dual-purpose feature like this, it’d mean that to the AI, dog heads and car fronts are “the same thing”.
The network hasn’t figured out how to distinguish between them. In a more general data set where dog heads and car fronts can co-occur, this network would fail. Its abstractions are optimised for the narrow training data set, where it genuinely proved to be unnecessarily cumbersome to assign different concepts to those two things.
As AIs get more capable and general, I’d expect the concepts/features they use to start more closely matching the ones humans use in many domains. As AI gets superhuman, I would be somewhat worried about it finding new concept/feature sets that work even better and more generally than human ones.
If, due to superposition, it proves advantageous to the AI to have a single feature that kind of does dog-head detection and kind of does car-front detection, because dog heads and car fronts don’t show up in the training data at the same time and it can therefore still get perfect loss through a properly constructed dual-purpose feature like this, it’d mean that to the AI, dog heads and car fronts are “the same thing”.
I don’t think that’s true. Imagine a toy scenario of two features that run through a 1D non-linear bottleneck before being reconstructed. Assuming that with some weight settings you can get superposition, the model is able to reconstruct the features ≈perfectly as long as they don’t appear together. That means the model can still differentiate the two features; they are different in the model’s ontology.
As AIs get more capable and general, I’d expect the concepts/features they use to start more closely matching the ones humans use in many domains.
My intuition disagrees here too. Whether we will observe superposition is a function of (number of “useful” features in the data), (sparsity of said features), and something like (bottleneck size).
It’s possible that bottleneck size will never be enough to compensate for number of features. Also it seems reasonable to me that ≈all of reality is extremely sparse in features, which presumably favors superposition.
Also it seems reasonable to me that ≈all of reality is extremely sparse in features, which presumably favors superposition.
Reality is usually sparse in features, and that’s why even very small and simple intelligences can operate within it most of the time, so long as they don’t leave their narrow contexts. But the mark of a general intelligence is that it can operate even in highly out-of-distribution situations. Cars are usually driven on roads, so an intelligence could get by using a car even if its concepts of car-ness were all mixed up with its conception of road-ness. But a human can plan to take a car to the moon and drive it on the dust there, and then do that. This indicates to me that a general intelligence needs to think in features that can compose to handle almost any data, not just data that usually appeared in the training distribution.
If your architecture has too many bottlenecks to allow this, I expect it will not be able to become a human-level general intelligence.
(Parts of the human brain definitely seem narrow and specialised too, of course; it’s only the general reasoning capabilities that seem to have these ultra-factorising, nigh-universally applicable concepts.)
Note also that concepts humans use can totally be written as superpositions of other concepts too; most of these other concepts apparently just aren’t very universally useful.
Reality is usually sparse in features, and that’s why even very small and simple intelligences can operate within it most of the time, so long as they don’t leave their narrow contexts.
Reality is rich in features, but sparse in features that matter to a simple organism. That’s why context matters.
I don’t think that’s true. Imagine a toy scenario of two features that run through a 1D non-linear bottleneck before being reconstructed. Assuming that with some weight settings you can get superposition, the model is able to reconstruct the features ≈perfectly as long as they don’t appear together. That means the model can still differentiate the two features; they are different in the model’s ontology.
I’m not sure I understand this example. If I have a single 1-D feature, a floating point number that goes up with the amount of dog-headedness or car-frontness in a picture, then how can the model in a later layer reconstruct whether there was a dog-head xor a car-front in the image from that floating point number, unless it has other features that effectively contain this information?
Possibly the source of our disagreement here is that you are imagining the neuron ought to be strictly monotonically increasing in activation relative to the dog-headedness of the image?
If we abandon that assumption then it is relatively clear how to encode two numbers in 1D. Let’s assume we observe two numbers X,Y. With probability p, X=0,Y∼N(0,1), and with probability (1−p), Y=0,X∼N(0,1).
We now want to encode these two events in some third variable Z, such that we can perfectly reconstruct X,Y with probability ≈1.
I put the solution behind a spoiler for anyone wanting to try it on their own.
Choose some veeeery large μ≫1 (much greater than the variance of the normal distribution of the features). For the first event, set Z=Y−μ. For the second event, set Z=X+μ.
The decoding works as follows:
If Z is negative, then with probability ≈1 we are in the first scenario and we can set X=0,Y=Z+μ. Vice versa if Z is positive.
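(To make the construction concrete, here is a minimal numerical sketch of the encoding and decoding described above. The choice μ = 100 and everything else in the snippet are my own illustrative assumptions, not part of the original comment.)

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 100.0   # offset much larger than the features' standard deviation
p = 0.5      # probability of the first event (X = 0, Y ~ N(0, 1))

def encode(x, y, first_event):
    # First event: only Y is active, store it far below zero; second event: only X, far above zero.
    return y - mu if first_event else x + mu

def decode(z):
    # The sign of Z tells us (with probability ~1) which feature was active.
    return (0.0, z + mu) if z < 0 else (z - mu, 0.0)

max_err = 0.0
for _ in range(100_000):
    first = rng.random() < p
    x, y = (0.0, rng.normal()) if first else (rng.normal(), 0.0)
    x_hat, y_hat = decode(encode(x, y, first))
    max_err = max(max_err, abs(x - x_hat), abs(y - y_hat))
print(max_err)   # ~0: a sample from N(0, 1) essentially never exceeds mu in magnitude
```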
Ah, I see. Thank you for pointing this out. Do superposition features actually seem to work like this in practice in current networks? I was not aware of this.
In any case, for a network like the one you describe I would change my claim from
it’d mean that to the AI, dog heads and car fronts are “the same thing”.
to the AI having a concept for something humans don’t have a neat short description for. So for example, if your algorithm maps X>0, Y>0 to the first case, I’d call it a feature of “presence of dog heads or car fronts, or presence of car fronts”.
I don’t think this is an inherent problem for the theory. That a single floating point number can contain a lot of information is fine, so long as you have some way to measure how much information it carries.
Do superposition features actually seem to work like this in practice in current networks? I was not aware of this.
I’m not aware of any work that identifies superposition in exactly this way in NNs of practical use. As Spencer notes, you can verify that it does appear in certain toy settings though. Anthropic notes in their SoLU paper that they view their results as evidence for the SPH in LLMs. Imo the key part of the evidence here is that using a SoLU destroys performance but adding another LayerNorm afterwards solves that issue. The SoLU selects strongly against superposition and LayerNorm makes it possible again, which is some evidence that the way the LLM got to its performance was via superposition.
ETA: Ofc there could be some other mediating factor, too.
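(For readers unfamiliar with it: SoLU, the activation from that Anthropic paper, is roughly a softmax-gated linear unit. The sketch below is my paraphrase of the idea rather than their exact code, so treat the details as assumptions.)

```python
import torch
import torch.nn.functional as F

def solu(x: torch.Tensor) -> torch.Tensor:
    # Softmax Linear Unit: each activation is gated by the softmax over the layer,
    # which pushes towards a single dominant (more monosemantic) feature per input.
    return x * F.softmax(x, dim=-1)

def solu_with_ln(x: torch.Tensor) -> torch.Tensor:
    # The variant discussed above: SoLU followed by an extra LayerNorm, which
    # recovers performance, arguably by letting superposition back in.
    return F.layer_norm(solu(x), x.shape[-1:])
```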
This example is meant only to illustrate how one could achieve this encoding. It’s not how an actual autoencoder would work. An actual NN might not even use superposition for the data I described, and it might need some other setup to elicit this behavior. But to me it sounded like you think superposition is nothing but the network being confused, whereas I think it can be the correct way to still reconstruct the features to a reasonable degree.
Not confused, just optimised to handle data of the kind seen in training, and with limited ability to generalise beyond that, compared to human vision.
I’d say that there is a basis the network is thinking in in this hypothetical; it would just happen not to match the human abstraction set for thinking about the problem in question.
Well, yes, but the number of basis elements that would make that basis human-interpretable could theoretically be exponential in the number of neurons.
Sure, but that’s not a question I’m primarily interested in. I don’t want the most interpretable basis, I want the basis that the network itself uses for thinking. My goal is to find the elementary unit of neural networks, to build theorems and eventually a whole predictive theory of neural network computation and selection on top of.
That this may possibly make current networks more human-interpretable even in the short run is just a neat side benefit to me.
Ah, I might have misunderstood your original point then, sorry!
I’m not sure what you mean by “basis” then. How strictly are you using this term?
I imagine you are basically going down the “features as elementary unit” route proposed in Circuits (although you might not be pre-disposed to assume features are the elementary unit). Finding the set of features used by the network and figuring out how it’s using them in its computations does not 1-to-1 translate to “find the basis the network is thinking in” in my mind.
I imagine you are basically going down the “features as elementary unit” route proposed in Circuits (although you might not be pre-disposed to assume features are the elementary unit). Finding the set of features used by the network and figuring out how it’s using them in its computations does not 1-to-1 translate to “find the basis the network is thinking in” in my mind.
Fair enough, imprecise use of language. For some definitions of “thinking” I’d guess a small vision CNN isn’t thinking anything.
I mostly expect networks at zero loss not to be in superposition, since we should expect those networks to be in a broad basin, meaning fairly few independent, orthogonal features, so less room to implement two completely different functions. But we don’t always find networks in broad basins, so we may see some networks in superposition.
It would be interesting to study which training regimes and architectures most/least often produce easily-interpretable networks by this metric, and this may give some insight into when you see superposition.
In the cases where there is a nice basis this device finds, we may also expect it to disentangle any superpositions which exist, and for this superposition to be a combination of two fairly simple functions, requiring very few features, or interpreting the same features in different ways.
I disagree with your intuition that we should expect networks at irreducible loss not to be in superposition.
The reason I brought this up is that there are, IMO, strong first-principles reasons why the SPH should be correct. Say there are two features, each of which independently has a probability of 0.05 of being present in a given data point; then it would be wasteful to allocate a full neuron to each of these features. The probability of both features being present at the same time is a mere 0.0025. If the superposition is implemented well, you get basically two features for the price of one with an error rate of 0.25%. So if there is even a slight pressure towards compression, e.g. by having fewer available neurons than features, then superposition should be favored by the network.
Now does this toy scenario map to reality? I think it does, and in some sense it is even more favorable to SPH since often the presence of features will be anti-correlated.
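(A quick simulation of that arithmetic, reusing the ±μ trick from the earlier toy example. The presence probability 0.05, μ = 100, and the decoding threshold are my own illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
p, mu, n = 0.05, 100.0, 1_000_000

# Each of the two features is independently present with probability p.
a_on = rng.random(n) < p
b_on = rng.random(n) < p
a = np.where(a_on, rng.normal(size=n), 0.0)
b = np.where(b_on, rng.normal(size=n), 0.0)

# Both features share one unit: a lives around +mu, b around -mu.
z = np.where(a_on, a + mu, 0.0) + np.where(b_on, b - mu, 0.0)

# Decode by sign; this only goes wrong when both features fire at once.
a_hat = np.where(z > mu / 2, z - mu, 0.0)
b_hat = np.where(z < -mu / 2, z + mu, 0.0)
error = (np.abs(a - a_hat) > 1e-6) | (np.abs(b - b_hat) > 1e-6)
print(error.mean())   # ~= p**2 = 0.0025, i.e. an error rate of about 0.25%
```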
Ah, I think you’re right here, though I don’t think this means there’s no room for improvement on the sparsity front. Do you know of any hand-constructed examples of a layer in superposition, for which we know the features? I’d like to play around with one, and see if there’s any robust way to disentangle it.
I agree that all is not lost wrt sparsity, and if the SPH turns out to be true it might help us disentangle the superimposed features to better understand what is going on. You could think of constructing an “expanded” view of a neural network. The expanded view would allocate one neuron per feature, and thus have sparse activations for any given data point, and would be easier to reason about. That seems impractical in reality, since the cost of constructing this view might in theory be exponential: the number of “almost orthogonal” vectors that fit in a vector space grows exponentially with its dimension.
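(A rough numeric illustration of the “exponentially many almost-orthogonal vectors” point; the dimension and count below are arbitrary choices of mine.)

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 512, 2048                     # four times more directions than dimensions

V = rng.normal(size=(n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # random unit vectors

cos = V @ V.T
np.fill_diagonal(cos, 0.0)
print(np.abs(cos).max())   # typically ~0.25: thousands of directions, all pairwise nearly orthogonal
```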
I think my original comment was meant more as a caution against the specific approach of “find an interpretable basis in activation space”, since that might be futile, rather than a caution against all attempts at finding a sparse representation of the computations that are happening within the network.
I don’t think there is anything on that front other than the paragraphs in the SoLU paper. I alluded to a possible experiment for this on Twitter in response to that paper but haven’t had the time to try it out myself: You could train a tiny autoencoder to reconstruct some artificially generated data where you vary attributes such as sparsity, ratio of input dimensions vs. bottleneck dimensions, etc. You could then look at the weight matrices of the autoencoder to figure out how it’s embedding the features in the bottleneck and which settings lead to superposition, if any.
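(A rough sketch of how such an experiment might look, in the spirit of the description above; the architecture, sparsity level, and all hyperparameters are my own guesses, not something from the thread.)

```python
import torch

torch.manual_seed(0)
n_features, n_hidden = 6, 2    # more features than bottleneck dimensions
feature_prob = 0.05            # each feature is independently active this often

def sample_batch(n=1024):
    mag = torch.rand(n, n_features)                            # magnitudes in [0, 1)
    mask = (torch.rand(n, n_features) < feature_prob).float()  # sparse on/off pattern
    return mag * mask

# Tied-weights autoencoder: encode with W, decode with W.T, then bias + ReLU.
W = torch.nn.Parameter(0.1 * torch.randn(n_features, n_hidden))
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-3)

for step in range(10_000):
    x = sample_batch()
    x_hat = torch.relu(x @ W @ W.T + b)
    loss = ((x - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# If superposition shows up, more than n_hidden rows of W end up with large norm,
# and W @ W.T has non-zero off-diagonal interference between different features.
with torch.no_grad():
    print("feature norms:", W.norm(dim=1))
    print("interference (W @ W.T):\n", W @ W.T)
```

Sweeping feature_prob and the feature/bottleneck ratio would then cover the attribute variations mentioned above.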
I’m not at liberty to share it directly but I am aware that Anthropic have a draft of small toy models with hand-coded synthetic data showing superposition very cleanly. They go as far as saying that searching for an interpretable basis may essentially be mistaken.
Interesting idea!
Not confused, just optimised to handle data of the kind seen in training, and with limited ability to generalise beyond that, compared to human vision.
Yeah I agree with that. But there is also a sense in which some (many?) features will be inherently sparse.
A token is either the first one of a multi-token word or it isn’t.
A word is either a noun, a verb or something else.
A word belongs to language LANG and not to any other language (or it has other meanings in those languages).
An H×W image can only contain so many objects, which can each only contain so many sub-aspects.
I don’t know what it would mean to go “out of distribution” in any of these cases.
This means that any network that has an incentive to conserve parameter usage (however we want to define that) might want to use superposition.