A different way of stating the usual Anthropic-esque concept of features that I find useful: features are the things that get composed when a neural network takes advantage of compositionality. This isn’t begging the question; you just can’t answer it without knowing the data distribution and the computational strategy the model has settled on after training.
For instance, the reason the neurons aren’t always the features, even though it’s natural to write the activations (which then get “composed” into the inputs to the next layer) in the neuron basis, is that if your data lies only on a manifold within the space of all possible values, the local coordinates of that manifold may rarely line up with the neuron basis.
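To make the manifold picture concrete, here is a minimal numpy sketch (a toy constructed purely for illustration, not derived from any real model): two sparse latent features drive a two-neuron layer through directions that are rotated off the neuron axes, so each neuron reads out a mixture of both features, while projecting onto the feature directions un-mixes them exactly.

```python
# Minimal toy sketch (illustrative assumptions: 2 sparse latent features,
# a 2-neuron layer, feature directions chosen as an arbitrary rotation).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two sparse latent features: the structure the data distribution actually has.
# Each is active ~10% of the time, with exponentially distributed magnitude.
f = rng.exponential(1.0, size=(n, 2)) * (rng.random((n, 2)) < 0.1)

# Feature directions in activation space: an orthonormal basis that is
# deliberately NOT axis-aligned with the neurons.
W = np.array([[0.8, 0.6],
              [0.6, -0.8]])  # rows are the two feature directions

acts = f @ W  # neuron activations: each neuron mixes both latent features

# Neither neuron tracks a single feature...
print(np.corrcoef(acts[:, 0], f[:, 0])[0, 1])  # ~0.8: high, but well below 1
print(np.corrcoef(acts[:, 0], f[:, 1])[0, 1])  # ~0.6: clearly nonzero too

# ...but the feature directions (the manifold's local coordinates here)
# recover the latents exactly, since W is orthonormal (W.T == inverse of W):
print(np.allclose(acts @ W.T, f))  # True
```

In this toy, the “features” are the rows of W rather than the neurons, precisely because the data only ever occupies the sparse structure spanned by those directions; nothing about the neuron basis is privileged once you know that.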