This example is only meant to illustrate how one could achieve this encoding; it's not how an actual autoencoder would work. An actual NN might not even use superposition for the data I described, and it might need some other setup to elicit this behavior.
But to me it sounded like you think superposition is nothing but the network being confused, whereas I think it can be the correct way to reconstruct the features to a reasonable degree.
Not confused, just optimised to handle data of the kind seen in training, and with limited ability to generalise beyond that, compared to human vision.
Yeah I agree with that. But there is also a sense in which some (many?) features will be inherently sparse.
A token either is the first one of a multi-token word or it isn't.
A word is either a noun, a verb or something else.
A word belongs to language LANG and not to any other language (or it has different meanings in those languages).
An H×W image can only contain so many objects, which can only contain so many sub-aspects.
I don’t know what it would mean to go “out of distribution” in any of these cases.
This means that any network with an incentive to conserve parameter usage (however we want to define that) might want to use superposition, as in the sketch below.
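To make that concrete, here is a minimal numpy sketch of the geometry (the dimensions, seed, and feature indices are illustrative assumptions, not anything from a real model): many sparse features each get a random direction in a much smaller space, active features are superposed by summation, and each feature is read back by projection. It works only because few features are active at once.

```python
# Toy sketch of superposition: more features than dimensions,
# recoverable because the features are sparse. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_features = 512  # many features...
d = 128           # ...stored in far fewer dimensions

# One random unit direction per feature; random high-dimensional
# vectors are nearly orthogonal, so cross-talk is small.
W = rng.normal(size=(n_features, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def encode(x):
    # Superpose the active features: weighted sum of their directions.
    return x @ W  # shape (d,)

def decode(h):
    # Read each feature back by projecting onto its direction.
    return W @ h  # shape (n_features,)

# A sparse input: only 3 of 512 features are "on".
x = np.zeros(n_features)
x[[3, 17, 42]] = 1.0

x_hat = decode(encode(x))
print(x_hat[[3, 17, 42]].round(2))  # active features read back near 1
print(np.abs(x_hat[x == 0]).max().round(2))  # cross-talk on inactive features stays well below the active values
```

If many features fired simultaneously, the interference terms would swamp the signal, which is the sense in which sparsity is doing the real work here.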