Yes, it makes a lot of sense that linearity would come hand in hand with generalization. I'd recently been reading Krotov on non-linear Hopfield networks but hadn't made the connection. They say they're planning to use them to create more theoretically grounded transformer architectures, and your comment makes me think these wouldn't succeed. But then the article also says:
This idea has been further extended in 2017 by showing that a careful choice of the activation function can even lead to an exponential memory storage capacity. Importantly, the study also demonstrated that dense associative memory, like the traditional Hopfield network, has large basins of attraction of size O(N_f). This means that the new model continues to benefit from strong associative properties despite the dense packing of memories inside the feature space.
which perhaps corresponds to them also being able to find good linear representations and to mix generalization with memorization like a transformer?
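For concreteness, here's a minimal sketch of the kind of dense associative memory retrieval the quote is describing, using the attention-like update from the continuous "modern Hopfield" formulation (the exponential-interaction case). The function names, beta value, N_f, and toy data are my own choices for illustration, not from the article:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def retrieve(query, patterns, beta=8.0, steps=3):
    """Dense associative memory retrieval sketch: the update
    x <- patterns.T @ softmax(beta * patterns @ x)
    is exactly an attention readout over the stored patterns,
    which is why these models connect to transformers."""
    x = query.copy()
    for _ in range(steps):
        x = patterns.T @ softmax(beta * patterns @ x)
    return x

rng = np.random.default_rng(0)
N_f = 64                                    # number of feature dimensions (hypothetical size)
patterns = rng.standard_normal((16, N_f))   # 16 stored memories
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)

# Corrupt one stored pattern and check that it is recovered.
noisy = patterns[3] + 0.3 * rng.standard_normal(N_f)
recovered = retrieve(noisy, patterns)
print(np.argmax(patterns @ recovered))      # expect 3
```

The sharp softmax (large beta) gives the exponential-capacity, strongly attracting behavior the article mentions, while the readout itself is a weighted mixture of memories, which is roughly where I see the room for the generalization/memorization mix.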