Doesn’t weight decay/L2 regularization tend to get rid of the “singularities”, though? There are no longer directions you can move in that change the model weights while leaving the loss the same, because you are altering the loss function to prefer weights of lower norm. A classic example of L2 regularization removing these “singularities”/flat directions is the L2-regularized support vector machine with hinge loss, which is what motivated me to check this for neural nets. I tried some numerical experiments and found zero eigenvalues of the Hessian at the minimum of a one-hidden-layer tanh net, but when I added L2 regularization to the loss they went away. We use weight decay to train most things in practice, and it improves generalization, so if your results depend on the zero eigenvalues of the Hessian, wouldn’t that falsify them?
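For concreteness, here is a minimal sketch of the kind of check being described (my own toy setup, not the commenter’s code): an overparameterized one-hidden-layer tanh “student” fit to data from a one-unit tanh “teacher”, with the Hessian spectrum inspected at the end of training, with and without an L2 penalty. The hidden width, data, learning rate, and λ = 1e-2 are all illustrative choices, and whether the smallest eigenvalues come out numerically zero in the unregularized case depends on how close the optimizer gets to the zero-loss set.

```python
# Sketch: compare the Hessian spectrum at a minimum of a one-hidden-layer
# tanh net, with and without an L2 penalty. Toy setup, illustrative only.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k_x, k_t, k_w = jax.random.split(key, 3)

# Data from a 1-unit tanh "teacher": the 4-unit student is overparameterized,
# so its zero-loss set has flat directions (e.g. a unit whose outgoing weight
# is zero leaves its incoming weights free).
X = jax.random.normal(k_x, (128, 2))
teacher = jax.random.normal(k_t, (2, 1))
y = jnp.tanh(X @ teacher)

H_UNITS = 4  # hidden width of the student (illustrative choice)

def unpack(w):
    W1 = w[: 2 * H_UNITS].reshape(2, H_UNITS)   # input -> hidden weights
    W2 = w[2 * H_UNITS:].reshape(H_UNITS, 1)    # hidden -> output weights
    return W1, W2

def mse(w):
    W1, W2 = unpack(w)
    return jnp.mean((jnp.tanh(X @ W1) @ W2 - y) ** 2)

def mse_l2(w, lam=1e-2):
    return mse(w) + 0.5 * lam * jnp.sum(w ** 2)

def minimize(loss, w, lr=0.05, steps=20_000):
    # Plain full-batch gradient descent, enough for a toy problem.
    g = jax.jit(jax.grad(loss))
    for _ in range(steps):
        w = w - lr * g(w)
    return w

w0 = 0.5 * jax.random.normal(k_w, (2 * H_UNITS + H_UNITS,))

for name, loss in [("plain MSE", mse), ("MSE + L2", mse_l2)]:
    w_star = minimize(loss, w0)
    eigs = jnp.linalg.eigvalsh(jax.hessian(loss)(w_star))  # ascending order
    print(name, "loss:", float(loss(w_star)),
          "smallest Hessian eigenvalues:", eigs[:4])
```

If the sketch behaves like the experiment described, the smallest eigenvalues should sit near zero in the unregularized case and be bounded away from zero (on the order of λ) once the penalty is added.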
Yep, regularization tends to break these symmetries.
I think the best way to think of this is that it causes the valleys to become curved — i.e., regularization helps the neural network navigate the loss landscape. In its absence, moving across these valleys relies on the stochasticity of SGD, and that drift grows only slowly, as the square root of time.
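To spell out the square-root scaling (my paraphrase, not a quote from the thread): along an exactly flat direction the gradient term vanishes, so the SGD updates projected onto that direction are pure minibatch noise, and summing roughly independent noise steps gives a random walk,

$$\mathbb{E}\,\lVert w_t - w_0\rVert^2 \approx t\,\eta^2\sigma^2 \quad\Longrightarrow\quad \lVert w_t - w_0\rVert \sim \eta\,\sigma\sqrt{t},$$

where $\eta$ is the learning rate and $\sigma^2$ the per-step variance of the gradient noise along that direction.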
That said, regularization only adds a convex term to the landscape, which doesn’t change the important geometrical features. In its presence, we should still expect the singularities of the corresponding regularization-free landscape to have a major macroscopic effect.
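Concretely, if the penalty is $\frac{\lambda}{2}\lVert w\rVert^2$ (weight decay with strength $\lambda$), it only adds a multiple of the identity to the Hessian at every point:

$$\nabla^2\!\Big(L(w) + \tfrac{\lambda}{2}\lVert w\rVert^2\Big) = \nabla^2 L(w) + \lambda I,$$

so a direction that was exactly flat for $L$ picks up curvature of exactly $\lambda$. The zero eigenvalues disappear, as observed above, but for small $\lambda$ the nearly-flat valleys and singular structure of the unregularized landscape still dominate the large-scale geometry.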
There are also continuous zero-loss deformations in the loss landscape that are not affected by regularization because they aren’t a feature of the architecture but of the “truth”. (See the thread with tgb for a discussion of this, where we call these “Type B”.)