High-dimensional spaces are unlikely to have local optima, and probably don’t have any optima at all.
Just recall what is necessary for a set of parameters to be at an optimum. All the gradients need to be zero, and the Hessian needs to be positive semidefinite. In other words, you need to be surrounded by walls. In 4 dimensions, you can walk through walls. GPT-3 has 175 billion parameters. In 175 billion dimensions, walls are so far beneath your notice that if you observe them at all it is like God looking down upon individual protons.
If there’s any randomness at all in the loss landscape, which of course there is, it’s vanishingly unlikely that all of the millions or billions of directions the model has to choose from will be simultaneously uphill. With so many directions to choose from, you will always have at least one direction along which to escape. It’s just completely implausible that any big model comes close to any optima at all. In fact it’s implausible that an optimum exists. Unless you have a loss function that has a finite minimum value like squared loss (not cross entropy or softmax), or explicit regularization that bounds the magnitude of the values, forces positive curvature, and hurts the performance of the model, all real models diverge.
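A toy way to make that concrete (my own numerical sketch, not anything from the linked post): treat the Hessian at a critical point as a random symmetric matrix and estimate how often every eigenvalue is positive, i.e. how often there is genuinely no downhill direction left.

```python
import numpy as np

# Toy sketch (my own, not from the post): for random symmetric matrices
# standing in for the Hessian at a critical point, estimate how often
# *every* eigenvalue is positive, i.e. no downhill escape direction exists.
rng = np.random.default_rng(0)

def fraction_fully_uphill(dim, trials=2000):
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        hessian = (a + a.T) / 2  # symmetrize
        if np.all(np.linalg.eigvalsh(hessian) > 0):
            count += 1
    return count / trials

for dim in (1, 2, 4, 8, 16):
    print(dim, fraction_fully_uphill(dim))
```

The fraction collapses toward zero as the dimension grows; extrapolating to billions of parameters, “surrounded by walls in every direction” essentially never happens for a random critical point.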
Wait, how is it possible for there to be no optimum at all? There’s only a finite number of possible settings of the 175 billion parameters; there has to be at least one setting such that no other setting has lower loss. (I don’t know much math, I’m probably misunderstanding what optimum means.)
With finite-precision floating point numbers, there must be an optimum.
In the real numbers, there are infinitely many possible settings. It’s possible to have an infinite sequence of ever better solutions. (The trivial example: if the bigger a parameter is, the better the results, then for any setting it’s possible to add 1 and get a better setting.)
It’s possible for the score to behave like the blue line, getting better and better without bound. It’s also possible for the score to behave like the orange line, approaching some finite value that it never reaches.
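As a throwaway numeric sketch of the two behaviours (the particular functions are just stand-ins I picked):

```python
import math

# Stand-in examples (my own choice, just to illustrate the two behaviours):
#   "blue line":   score(x) = x          -> improves without bound
#   "orange line": score(x) = 1 - e^(-x) -> approaches 1 but never reaches it
for x in (1.0, 5.0, 10.0, 20.0):
    blue = x
    orange_gap = math.exp(-x)  # how far the orange score still is from its limit of 1
    print(x, blue, orange_gap)
```

No finite x attains the orange line’s limit, even though the limit is finite; and the blue line has no optimum at all.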
Agreed, there can be an optimum. But I think the intuition here is that it is exceedingly rare to run into a situation where a point is a local optimum in all “directions”.
It is only an “optimum” when all 175 billion parameters are telling you to screw off and stop trying.
There can be N settings that perfectly tie for the best score.
Also, they might sit in neighborhoods that are themselves very, very high scoring, such that incremental progress into any of those neighborhoods puts an optimal function within local reach of the optimizer.
...
One thing that helps me visualize it is to remember circuit diagrams. There are many “computing systems” rich enough and generic enough that several steps of “an algorithm” can be embedded inside of that larger system with plenty of room to spare. Once the model is “big enough” to contain the right algorithm… it doesn’t really pragmatically matter which computing substrate parts are used to calculate which parts of The Correct Function Given The Data.
Another helpful insight is an old chestnut that Ilya always stuck in his talks back in the day (haven’t seen a talk lately (maybe he’s still doing it?)) about how a two-layer neural net can learn integer sorting.
I assume the neural net must discover some algorithm that either “just is radix sort” or else is similar to radix sort, a sorting algorithm that can get away with computing SORT in linear time by assuming a maximum value. (I’ve never personally tried to train a net to do this, nor tried to figure out how and why the weights worked after training them.)
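For concreteness, here is the building block that radix sort repeats per digit: a counting sort, which runs in linear time precisely because it assumes a known maximum value. (Whether a trained net implements anything like this is pure speculation on my part.)

```python
def counting_sort(values, max_value):
    # Linear-time sort of bounded non-negative integers: tally how many times
    # each value occurs, then read the tallies back out in order.
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_value=9))
# -> [1, 1, 2, 3, 4, 5, 6, 9]
```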
But basically: these systems can do fully generic computation and can learn which part of “fully generic computation” is approximately The Correct Function based on the labeled data.
...
Also, they often have some regularization built in (because it often makes training go faster?) so that there is a penalty for “complicated models”. This makes overfitting much less common in practice, especially on real problems where there’s really something non-trivial to learn hiding in the data.
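By “penalty for complicated models” I mean something like plain L2 weight decay; a minimal sketch (the names and the toy data loss are my own, not any particular framework’s API):

```python
import numpy as np

# Minimal L2 ("weight decay") sketch: the penalty term grows with the squared
# magnitude of the weights, so the optimizer is pushed toward simpler models.
def regularized_loss(weights, data_loss, lam=1e-2):
    return data_loss(weights) + lam * np.sum(weights ** 2)

def regularized_grad(weights, data_grad, lam=1e-2):
    # The penalty contributes 2 * lam * weights to the gradient,
    # continually shrinking large weights during training.
    return data_grad(weights) + 2 * lam * weights

w = np.array([3.0, -2.0])
print(regularized_loss(w, data_loss=lambda w: np.sum((w - 1.0) ** 2)))
```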
The lower level “stuff out of which the learning is made” becomes less important eventually, due to that “stuff” being optimized to be sufficient to learn whatever the learning substrate is being “asked to learn” (in the form of extensive examples of correct computation of the function).
The “lower level learning stuff” is not entirely unimportant <3
There’s still a question of cost. You want to do it FAST and CHEAP if that is also possible, once “computing the right thing at all” is achievable <3
I think the linked post is assuming that the parameters are real numbers.
I’m still confused by “Unless you have a loss function that has a finite minimum value like squared loss (not cross entropy or softmax)” because cross entropy is bounded below at zero.
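For what it’s worth, a quick numeric check of the part I find confusing (my own sketch): softmax cross entropy is indeed bounded below by zero, but for any finite logits it stays strictly positive, approaching zero like the orange line above rather than attaining it. Maybe that non-attained infimum is the distinction the post is gesturing at.

```python
import numpy as np

# Quick check (my own sketch): softmax cross entropy for a correctly classified
# example is bounded below by 0 but strictly positive for any finite logits;
# scaling the logits up only pushes it toward 0 without ever reaching it.
def softmax_xent(logits, target):
    shifted = logits - np.max(logits)           # for numerical stability
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    return -log_probs[target]

for scale in (1, 10, 20, 30):
    logits = scale * np.array([1.0, 0.0, 0.0])  # correct class is index 0
    print(scale, softmax_xent(logits, target=0))
```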
Source: https://moultano.wordpress.com/2020/10/18/why-deep-learning-works-even-though-it-shouldnt/