There can be N settings that perfectly tie for the best score.
Also, those settings tend to sit in neighborhoods that are themselves very, very high scoring, such that incremental progress into any one of those neighborhoods puts an optimal function within local reach of the optimizer.
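(To make the “perfect tie” thing concrete: permuting the hidden units of a little two-layer net gives a genuinely different point in weight space that computes the exact same function, and therefore gets the exact same score. A minimal numpy sketch, with sizes made up on the spot:)

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer net: y = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(8, 4))   # hidden-by-input weights
W2 = rng.normal(size=(1, 8))   # output-by-hidden weights

def net(x, W1, W2):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Shuffle the hidden units: a *different* weight setting...
perm = rng.permutation(8)
W1_p, W2_p = W1[perm, :], W2[:, perm]

# ...that computes exactly the same function, so it ties on any score.
x = rng.normal(size=4)
print(net(x, W1, W2), net(x, W1_p, W2_p))
```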
...
One thing that helps me visualize it is to remember circuit diagrams. There are many “computing systems” rich enough and generic enough that several steps of “an algorithm” can be embedded inside of that larger system with plenty of room to spare. Once the model is “big enough” to contain the right algorithm… it doesn’t really pragmatically matter which computing substrate parts are used to calculate which parts of The Correct Function Given The Data.
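(A toy version of the “room to spare” point: a bigger net can carry a smaller net’s circuit in one corner of its weight matrices, with everything else zeroed out, and compute the identical function. Another hedged numpy sketch, arbitrary sizes again:)

```python
import numpy as np

rng = np.random.default_rng(1)

# A "small" net that computes some target function.
w1 = rng.normal(size=(4, 3))
w2 = rng.normal(size=(1, 4))
small = lambda x: w2 @ np.maximum(w1 @ x, 0.0)

# A "bigger" net with spare hidden units: drop the small net into one
# corner of the weight matrices and zero out the rest.
W1 = np.zeros((16, 3)); W1[:4, :] = w1
W2 = np.zeros((1, 16)); W2[:, :4] = w2
big = lambda x: W2 @ np.maximum(W1 @ x, 0.0)

# Same outputs: the small algorithm fits inside the big substrate
# with room to spare, and the unused parts don't matter.
x = rng.normal(size=3)
print(small(x), big(x))
```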
Another helpful insight is an old chestnut that Ilya always stuck in his talks back in the day (I haven’t seen one of his talks lately, so maybe he still does it?) about how a two-layer neural net can learn integer sorting.
I assume the neural net must discover some algorithm that either “just is radix sort” or else is similar to radix sort, a sorting algorithm that gets away with running in linear time because the keys have a bounded maximum value (and hence a fixed number of digits). (I’ve never personally tried to train a net to do this, nor tried to figure out how and why the weights worked after training one.)
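(For what the experiment would even look like: a hedged PyTorch sketch of training a small MLP to map lists of small integers to their sorted versions. The sizes and hyperparameters here are invented for illustration; I’m not claiming this matches the setup from the talks, or that the trained weights would turn out radix-sort-shaped.)

```python
import torch
import torch.nn as nn

N, MAX_VAL = 8, 16   # list length and maximum integer value (both invented)

# A small MLP asked to output the sorted version of its input list.
model = nn.Sequential(
    nn.Linear(N, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.randint(0, MAX_VAL, (128, N)).float()   # random integer lists
    y, _ = torch.sort(x, dim=1)                       # labels: the same lists, sorted
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Eyeball one example after training.
test = torch.randint(0, MAX_VAL, (1, N)).float()
print(test, model(test).round())
```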
But basically: these systems can do fully generic computation and can learn which part of “fully generic computation” is approximately The Correct Function based on the labeled data.
...
Also, they generally have some regularization built in (because it often makes training go better or faster?) so that there is a penalty for “complicated” models. This makes overfitting much less common in practice, especially on real problems where there’s something genuinely non-trivial hiding in the data to be learned.
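(Concretely, the most common built-in penalty is weight decay, i.e. an L2 penalty on the weights. Here’s a hedged PyTorch sketch of that idea, with arbitrary numbers; the handwritten version matches the built-in flag only up to a constant factor, and only for plain SGD:)

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)

# Built-in version: the optimizer's weight_decay flag shrinks the weights each step.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-4)

# Handwritten version: add an L2 penalty on the weights to the loss,
# so "complicated" (large-weight) models score worse.
def loss_fn(pred, target, lam=1e-4):
    mse = nn.functional.mse_loss(pred, target)
    l2 = sum((p ** 2).sum() for p in model.parameters())
    return mse + lam * l2

x, y = torch.randn(32, 10), torch.randn(32, 1)
loss_fn(model(x), y).backward()
opt.step()
```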
The lower-level “stuff out of which the learning is made” eventually becomes less important, because that “stuff” gets optimized until it is sufficient to learn whatever the learning substrate is being “asked to learn” (in the form of extensive examples of correct computations of the function).
The “lower level learning stuff” is not entirely unimportant <3
There’s still a question of cost. You want to do it FAST and CHEAP if that is also possible, once “computing the right thing at all” is achievable <3