Going by that description, it is much, much less important than residual learning, because hyperparameter optimization is not new. There are a lot of approaches: grid search, random search, Gaussian processes. Some hyperparameter optimization baked into MSR’s deep learning framework would save researchers some time and effort, certainly, but I don’t know that it would’ve made any big difference unless they have something quite unusual going on.
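(To make “not new” concrete, here’s a minimal random-search sketch in plain Python; `train_and_evaluate` is a hypothetical function mapping a config to validation loss, and the search ranges are purely illustrative, not anything from MSR’s framework:)

```python
import random

def random_search(train_and_evaluate, n_trials=20, seed=0):
    """Sample random hyperparameter configs and keep the best one."""
    rng = random.Random(seed)
    best_score, best_config = float("inf"), None
    for _ in range(n_trials):
        config = {
            # Learning rates vary over orders of magnitude, so sample log-uniformly.
            "learning_rate": 10 ** rng.uniform(-4, -1),
            "dropout": rng.uniform(0.0, 0.5),
            "hidden_units": rng.choice([128, 256, 512, 1024]),
        }
        score = train_and_evaluate(config)  # hypothetical: returns validation loss
        if score < best_score:
            best_score, best_config = score, config
    return best_config, best_score
```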
(I liked one paper which took a Bayesian multi-armed bandit approach and treated error curves as partial information about final performance: it would switch between different networks being trained based on performance, regularly ‘freezing’ and ‘thawing’ networks as the probability that each network would become the best performer changed with information from additional mini-batches/epochs.) Probably the single coolest result: last year some researchers showed that it is possible to somewhat efficiently backpropagate on hyperparameters! Hyperparameters just become more parameters to learn, so you can load up on all sorts of stuff without worrying about making your hyperparameter optimization futile or having to train a billion times. That would both save people a lot of time (for vanilla networks) and allow exploring extremely complicated, heavily parameterized families of architectures, which would be a big deal. Unfortunately, it’s still not efficient enough for the giant networks we want to train. :(
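To show what “backpropagating on hyperparameters” means, here is a toy sketch of my own (not that paper’s algorithm, which uses memory tricks to make this feasible at scale): run a few SGD steps on a 1-D quadratic training loss, store the whole weight trajectory, then reverse through it to get the exact gradient of the validation loss with respect to the learning rate, and do gradient descent on the learning rate itself:

```python
# Toy hypergradient: d(validation loss)/d(learning rate) by backprop through SGD.
# Training loss L(w) = 0.5 * (w - t)**2, so grad(w) = w - t and grad'(w) = 1.

def hypergradient(eta, w0, t=3.0, steps=5):
    # Forward pass: run SGD and store the whole weight trajectory.
    ws = [w0]
    for _ in range(steps):
        ws.append(ws[-1] - eta * (ws[-1] - t))
    val_loss = 0.5 * (ws[-1] - t) ** 2  # validation loss on the final weights

    # Reverse pass through w_{k+1} = w_k - eta * g(w_k), accumulating dV/d(eta).
    d_w = ws[-1] - t      # dV/dw_K
    d_eta = 0.0
    for k in reversed(range(steps)):
        g = ws[k] - t        # g(w_k)
        d_eta += d_w * (-g)  # dw_{k+1}/d(eta) = -g(w_k)
        d_w *= 1.0 - eta     # dw_{k+1}/dw_k = 1 - eta * g'(w_k), g' = 1 here
    return val_loss, d_eta

# The hyperparameter is now just another parameter: gradient descent on eta.
eta = 0.05
for _ in range(50):
    loss, d_eta = hypergradient(eta, w0=0.0)
    eta -= 0.01 * d_eta  # hyper-learning-rate, chosen by hand (turtles all the way down)
print(eta, loss)
```

The catch is visible even in the toy: the reverse pass needs the full trajectory of weights, which is exactly what you can’t afford to store for a giant network, hence the “somewhat efficiently”.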
The key point is that machine learning starts to happen at the hyper-parameter level. Which is one more step toward systems that optimize themselves.
A step which was taken a long time ago, and one which does not seem to have played much of a role in recent developments; for the most part, people don’t bother with extensive hyperparameter tuning. The gains have come from better initialization, better algorithms like dropout or residual learning, and better architectures, but not from hyperparameters.