[I am a total noob on the history of deep learning & AI]
From a cursory glance, I find Schmidhuber's take convincing.
He argues that the (vast) majority of conceptual & theoretical advances in deep learning were understood decades earlier, often by Schmidhuber and his collaborators themselves.
Moreover, he argues that many of the current leaders in the field fail to properly credit those earlier discoveries.
It is unfortunate that the above poster is anonymous. It is very clear to me that there is a big difference between theoretical & conceptual advances and the great recent practical advances that come from stacking MOAR layers.
It is possible that the remaining steps to AGI consist of just stacking MOAR layers: compute + data + comparatively small advances in data/compute efficiency + something something RL metalearning will produce an AGI. Certainly, not all problems can be solved [fast] by incremental advances and/or by iterating on previous attempts. Some can. It may be the unfortunate reality that creating [but not understanding!] AGI is one of them.