there are the Schmidhuber Scholarpedia articles in some cases, but aside from being outdated, they are, well, Schmidhuber.
I hate Schmidhuber with a passion, because I can smell everything he touches on Wikipedia, and it is always terrible.
Sometimes when I read pages about AI, I see things that almost certainly came from him, or one of his fans. I struggle to articulate exactly what Schmidhuber’s kind of writing feels like, but perhaps this will suffice: “People never give the right credit to anything. Everything of importance was either published by my research group first and miscredited to someone else later, or something like that. Deep learning? It was done not by Hinton but by Amari; and not by Amari, but by Ivakhnenko. The more obscure the originator, the better, because it reveals how bad people are at credit assignment: if they were better at it, the real originators would not have been so obscure.”
For example, LSTM actually did originate with Schmidhuber… and it is in fact credited to Schmidhuber (… or maybe Hochreiter?). But then, by his account, GANs should also be credited to Schmidhuber, and so should Transformers. He (or his fans) keep trying to put the phrase “internal spotlights of attention” into the Transformer page, and I keep removing it. He wants the credit so much that he resorts to argument-by-punning: renaming “fast weight programmers” to “linear Transformers”, and quoting “internal spotlights of attention” out of context, just to fortify the argument with a pun! I can do puns too! Rosenblatt (1962) even wrote about “back-propagating errors” in an MLP with a hidden layer. So what?
I actually took Schmidhuber’s claim seriously and carefully rewrote the page on Ivakhnenko’s group method of data handling (GMDH), giving all the mathematical details, so that one may evaluate the claim for oneself instead of taking Schmidhuber’s word for it. A few months later, someone manually reverted everything I wrote! What does the method read like according to a partisan of Ivakhnenko?
The development of GMDH consists of a synthesis of ideas from different areas of science: the cybernetic concept of “black box” and the principle of successive genetic selection of pairwise features, Godel’s incompleteness theorems and the Gabor’s principle of “freedom of decisions choice”, and the Beer’s principle of external additions. GMDH is the original method for solving problems for structural-parametric identification of models for experimental data under uncertainty… Since 1989 the new algorithms (AC, OCC, PF) for non-parametric modeling of fuzzy objects and SLP for expert systems were developed and investigated. Present stage of GMDH development can be described as blossom out of deep learning neuronets and parallel inductive algorithms for multiprocessor computers.
Well excuse me, “Godel’s incompleteness theorems”? “the original method”? Also, I thought “fuzzy” had gone out of fashion since the 1980s. I actually once tried to learn fuzzy logic and gave up after failing to see what the big deal was. The whole field is filled with such pompous and self-important terminology, as if the lack of substance must be made up for by the heights of spiritual exhortation. Why say “combined” when you could say “consists of a synthesis of ideas from different areas of science”?
As a side note, such turgid prose, filled with long noun phrases, is pretty common in Soviet writing. I once read that this kind of massive noun phrase served a political purpose, but I don’t remember what it was.
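Since my rewrite with the mathematical details is gone, here is roughly what the multilayered GMDH algorithm amounts to once you strip away the synthesis-of-ideas talk. This is my own minimal sketch, not Ivakhnenko’s notation: the function names and the `width`/`max_layers` parameters are illustrative choices of mine, and the “external criterion” here is just mean squared error on a held-out validation split.

```python
import numpy as np
from itertools import combinations

# The classic Ivakhnenko polynomial for one pair of inputs:
# y ≈ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2
def design(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def gmdh_best_val_error(X_tr, y_tr, X_val, y_val, width=8, max_layers=5):
    """Bare-bones multilayered GMDH: each layer fits one quadratic neuron per
    pair of surviving features by least squares on the training split, ranks
    the neurons by error on the validation split (the "external criterion"),
    keeps the best `width`, and stops once the best error stops improving."""
    tr, val = X_tr, X_val
    best = np.inf
    for _ in range(max_layers):
        scored = []
        for i, j in combinations(range(tr.shape[1]), 2):
            coef, *_ = np.linalg.lstsq(design(tr[:, i], tr[:, j]), y_tr, rcond=None)
            err = np.mean((design(val[:, i], val[:, j]) @ coef - y_val) ** 2)
            scored.append((err, i, j, coef))
        scored.sort(key=lambda s: s[0])
        if scored[0][0] >= best:  # validation error stopped falling: stop growing
            break
        best = scored[0][0]
        keep = scored[:width]
        # Outputs of the surviving neurons become the next layer's inputs.
        tr = np.column_stack([design(tr[:, i], tr[:, j]) @ c for _, i, j, c in keep])
        val = np.column_stack([design(val[:, i], val[:, j]) @ c for _, i, j, c in keep])
    return best
```

A real implementation would also record the surviving neurons so the fitted model could be applied to new data; the point here is only that, underneath the prose, this is layered polynomial regression with validation-based selection, and one can judge the deep-learning priority claim on that basis.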