The consensus was that ANNs were dead, that SVM+kernel methods were superior, and that few other ML techniques mattered. Actually, the problem was simply that people had been training ANNs improperly.
Well… I suppose that characterization is true, but only if you allow the acronym “ANN” to designate a really quite broad class of algorithms.
It is true that multilayer perceptrons trained with backpropagation are inferior to SVMs. It is also true that deep belief networks trained with some kind of Hintonian contrastive-divergence algorithm are probably better than SVMs. If you tag both the multilayer perceptrons and the deep belief networks with the “ANN” label, then it is true that the consensus in the field reversed itself. But I think it is more precise simply to say that people invented a whole new type of learning machine.
(I’m sure you know all this, I’m commenting for the benefit of readers who are not ML experts).