Oh there are many examples of this throughout science.
In my own area (machine learning), a decade ago there was a huge clique of researchers whose “consensus” was that ANNs were dead, SVM+kernel methods were superior, and that few other ML techniques mattered. Actually, the problem was simply that they were training ANNs improperly. Later researchers showed how to train ANNs properly, and the work of the Toronto machine intelligence group in particular established that ANNs are quite superior to SVMs for many tasks.
In econometrics, subsequence time series (STS) clustering was widely thought to be a good approach for analyzing market movements. After decades of work and hundreds of papers on this technique, Keogh et al. showed in 2005 that the results of STS clustering are actually indistinguishable from noise!
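To make that result concrete, here is a minimal sketch (my own illustrative parameters and setup, not Keogh et al.’s exact experiment): run k-means over the overlapping sliding windows of a pure random walk, and the cluster centers still come out as smooth, sine-like “patterns”. A random walk contains no repeating structure, so anything STS clustering reports here is an artifact of the overlapping windows, which is why its output is indistinguishable from what you get on noise.

    # Sketch of the Keogh et al. (2005) observation, under assumed
    # parameters (window length, k, and random-walk input are my choices).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    series = np.cumsum(rng.standard_normal(5000))  # random walk: no repeating patterns

    # STS clustering: cluster every overlapping subsequence of length w.
    w = 64
    windows = np.lib.stride_tricks.sliding_window_view(series, w)

    # Z-normalize each window, as is standard in time-series clustering.
    windows = (windows - windows.mean(axis=1, keepdims=True)) / \
              (windows.std(axis=1, keepdims=True) + 1e-12)

    centers = KMeans(n_clusters=4, n_init=10, random_state=0).fit(windows).cluster_centers_

    # The centers come out as smooth, roughly sinusoidal shapes: an
    # artifact of the overlapping windows, not structure in the data.
    for c in centers:
        print(np.round(c[::8], 2))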
Another one, in physics, was pointed out by Lee Smolin in his book, The Trouble with Physics. In string theory it was the common, but wrong, consensus opinion that Mandelstam had proven string theory finite. Actually, he had only eliminated some particular forms of infinities. The work on establishing string theory as finite is still ongoing.
> ANNs were dead, SVM+kernel methods were superior, and that few other ML techniques mattered. Actually, the problem was simply that they were training ANNs improperly.
Well… I suppose that characterization is true, but only if you allow the acronym “ANN” to designate a quite broad class of algorithms.
It was true that multilayer perceptrons trained with backpropagation were inferior to SVMs. It is also true that deep belief networks trained with some kind of Hintonian contrastive divergence algorithm are probably better than SVMs. If you tag both the multilayer perceptrons and the deep belief networks with the “ANN” label, then it is true that the consensus in the field reversed itself. But I think it is more precise just to say that people invented a whole new type of learning machine.
(I’m sure you know all this; I’m commenting for the benefit of readers who are not ML experts.)
This is a different type of problem. OP is talking about people claiming there is a consensus when there is actually a lot of disagreement. You’re talking about times when there was (some kind of) a consensus, but that consensus was wrong.
That’s not clear to me from reading the comment. passive_fist, can you clarify?
In all of the cases I described except the last, it wasn’t a consensus at all, but a perceived consensus within a subset of the community.
I apologize, then; that wasn’t how I read it. When you said “huge clique” and “widely thought,” I thought you were saying that the majority of the field fell into those groups.