Why don’t you write a post on how it is naive? Do you actually know something about practical application of these methods?
Yes, if experts say that they use quantifiable data X, Y, and Z to predict outcomes, then the fact that simple algorithms beat them on only that data might not matter if the experts really use other data as well. But there is lots of evidence that experts are terrible with non-quantifiable data, such as believing that interviews are useful in hiring. Tetlock finds that ecologically valid use of these trivial models beats experts in politics.
When based on the same evidence, the predictions of SPRs are at least as reliable as, and are typically more reliable than, the predictions of human experts for problems of social prediction.
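The "trivial models" in question can be as simple as Dawes-style improper linear models: standardize each quantifiable cue and sum with equal weights. A minimal sketch in Python, where the cue names X, Y, Z and the applicant data are hypothetical placeholders for whatever quantifiable inputs the experts claim to use:

```python
# Sketch of a unit-weighted statistical prediction rule (SPR):
# z-score each quantifiable cue, then sum with equal weights.

def standardize(values):
    """Return z-scores for a list of numbers."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    sd = var ** 0.5 or 1.0  # avoid division by zero for constant cues
    return [(v - mean) / sd for v in values]

def unit_weight_scores(cases, cue_names):
    """Score each case by summing its standardized cues with equal weights."""
    cols = {name: standardize([c[name] for c in cases]) for name in cue_names}
    return [sum(cols[name][i] for name in cue_names) for i in range(len(cases))]

# Hypothetical applicants scored on three quantifiable cues X, Y, Z.
applicants = [
    {"X": 3.9, "Y": 720, "Z": 2},
    {"X": 3.1, "Y": 650, "Z": 5},
    {"X": 3.5, "Y": 700, "Z": 1},
]
scores = unit_weight_scores(applicants, ["X", "Y", "Z"])
ranked = sorted(range(len(applicants)), key=lambda i: -scores[i])
print(ranked)  # indices of applicants, best score first
```

The point of the unit-weight design is robustness: with no fitted coefficients there is nothing to overfit, which is partly why such rules hold up against expert judgment on the same cues.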
I’m reminded of one of your early naively breathless articles here on the value of mid-80s and prior expert systems.
this one:
http://lesswrong.com/lw/3gv/statistical_prediction_rules_outperform_expert/
Hmm yes, ‘same evidence’.