A lot of outcomes about which we care deeply are not very predictable. For example, it is not comforting to members of a graduate school admissions committee to know that only 23% of the variance in later faculty ratings of a student can be predicted by a unit weighting of the student’s undergraduate GPA, his or her GRE score, and a measure of the student’s undergraduate institution selectivity—but that is opposed to 4% based on those committee members’ global ratings of the applicant. We want to predict outcomes important to us. It is only rational to conclude that if one method (a linear model) does not predict well, something else may do better. What is not rational—in fact, it’s irrational—is to conclude that this “something else” necessarily exists and, in the absence of any positive supporting evidence, is intuitive global judgment.
Hastie & Dawes, Rational Choice in an Uncertain World, pp. 67-8.
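The "23% of the variance" claim is just a squared correlation (R^2) between a unit-weighted composite of standardized predictors and the later outcome, compared against the R^2 of the judges' global ratings. The sketch below is purely illustrative: it uses synthetic data, arbitrary coefficients, and a noisier stand-in for the clinical rating, so it reproduces only the qualitative ordering, not Dawes's 23%/4% figures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Three standardized predictors (stand-ins for GPA, GRE, institution selectivity).
gpa, gre, selectivity = rng.standard_normal((3, n))

# Synthetic "later faculty rating": a weak linear signal plus a lot of noise.
# Coefficients are arbitrary; only the ordering of the comparison matters.
outcome = 0.3 * gpa + 0.25 * gre + 0.2 * selectivity + rng.standard_normal(n)

def z(x):
    """Standardize to mean 0, SD 1."""
    return (x - x.mean()) / x.std()

# Improper linear model: unit weights on the z-scored predictors.
unit_weighted = z(gpa) + z(gre) + z(selectivity)

# A hypothetical "global rating": sees the same cues but adds idiosyncratic error.
global_rating = unit_weighted + 3.0 * rng.standard_normal(n)

def variance_explained(pred, y):
    """Squared Pearson correlation, i.e. the R^2 of regressing y on pred."""
    r = np.corrcoef(pred, y)[0, 1]
    return r ** 2

print(f"unit-weighted composite: R^2 = {variance_explained(unit_weighted, outcome):.2f}")
print(f"global rating:           R^2 = {variance_explained(global_rating, outcome):.2f}")
```

Both R^2 values come out low, with the unit-weighted composite ahead of the noisier global rating, which is the pattern the second excerpt says people misread as evidence against linear models rather than against global judgment.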
[The results that] (a) the correlation with the model’s predictions is higher than the correlation with clinical prediction, but (b) both correlations are low [...] often lead psychologists to interpret the findings as meaning that while the low correlation of the model indicates that linear modeling is deficient as a method, the even lower correlation of the judges indicates only that the wrong judges were used.
Hastie & Dawes, Rational Choice in an Uncertain World, pp. 67-8.
Related:
Dawes, in JUU:HB p. 392.