“Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form.
You’ve got to have models in your head. And you’ve got to array your experience—both vicarious and direct—on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head.”
-- Charles Munger, http://ycombinator.com/munger.html
Any serious experiment proving this? There are ways to win at life that don't require understanding many things.
And certainly ways to succeed in a school environment without understanding what you’re supposed to be learning about.
Eh. That could have been phrased with less hyperbole. But I don't think he is literally making a prediction about practical life outcomes. I think he (a) implicitly treats understanding as a terminal value here, and (b) uses "fail" to mean not achieving your goals (in this case, understanding). That seems reasonable enough. A decent chunk of folks on lesswrong would value epistemic rationality even if it were proven not to make their lives any better along other axes. In any case, you can drop the "fail" part of the quote and the general idea about mental models still stands.