Man-with-hammer doesn’t seem terribly interesting to me; it’s just one expression of basic rationality flaws. It consists of:
a) publicly committing to your hammer as the cure for all nails, which leads to internal entrenchment
b) defending a continuing stream of status and reward as a leading hammer-man, and attacking alternative tools proposed by young upstarts
Of course, there’s strategic specialization in competitive research—identify whatever secret weapons that you have (but few others working on your problem have), and gamble that those weapons will lead to some advance on the problem (better: develop your arsenal while looking for likely nails).
What’s funny is that the majority of published AI papers are just applying the latest fashionable tools (that aren’t at all unique) to standard tasks. Everybody seems to be grabbing the same hammer. To give a particular example: machine learning methods used in natural language processing have moved through Expectation-Maximization (maximum likelihood), Maximum Entropy (set parameters so that the model’s expected feature counts match those observed in real data), discriminative training (set parameters directly to improve task accuracy), and Bayesian inference by sampling.
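The Maximum Entropy fitting condition mentioned above can be made concrete with a toy sketch. The snippet below trains a binary max-ent model (equivalently, logistic regression) by gradient ascent; the gradient is exactly the empirical feature expectation minus the model's feature expectation, so at convergence the two match. The data and all names here are synthetic and purely illustrative, not taken from any particular NLP task.

```python
import numpy as np

# Synthetic, linearly separable toy data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # 200 examples, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = (X @ true_w > 0).astype(float)            # labels from a hidden linear rule

# Gradient ascent on the log-likelihood of a max-ent (logistic) model.
# The update is (empirical expectation - model expectation) of the features.
w = np.zeros(3)
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # model P(y=1 | x)
    w += 0.5 * X.T @ (y - p) / len(y)

# At (near) convergence, model feature expectations match the data's.
p = 1.0 / (1.0 + np.exp(-(X @ w)))
empirical = X.T @ y / len(y)                  # E_data[f]
model = X.T @ p / len(y)                      # E_model[f]
gap = np.abs(empirical - model).max()
acc = ((X @ w > 0) == (y > 0.5)).mean()
```

The contrast with the other items in the list is in the objective: EM maximizes likelihood with latent variables, discriminative training optimizes task accuracy directly, and Bayesian methods sample over parameters rather than fitting a point estimate.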
Thank you for pointing out Munger’s talk to us.