Eliezer, the ‘outside view’ concept can also naturally be used to describe the work of Philip Tetlock, who found that political/foreign affairs experts were generally beaten by what Robin Dawes calls the “robust beauty of simple linear models.” Experts relying on coherent ideologies (EDIT: hedgehogs) did particularly badly.
Those political events were affected by big systemic pressures that someone could have predicted using inside view considerations, e.g. understanding the instability of the Soviet Union, but in practice acknowledged experts were not good enough at making use of such insights to generate net improvements on average.
Now, we still need to assign probabilities over different models, not all of which should be so simple, but I think it’s something of a caricature to focus so much on the homework/curriculum planning problems.
(It’s foxes who know many things and do better; the hedgehog knows one big thing.)
I haven’t read Tetlock’s book yet. I’m certainly not surprised to hear that foreign affairs “experts” are full of crap on average; their incentives are dreadful. I’m much more surprised to hear that situations like the instability of the Soviet Union could be described and successfully predicted by simple linear models, and I’m extremely suspicious if the linear models were constructed in retrospect. Wasn’t this more like the kind of model-based forecasting that was actually done in advance?
Conversely, if the result is just that hedgehogs did worse than foxes, I’m not surprised, because hedgehogs have worse incentives—internal incentives, that is; there are no external incentives AFAICT.
I have read Dawes on medical experts being beaten by improper linear models (i.e., linear models with made-up unit weights of +1 or −1 and standardized inputs, if I understand correctly) whose factors are the judgments of the same experts on the facets of the problem. This ought to count as the triumph or failure of something, but it’s not quite isomorphic to outside view versus inside view.
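For concreteness, a Dawes-style improper linear model can be sketched in a few lines: standardize each cue, assign each a made-up unit weight of +1 or −1 by the presumed direction of its effect, and sum. The cue names, values, and weight signs below are entirely hypothetical illustrations, not data from Dawes.

```python
def standardize(xs):
    """Z-score a list of raw cue values (population standard deviation)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

def improper_linear_score(cues, signs):
    """Score each case by summing standardized cues times unit weights.

    cues:  dict of cue name -> list of raw values (one per case)
    signs: dict of cue name -> +1 or -1 (presumed direction of effect)
    """
    z = {name: standardize(vals) for name, vals in cues.items()}
    n_cases = len(next(iter(cues.values())))
    return [sum(signs[name] * z[name][i] for name in cues)
            for i in range(n_cases)]

# Hypothetical example: three cases rated by experts on two facets.
cues = {"severity": [3.0, 7.0, 5.0], "stability": [8.0, 2.0, 5.0]}
signs = {"severity": +1, "stability": -1}
scores = improper_linear_score(cues, signs)
```

The point of the construction is that no weights are fitted to outcomes at all; the experts supply only the facet judgments, and the crude equal-weight sum still tends to beat the experts’ own holistic verdicts.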