I’ve pointed before to this very good review of Philip Tetlock’s book, Expert Political Judgment. The review describes the results of Tetlock’s experiments evaluating expert predictions in the field of international politics, where they did very poorly. On average the experts did about as well as random predictions and were badly outperformed by simple statistical extrapolations.
Even after going over the many ways the experts failed in detail, and even though the review is titled “Everybody’s An Expert”, the reviewer concludes, “But the best lesson of Tetlock’s book may be the one that he seems most reluctant to draw: Think for yourself.”
Does that make sense, though? Think for yourself? If you’ve just read an entire book describing how poorly people did who thought for themselves and had a lot more knowledge than you do, is it really likely that you will do better to think for yourself? This advice looks like the same kind of flaw Eliezer describes here, the failure to generalize from knowledge of others’ failures to appreciation of your own.
There’s a better counterargument than that in Tetlock—one of the data points he collected was from a group of university undergraduates, and they did worse than the worst experts, worse than blind chance. Thinking for yourself is the worst option Tetlock considered.
Thinking for yourself is the worst option Tetlock considered.
Worse for making predictions, I suppose. But if people never think for themselves, we are never going to have any new ideas. Statistical extrapolation may be great for prediction, but it is poor for originality. So we value thinking for oneself. But the hit-rate is terrible. We have to put up with huge amounts of crap to get the gems. Most Ideas are Wrong, as I like to say when people tell me I’m being “too critical”.
Oh, it’s less general than that—it’s worse for political forecasting specifically. For other kinds of prediction (e.g. will this box fit under this table?), thinking for yourself is often one of the better options.
But, you know, political forecasting is one of the things we often care about. So knowing rules of thumb like “trust the experts, but not very much” is quite helpful.
Actually, when I was rereading the comments and saw your mention of Tetlock, I thought you would point out the bit where he noted that the hedgehog predictors made worse predictions within their area of expertise than outside it.
Fantastic article. The problem is that now I have a pet theory with which to dismiss anything said by a TV pundit with whom I disagree: I’d be better off guessing myself or at random than listening to them.
Maybe I can estimate how many variables various conclusions rest on, and how much uncertainty is in each, in order to estimate the total uncertainty in various possible outcomes. I’ll have to pay special attention to any evidence that undercuts my beliefs and assumptions, to try to avoid confirmation bias.
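As a toy illustration of that idea (my own sketch, not anything from Tetlock or the article): if a conclusion rests on several independent assumptions, each with some chance of being wrong, the chance the conclusion survives is the product of the per-assumption reliabilities. The numbers below are made up for the example.

```python
def conclusion_reliability(assumption_reliabilities):
    """Probability that every assumption holds, assuming they are independent.

    Each entry is the probability (0..1) that one assumption is correct.
    """
    p = 1.0
    for r in assumption_reliabilities:
        p *= r
    return p


# A conclusion resting on five assumptions, each 90% likely to be right,
# is itself only about 59% likely to be right:
print(conclusion_reliability([0.9] * 5))  # ~0.59
```

The sobering part is how fast this compounds: confident-sounding conclusions built on long chains of plausible-seeming assumptions can easily be more likely wrong than right.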
That’s great, stop watching TV. TV pundits are an awful source of information.
TV pundits are entertainers. They’re hired less for their insightful commentary and more for their ability to engage an audience.
One of my past life decisions I consistently feel very happy about.