But when a solid majority of the experts agree on a conclusion, and you see flaws in their statistics, I think the default assumption should be that they still know the issue better than you, and that the sum total of the available evidence very likely does support the conclusion, even if the specific statistical arguments you've seen from them are wrong.
Experts have perverse incentives to overstate the value of their expertise. An academic paper concluding “the tools of our profession are more powerful than we thought because....” is much more likely to get published than one concluding the reverse. The flaws you see in expert opinions might mostly be pushing the conclusions in the direction that makes the experts’ advice seem more important.
Examples? There may be a weak effect of this sort, but my best attempts to survey available examples in my Trusting Expert Consensus post suggest it's actually pretty hard for an entire discipline to converge on stupid conclusions.
Because of political correctness in academia, you can’t much generalize from how academics treat IQ to how we handle most other topics.
I dunno, academic subjects seem pretty full of ideological landmines. The two other examples in this post, for instance, had to do with environmentalism, which is pretty politically charged.
Nutrition (on the "evils" of fats vs. carbs), macroeconomics (on the causes of recessions), cultural anthropology (on the causes of economic inequality), and Women's Studies (on the causes of the male/female wage gap).
The academic landmines for IQ contain antimatter, as Larry Summers found out.
I’m pretty sure nutrition is a bad example of this, but I won’t get into why, since I was already planning on making my next post about nutrition.
I’m not sure about cultural anthropology or women’s studies. I don’t know enough about those fields.
On macro: my impression is that there's more disagreement among economists about macro than about other topics. So if macro is an area where economists don't really know what they're talking about, that supports the heuristic that lack of agreement among experts indicates they don't know what's really going on. But on the other hand, my impression is also that economists do actually know things about recessions.
I think this is an extremely bad example. Macro-econ contains some models that are partially predictive and an extensive literature on when to use which hypothesis, and then a few crank theories that simply reject empiricism altogether. The fact that academically ancient or crankish theories are popular among the political class or among blog commentators simply does not mean that the academic study of macroeconomics, when done properly, has nothing accurate and applicable to say about recessions.
Just because I don’t know the macroeconomic literature doesn’t mean I can’t be rational enough to distinguish between the actual literature and the popular misconceptions.