We have to be careful when thinking about “science” as a single entity; the science that physicists do, the science that biologists do, and the science that nutritionists do are each very different.
My take on what happened to nutrition science is that the nutrition science research community settled on a paradigm (controlled dietary studies followed by measuring indirect proxies for health) that was inadequate. They then put out a bunch of studies, each of which was only very weak evidence and had an extraordinarily long list of caveats. This got amplified first by reporting p-values which failed to account for those caveats, and then again by the media; and the result was a bunch of dietary recommendations that were some combination of noise, echo chamber effects, and deliberate manipulation, with barely any signal.
But that isn’t a failure of science, per se. That’s a failure of the research and publication methodologies of one particular field. It is concerning that other fields are using similar publication methodologies (especially the use of p-values), and there are some other fields where there is reason to suspect that the signal to noise ratio is also bad. The lesson I take from nutrition science is that you can’t trust a community’s output just because they call their work “science” and have all the trappings thereof; you have to look closely, see if it makes sense, and see how far above the noise floor their models’ predictions really are.
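To put a number on “barely any signal”, here is a minimal simulation (all the numbers are made up for illustration, not taken from any real study): run many small controlled studies of an intervention that has no real effect on a noisy proxy, and count how many still come out “statistically significant”.

```python
# A sketch, assuming purely hypothetical study sizes and noise levels:
# simulate many small dietary trials in which the intervention has NO true
# effect on the measured proxy, then count how many still clear p < 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_studies = 1000   # hypothetical number of studies in the field
n_per_arm = 30     # small controlled trials
proxy_sd = 1.0     # noisy indirect proxy for health

false_positives = 0
for _ in range(n_studies):
    control = rng.normal(0.0, proxy_sd, n_per_arm)
    treated = rng.normal(0.0, proxy_sd, n_per_arm)  # true effect is zero
    _, p = ttest_ind(treated, control)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_studies} null studies were 'significant'")
# Expect roughly 50: about 5% of pure-noise studies come out as "findings",
# and those are the ones the press releases and headlines get built on.
```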
There is a decent talk “Big Fat Fiasco” (about an hour long) that explains what happened with nutrition science.
Some of the interesting parts are near the end of part 2/start of part 3:
Specifically, Senator McGovern dismissing complaints from scientists that there was not enough evidence that fat caused heart disease by saying:

Senators don’t have the luxury that a research scientist has of waiting until every last shred of evidence is in.
A little later the video mentions that at the time 90% of all funding for research on heart disease was provided by the US government and the American Heart Association. Thus, once both said that fat causes heart disease, it was nearly impossible for scientists who got conflicting results to get funding.
Edited conclusion to make it clearer:
There are two problems here:
1) The attitude that this area is too important to wait for “every last shred of evidence” and thus we must go with science based upon weak evidence.
Where else have I heard this? The same attitude appears to be prevalent in climate science and pandemic medicine today.
2) Nearly all funding provided by a few large organizations that are thus subject to politics and group think.
This appears to be true in most sciences today.
As such, unfortunately, it appears that the case of nutrition science isn’t just an isolated incident.
I suppose the lesson here is that an inability to wait until all the evidence is in before beginning to act does not imply that you should stop investigating new evidence once you begin to act.
The problem is that once you begin to act you’re subject to commitment bias. Namely, as happened in the example, you have a psychological and possibly institutional commitment to the correctness of the theory you’re acting under.
I think not waiting for every last shred of evidence is the root of Bayesian thinking, and is also the justification for considering that there will be a singularity and that there are existential risks we can do something about now, well before the last shred of evidence is even remotely in sight.
The last shred of evidence about existential risks is one of them killing everyone.
The other side of that coin is that we should stop treating criticisms and/or attacks on individual scientific theories as attacks on “science”.
I mostly agree but am bothered by the fact that from an outside view this sounds like No True Scotsman.
This is a valid point. However, there is an objective fact that’s different between physics/biology and nutrition: in the former, there is a lot of historical progress: stuff discovered and promoted at a high confidence tends to be supported and replicated. In the latter, stuff promoted at high confidence by the media is fairly likely to be contradicted again soon after. So it’s significantly more reasonable to ignore the results of nutrition science when deciding what to eat than it is to ignore the predictions of, say, biology when deciding whether to vaccinate your children.
I suspect much of medicine, especially the newer stuff, is probably nearly as bad as nutrition.
Edit: See Robin Hanson’s many posts on the subject.
Why especially the “newer stuff”?
The two most obvious reasons are:
1) Once the low-hanging fruit is exhausted, people are more likely to make stuff up.
2) Newer stuff has had less time for problems to get exposed.
Just curious, as I’ve heard the opposite asserted with confidence.
1) Very little of the Hansonian critique of medicine involves researchers making stuff up, and I doubt this is a major problem.
2) True, although hopefully research methodology is improving.
This analysis may interest you; I seem to recall it supports your suspicion.
Sorry about that; I meant more generating results based on statistical noise than outright faking research.
Ah. Then yeah that’s a problem, but I’m not sure why this would be worse with recent research.
This article gives a pretty good overview of the shortcomings of medical statistics, and includes one of my favorite lines ever:

Such sad statistical situations suggest that the marriage of science and math may be desperately in need of counseling. Perhaps it could be provided by the Rev. Thomas Bayes.
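A rough version of the Bayesian point, with made-up numbers (the prior, power, and alpha below are my own illustrative assumptions, not figures from the article): even before worrying about any other caveats, a lone p < 0.05 result is only as believable as the prior odds and the study’s power allow.

```python
# Back-of-the-envelope Bayes: how believable is a single p < 0.05 result
# if only a small fraction of tested hypotheses are actually true?
# All three numbers are illustrative assumptions.
prior_true = 0.10   # fraction of tested hypotheses that are actually true
power      = 0.50   # chance a true effect reaches significance
alpha      = 0.05   # chance a null effect reaches significance anyway

p_sig = prior_true * power + (1 - prior_true) * alpha
posterior_true = prior_true * power / p_sig

print(f"P(significant result)        = {p_sig:.3f}")
print(f"P(effect is real | p < 0.05) = {posterior_true:.3f}")
# With these assumptions a "statistically significant" finding is real only
# about 53% of the time -- barely better than a coin flip.
```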
Because the earlier stuff, e.g., sanitation, vaccines, and antibiotics, had a stronger effect and thus was easier to notice above the noise.
And also the older stuff has been around long enough to get the bad results beaten out of it.
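A quick sanity check on the “stronger effect is easier to notice” point, using the standard normal-approximation sample-size formula for a two-arm comparison (the effect sizes below are illustrative guesses, not measured values):

```python
# Approximate participants needed per arm to detect a standardized effect
# (Cohen's d) at alpha = 0.05 (two-sided) with 80% power.
from scipy.stats import norm

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

for label, d in [("large effect (antibiotic-sized)", 1.0),
                 ("modest effect", 0.3),
                 ("tiny dietary-tweak effect", 0.05)]:
    print(f"{label:32s} d={d:4.2f}  ~{n_per_arm(d):,.0f} per arm")
# A large effect shows up with a few dozen subjects per arm; a tiny one needs
# thousands, which is why weak dietary signals so easily drown in the noise.
```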
Oppenheimer’s maxim is relevant here:

It is a profound and necessary truth that the deep things in science are not found because they are useful, they are found because it was possible to find them.
A general theory of nutrition would be highly useful, but there’s no reason to believe modern scientific methodology can produce such a thing.
It’s not necessarily science’s fault that nutrition is an easy place for pseudoscience to take root. One might look at immunology, or psychology, and see nearly identical situations. People seem to think that the science of nutrition is uncertain or self-contradictory because they mix up the science with the headlines.