The dairy meta-analysis you provide was supported by Dairy Innovation Australia Limited, which makes me skeptical of its findings, given the many ways data can be aggregated to yield a desired conclusion, and the evidence documenting how strongly financial incentives can affect scientific findings. I haven’t looked at the other study, but I’m happy to accept that the evidence for the health benefits of fish is strong.
I haven’t researched the literature on methionine restriction extensively, but here’s a paper I found after a quick Google Scholar search.
More generally, I think having answers to questions of the following sort would be very helpful in these discussions: “To what degree are hypotheses confirmed in non-human animal models later vindicated by experimental studies?” and “To what degree are hypotheses confirmed by correlational human studies later vindicated by experimental studies?”
“To what degree are hypotheses confirmed by correlational human studies later vindicated by experimental studies?”
That depends on what you mean by a ‘correlational study.’ People who analyze observational data with causality in mind spend a lot of time thinking about potential confounding and what to do about it. For example, here’s an analysis of a very large longitudinal dataset with the aim of determining a causal effect:
http://www.hsph.harvard.edu/wp-content/uploads/sites/1138/2012/09/ije_2009.pdf
which does very sensible things. If you look at sensible analyses of observational data, then ‘the vindication rate’ will depend on how often the needed assumptions actually hold. If you look at non-sensible analyses (e.g. ones that don’t adjust for confounding), then it’s just garbage, and there’s no reason to expect results better than chance.
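To make the confounding point concrete, here is a minimal simulated sketch (my own toy example, not from the linked paper): a variable Z drives both the exposure X and the outcome Y, so the naive X–Y association is badly biased, while a regression that conditions on Z recovers the true effect. The effect sizes and the simple least-squares setup are all assumptions chosen for illustration.

```python
# Toy illustration of confounder adjustment in observational data.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Z confounds both the "exposure" X and the "outcome" Y;
# the true causal effect of X on Y is 0.5 (chosen arbitrarily).
z = rng.normal(size=n)
x = 2.0 * z + rng.normal(size=n)
y = 0.5 * x + 3.0 * z + rng.normal(size=n)

def ols_slope(outcome, *covariates):
    """Coefficient on the first covariate from a least-squares fit
    (with an intercept)."""
    design = np.column_stack(covariates + (np.ones(len(outcome)),))
    coefs, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coefs[0]

naive = ols_slope(y, x)        # ignores Z: inflated estimate (~1.7 here)
adjusted = ols_slope(y, x, z)  # conditions on Z: close to the true 0.5

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Real analyses of course face confounders that are unmeasured or imperfectly measured, which is exactly why the vindication rate hinges on whether the identifying assumptions actually hold.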
I’m not sure of the actual base rate, but I know it is very poor, which is why most researchers don’t take animal models very seriously.
I’m not sure; I think it is also very poor, but having a measure of this would be very valuable.