As a test case, I tried applying this technique to the Dangers of Red Meat, which is apparently a risk factor for colorectal cancer. The abstracts of the first few papers claimed that it is a risk factor with the following qualifications:
if you have the wrong genotype (224 citations)
if the meat is well-done (178 citations)
if you have the wrong genotype, the meat is well done, and you smoke (161 citations)
only for one subtype of colorectal cancer (128 citations)
only for a different subtype not overlapping with the previous one (96 citations)
for all subtypes uniformly (100 citations)
no correlation at all (78 citations)
Correct me if I’m wrong, but most of those look like the result of fishing around for positive results, e.g. “We can’t find a significant result… unless we split people into a bunch of genotype buckets, in which case one of them gives a small enough p-value for this journal.” I haven’t read the studies in question so maybe I’m being unfair here, but still, it feels fishy.
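To make the fishing worry concrete, here is a minimal simulation with entirely made-up data and an assumed 20 genotype buckets: even when red meat does nothing at all, some subgroup usually clears p < 0.05.

    # Simulate a world where red meat has NO effect, split subjects into
    # 20 hypothetical genotype buckets, and count how often at least one
    # bucket comes out "significant" anyway. Illustrative assumptions only.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_buckets, n_per_arm, n_studies = 20, 100, 500

    fished = 0
    for _ in range(n_studies):
        p_min = 1.0
        for _ in range(n_buckets):
            # "risk scores" for meat-eaters vs. abstainers: same distribution
            eaters = rng.normal(0.0, 1.0, n_per_arm)
            abstainers = rng.normal(0.0, 1.0, n_per_arm)
            _, p = stats.ttest_ind(eaters, abstainers)
            p_min = min(p_min, p)
        if p_min < 0.05:
            fished += 1

    print(f"{fished / n_studies:.0%} of null studies found a 'significant' bucket")
    # Expect roughly 1 - 0.95**20, i.e. about 64%.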
You may be right. It’s not quite M&M colors, though; there was apparently some reason to believe this allele would have an effect on the relationship between red meat and cancer. If anything, you might claim that the fishing around is occurring at the meta level: the buckets are “genetics has an effect”, “the cancer’s location has an effect”, “how the meat is cooked has an effect”, and so on.
I believe at least part of the reason for this is that “the correlation between red meat and cancer is 0.56” or whatever is not an interesting paper anymore, so we add other variables like smoking to see what happens. (Much like “red meat causes cancer” is a more interesting paper than “1% of people have cancer”.) I’m not sure whether this is good or bad.
I punched in “red meat” to google scholar.
http://care.diabetesjournals.org/content/27/9/2108.short 197 citations—concluding that eating red meat “may” increase your risk of type II diabetes.
http://ajcn.nutrition.org/content/82/6/1169.short 173 citations—showing more “correlations” and “associations” for the “beneficial effect of plant food intake and an adverse effect of meat intake on blood pressure.”
Seems accurate.
People who eat red meat tend to:
Do you understand why it’s not… entirely honest… to blame red meat? It shows up as a statistical correlate. It can be used to identify people at risk for these conditions, but then researchers make a leap and infer a causal relationship.
It’s an ideological punchline they can use to get published. And that’s all.
You do understand that scientists don’t just look for correlations but build somewhat more complex models than that. Do you seriously think that things like that are not taken into account!? Hell, I am willing to bet that a bunch of the studies test those correlations by comparing, for example, smokers who eat more red meat versus smokers who eat less or no red meat.
I mean, come on.
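For what it’s worth, the stratified check described above looks something like this in miniature; the counts below are invented purely for illustration:

    # Look at the meat-cancer association *within* smokers only, so smoking
    # itself can't drive the result. All numbers are made up.
    from scipy import stats

    # 2x2 table restricted to smokers:
    # rows = (eats red meat, doesn't), cols = (cancer, no cancer)
    smokers = [[30, 470],
               [25, 475]]

    odds_ratio, p = stats.fisher_exact(smokers)
    print(f"within-smokers odds ratio: {odds_ratio:.2f}, p = {p:.2f}")
    # Repeat for non-smokers. If the association survives in each stratum,
    # smoking alone can't explain it; if it vanishes, it was confounding.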
Taking everything into account is difficult, especially when you have no idea exactly what you ought to be taking into account. Even if you manage to do that exactly right, there is still publication bias to deal with. And if you are using science for practical purposes, it’s even harder to make sure that the research is answering the right question in the first place. Sieben’s comments sound anti-science... but really they are frustration directed at a real problem. There really is a lot of bad science out there, some of it published in top journals, and even good science is usually extremely limited insofar as you can use it in practice.
I think it’s just important to remember that while scientific papers should be given more weight than almost every other source of evidence, that’s not actually very much weight. You can’t instrumentally rely on a scientific finding unless it’s been replicated multiple times and/or has a well-understood mechanism behind it.
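One quick way to see why the weight is small: a Bayes calculation of what a lone p < 0.05 finding is worth, in the spirit of Ioannidis’s “Why Most Published Research Findings Are False”. The power and base-rate numbers below are illustrative assumptions, not measurements:

    # P(hypothesis true | significant result), ignoring publication bias.
    def prob_finding_is_true(prior, power=0.5, alpha=0.05):
        true_positives = prior * power          # true effects detected
        false_positives = (1 - prior) * alpha   # null effects passing p < 0.05
        return true_positives / (true_positives + false_positives)

    for prior in (0.5, 0.1, 0.01):
        print(f"prior {prior:>4}: P(true | p < 0.05) = {prob_finding_is_true(prior):.2f}")
    # prior 0.5 -> 0.91; prior 0.1 -> 0.53; prior 0.01 -> 0.09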
You do understand that scientists don’t just look for correlations but build somewhat more complex models than that. Do you seriously think that things like that are not taken into account!?
Yes. You should read the papers. They’re garbage.
Remember that study on doctors and how they screwed up the breast cancer Bayesian updating question? Only 15% of them got it right, which is actually surprisingly high.
Okay, now how much statistical training do you think people in public health, a department that is a total joke at most universities, have? Because I know how much statistical training the geostatisticians have at UT and they’re brain damaged. They can sure work a software package though...
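For reference, the breast-cancer question mentioned above works out like this. The figures are the ones usually quoted in the Eddy/Gigerenzer versions of the problem, stated here as assumptions rather than the exact numbers from that survey:

    # P(cancer | positive mammogram) via Bayes' theorem.
    prevalence = 0.01             # P(cancer)
    sensitivity = 0.80            # P(positive | cancer)
    false_positive_rate = 0.096   # P(positive | no cancer)

    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    posterior = sensitivity * prevalence / p_positive
    print(f"P(cancer | positive) = {posterior:.3f}")  # ~0.078, not ~0.8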
Hell, I am willing to bet that a bunch of the studies test those correlations by comparing, for example, smokers who eat more red meat versus smokers who eat less or no red meat.
“A bunch of” ~= the majority. I’m sure there could be a few, but it wouldn’t be characteristic. I’m not saying ALL the studies are going to be bad, just that bulk surveys are likely to be garbage.
Maybe I should have chosen “Theologians’ opinions on God” rather than “Middle-aged, middle-class suburban nutritionists’ opinions on red meat”. I thought everyone here would see through frakking EPIDEMIOLOGICAL STUDIES, but I guess not.
Remember that study on doctors and how they screwed up the breast cancer Bayesian updating question? Only 15% of them got it right, which is actually surprisingly high.
Doctors, not researchers in the top peer-reviewed papers…
I thought everyone here would see through frakking EPIDEMIOLOGICAL STUDIES, but I guess not.
Haven’t been interested at all in the subject and have never looked into it. And anyway, if you are right and they are completely fake and wrong, this would not be general evidence that papers are never better than coin flips.
I am leaving this conversation. If you really believe that the most-cited, accepted, recent articles, etc. are as accurate as a coin flip because people have biases and because the statistics are not perfect, and if nothing that I’ve said so far has convinced you otherwise, then there is no point in continuing.
Also, not to be rude, but I do not see why you would join LessWrong if you think like that. A lot of the material covered here and a lot of the community’s views are based on accepted research. The rest is based on less accepted research. Either way, the belief that research (especially well peer-reviewed research) brings you closer to the truth than coin flips on average is really ingrained in the community.
Doctors, not researchers in the top peer-reviewed papers…
Researchers who got there because other researchers said they were good. It’s circular logic.
Haven’t been interested at all in the subject and have never looked into it. And anyway, if you are right and they are completely fake and wrong, this would not be general evidence that papers are never better than coin flips.
It’s prima facie evidence. That’s all I hoped for. I haven’t actually done a simple random sample of journals by topic and figured out which ones are really BS. But of the subjects I do know about, almost all of the literature in “top peer-reviewed” journals is garbage. This includes my own technical field of engineering/simulation.
I am leaving this conversation. If you really believe that the most-cited, accepted, recent articles, etc. are as accurate as a coin flip because people have biases and because the statistics are not perfect, and if nothing that I’ve said so far has convinced you otherwise, then there is no point in continuing.
Straw man. I did not say the statistics were not “perfect”. And I did not say they were “as accurate as a coin flip”. In the red meat example, they are worse.
Also, not to be rude, but I do not see why you would join LessWrong if you think like that. A lot of the material covered here and a lot of the community’s views are based on accepted research.
A lot of LW is analytical.
The rest is based on less accepted research. Either way, the belief that research (especially well peer-reviewed research) brings you closer to the truth than coin flips on average is really ingrained in the community.
Research is a good starting point to discover the dynamics of a certain issue. It doesn’t mean my final opinion depends on it.
I followed the first link http://care.diabetesjournals.org/content/27/9/2108.short and the abstract there had “After adjusting for age, BMI, total energy intake, exercise, alcohol intake, cigarette smoking, and family history of diabetes, we found positive associations between intakes of red meat and processed meat and risk of type 2 diabetes.”
And then later: “These results remained significant after further adjustment for intakes of dietary fiber, magnesium, glycemic load, and total fat.” I’m not sure whether that latter result was reported separately, though, because it was specifically about /processed/ meat.
So long as they keep the claim as modest as ‘eating red meat “may” increase your risk of type II diabetes’, it seems reasonable. They could still be wrong, of course, but the statement allows for that. I should note here that the study was on women over 45, not a general population sample. (A rough sketch of what that “adjusting” involves mechanically follows below.)
If there’s better evidence that the search is not finding, that is a problem.
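As promised above, here is roughly what “after adjusting for age, BMI, smoking, ...” usually means mechanically: the exposure goes into one regression alongside the covariates, and its coefficient is read off. A toy sketch on synthetic data, where smoking confounds a truly null red-meat effect:

    # Synthetic world: smokers eat more red meat and smoking causes disease,
    # but red meat itself does nothing. Adjustment should reveal that.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 5000
    smoking = rng.binomial(1, 0.3, n).astype(float)
    red_meat = rng.binomial(1, 0.3 + 0.4 * smoking).astype(float)
    p_disease = 1.0 / (1.0 + np.exp(-(-3.0 + 1.5 * smoking)))
    disease = rng.binomial(1, p_disease)

    X = sm.add_constant(np.column_stack([red_meat, smoking]))
    fit = sm.Logit(disease, X).fit(disp=0)
    print(fit.params)  # red-meat coefficient ~0 once smoking is in the model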
Red meat adds a literal sizzle to research papers.