As someone who actually does academic research and has spent countless hours reading the fine details of “prestigious” publications, I can tell you that 90% of the material out there is total garbage, and it is difficult to know whether a paper is garbage just by reading the abstract. Peer review doesn’t help either, because review boards are lazy and will never double-check any of your actual footwork. They will never read your code, rerun the algorithms you claim you used, etc. A simple glitch can change the sign of your answer, but you typically stop looking for glitches in your 10,000 lines of code once the answer “looks right”.
So, if there is any controversy on the issue, remain agnostic. Like a 90/10 split in academia is totally reasonable once you factor in the intellectual biases/laziness/incompetence of researchers and publishers. All an article tells you is that somewhere out there, someone with an IQ >= 110 has an opinion they are publishing to further their career. I don’t place very much weight on that.
Don’t become one of these “reserch sez...” people who just regurgitate abstracts. You’ll wind up worried about the dangers of red meat, getting too much sunlight, doing 45 minutes of cardio every day, etc.
You’ll wind up worried about the dangers of red meat, getting too much sunlight
I am worried about getting too much sunlight. Apart from any increased risk of melanoma, it @#%@ hurts! Your skin goes red, painful, and sensitive to the touch. A little later the skin peels off. If there was sufficient exposure, bleeding is involved.
Also, as long as we’re picking on specific examples: I’m confused by the inclusion of the cardio item. Is the jury not yet settled on the benefits of daily cardio?
As a test case, I tried applying this technique to the Dangers of Red Meat, which is apparently a risk factor for colorectal cancer. The abstracts of the first few papers claimed that it is a risk factor with the following qualifications:
if you have the wrong genotype (224 citations)
if the meat is well-done (178 citations)
if you have the wrong genotype, the meat is well done, and you smoke (161 citations)
only for one subtype of colorectal cancer (128 citations)
only for a different subtype not overlapping with the previous one (96 citations)
for all subtypes uniformly (100 citations)
no correlation at all (78 citations)
Correct me if I’m wrong, but most of those look like the result of fishing around for positive results, e.g. “We can’t find a significant result… unless we split people into a bunch of genotype buckets, in which case one of them gives a small enough p-value for this journal.” I haven’t read the studies in question so maybe I’m being unfair here, but still, it feels fishy.
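That subgroup-fishing worry is easy to demonstrate with a toy simulation (purely hypothetical numbers, not a claim about the actual studies above): under the null hypothesis each subgroup’s p-value is uniform on (0, 1), so if you test enough genotype buckets of pure noise, some bucket will usually clear p < 0.05.

```python
import random

def chance_of_spurious_hit(n_buckets=20, alpha=0.05, trials=10_000, seed=0):
    """Estimate the probability that at least one of n_buckets null
    subgroups comes out 'significant' by chance alone, given that a
    null p-value is uniformly distributed on (0, 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Draw one uniform "p-value" per bucket; any one below alpha
        # counts as a publishable false positive.
        if any(rng.random() < alpha for _ in range(n_buckets)):
            hits += 1
    return hits / trials

# With 20 genotype buckets, the analytic false-alarm rate is
# 1 - 0.95**20, about 0.64: fishing "works" most of the time.
print(chance_of_spurious_hit())
```

With a single pre-registered test the rate stays near 0.05; the problem appears only when many untested subgroups are available after the fact.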
You may be right. It’s not quite M&M colors, though; there was apparently some reason to believe this allele would have an effect on the relationship between red meat and cancer. If anything, you might claim that the fishing around is occurring at the meta level: the buckets are “genetics has an effect”, “the cancer’s location has an effect”, “how the meat is cooked has an effect”, and so on.
I believe at least part of the reason for this is that “the correlation between red meat and cancer is 0.56” or whatever is not an interesting paper anymore, so we add other variables like smoking to see what happens. (Much like “red meat causes cancer” is a more interesting paper than “1% of people have cancer”.) I’m not sure whether this is good or bad.
Do you understand why it’s not… entirely honest… to blame red meat? It shows up as a statistical correlate. It can be used to identify people at risk for these conditions, but then researchers make a leap and infer a causal relationship.
It’s an ideological punchline they can use to get published. And that’s all.
You do understand that scientists don’t just look for correlations but form somewhat more complex models than that. Do you seriously think that things like that are not taken into account!? Hell, I am willing to bet that a bunch of the studies test those correlations by comparing, for example, smokers who eat more red meat versus smokers who eat less or no red meat. I mean, come on.
Taking everything into account is difficult, especially when you have no idea exactly what you ought to be taking into account. Even if you manage to do that exactly right, there is still publication bias to deal with. And if you are using science for practical purposes, it’s even harder to make sure that the research is answering the right question in the first place. Sieben’s comments sound anti-science... but really they are frustration directed at a real problem. There really is a lot of bad science out there; sometimes it is even published in top journals. And even good science is usually extremely limited insofar as you can use it in practice.
I think it’s just important to remember that while scientific papers should be given more weight than almost every other source of evidence, that’s not actually very much weight. You can’t instrumentally rely on a scientific finding unless it’s been replicated multiple times and/or has a well understood mechanism behind it.
You do understand that scientists don’t just look for correlations but form somewhat more complex models than that. Do you seriously think that things like that are not taken into account!?
Yes. You should read the papers. They’re garbage.
Remember that study on doctors and how they screwed up the breast cancer Bayesian updating question? Only 15% of them got it right, which is actually surprisingly high.
Okay, now how much statistical training do you think people in public health (a department that is a total joke at most universities) have? Because I know how much statistical training the geostatisticians have at UT, and they’re brain-damaged. They can sure work a software package, though...
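For reference, the doctors’ question alluded to above is the standard mammography problem, and the arithmetic they botched is just Bayes’ rule. The exact figures vary between tellings; the commonly quoted ones (1% prevalence, 80% sensitivity, 9.6% false-positive rate) are assumed below.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) by Bayes' rule."""
    true_pos = prior * sensitivity                    # P(disease and positive)
    false_pos = (1 - prior) * false_positive_rate     # P(healthy and positive)
    return true_pos / (true_pos + false_pos)

# Most doctors reportedly guess somewhere around 70-80%;
# the correct answer with these numbers is under 8%.
p = posterior(prior=0.01, sensitivity=0.80, false_positive_rate=0.096)
print(round(p, 3))  # → 0.078
```

The counterintuitive part is the base rate: with only 1% prevalence, false positives from the healthy 99% swamp the true positives.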
Hell, I am willing to bet that a bunch of the studies test those correlations by comparing, for example, smokers who eat more red meat versus smokers who eat less or no red meat.
“A bunch of” ~= the majority. I’m sure there could be a few, but it wouldn’t be characteristic. I’m not saying ALL the studies are going to be bad, just that bulk surveys are likely to be garbage.
Maybe I should have chosen “Theologians’ opinions on God” rather than “Middle-aged, middle-class suburban nutritionists’ opinions on red meat”. I thought everyone here would see through frakking EPIDEMIOLOGICAL STUDIES, but I guess not.
Remember that study on doctors and how they screwed up the breast cancer Bayesian updating question? Only 15% of them got it right, which is actually surprisingly high.
Doctors, not researchers in the top peer-reviewed papers...
I thought everyone here would see through frakking EPIDEMIOLOGICAL STUDIES, but I guess not.
Haven’t been interested at all in the subject and have never looked into it. And anyway if you are right and they are completely fake and wrong, this would not be general evidence that papers are always as good as coin flips.
I am leaving this conversation. If you really believe that the most-cited, accepted, recent articles etc. are as accurate as a coin flip because people have biases and because the statistics are not perfect and if nothing that I’ve said so far has convinced you otherwise then there is no point in continuing.
Also, not to be rude, but I do not see why you would join LessWrong if you think like that. A lot of the material covered here and a lot of the community’s views are based on accepted research. The rest is based on less accepted research. Either way, the belief that research (especially well peer-reviewed research) brings you closer to the truth than coin flips on average is really ingrained in the community.
Doctors, not researchers in the top peer-reviewed papers...
Researchers who got there because other researchers said they were good. It’s circular logic.
Haven’t been interested at all in the subject and have never looked into it. And anyway if you are right and they are completely fake and wrong, this would not be general evidence that papers are always as good as coin flips.
It’s prima facie evidence. That’s all I hoped for. I haven’t actually done an SRS of journals by topic and figured out which ones are really BS. But of the subjects I do know about, almost all of the literature in “top peer-reviewed” papers is garbage. This includes my own technical field of engineering/simulation.
I am leaving this conversation. If you really believe that the most-cited, accepted, recent articles etc. are as accurate as a coin flip because people have biases and because the statistics are not perfect and if nothing that I’ve said so far has convinced you otherwise then there is no point in continuing.
Straw man. I did not say the statistics were not “perfect”. And I did not say they were “as accurate as a coin flip”. In the red meat example, they are worse.
Also, not to be rude, but I do not see why you would join LessWrong if you think like that. A lot of the material covered here and a lot of the community’s views are based on accepted research.
A lot of LW is analytical.
The rest is based on less accepted research. Either way, the belief that research (especially well peer-reviewed research) brings you closer to the truth than coin flips on average is really ingrained in the community.
Research is a good starting point to discover the dynamics of a certain issue. It doesn’t mean my final opinion depends on it.
I followed the first link http://care.diabetesjournals.org/content/27/9/2108.short and the abstract there had “After adjusting for age, BMI, total energy intake, exercise, alcohol intake, cigarette smoking, and family history of diabetes, we found positive associations between intakes of red meat and processed meat and risk of type 2 diabetes.”
And then later, “These results remained significant after further adjustment for intakes of dietary fiber, magnesium, glycemic load, and total fat.” though I’m not sure if the latter was separate because it was specifically about /processed/ meat.
So long as they keep the claim as modest as ‘eating red meat “may” increase your risk of type II diabetes,’ it seems reasonable. They could still be wrong, of course, but the statement allows for that. I should note here that the study was on women over 45, not a general population of an area.
If there’s better evidence that the search is not finding, that is a problem.
Yes, there is A LOT of garbage. This is why I am recommending using heuristics such as numbers of citations—to maximize the accuracy of the information. And, yes, peer review is not perfect but compare journals/fields that rely on peer-review to those that do not...
Furthermore, Systematic Reviews have a pretty good track record as far as I know and this is why I recommend them.
So, if there is any controversy on the issue, remain agnostic
This post is not so much about academically controversial issues but even in those cases if you don’t have any reasons not to, then siding with the majority will bring you to the truth more often than the alternative.
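The “side with the majority” claim can be given a toy model (a sketch under strong assumptions, not something from this thread): if each expert is independently right with probability above one half, a majority vote of many experts is right more often than any single expert (the Condorcet jury theorem). The independence assumption is precisely what the skeptical side here disputes.

```python
import math

def majority_correct(p_individual, n_experts):
    """Probability that a majority of n_experts (odd) independent experts,
    each right with probability p_individual, reaches the right answer.
    Sums the binomial tail from the smallest winning majority upward."""
    need = n_experts // 2 + 1
    return sum(
        math.comb(n_experts, k)
        * p_individual**k
        * (1 - p_individual)**(n_experts - k)
        for k in range(need, n_experts + 1)
    )

# Experts individually right 60% of the time:
print(round(majority_correct(0.6, 1), 3))   # → 0.6
print(round(majority_correct(0.6, 21), 3))  # well above 0.6 for 21 experts
```

If the experts share a common bias, the independence assumption fails and the majority can converge confidently on the wrong answer, which is the failure mode the rest of the thread argues about.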
Don’t become one of these “reserch sez...” people who just regurgitate abstracts. You’ll wind up worried about the dangers of red meat, getting too much sunlight, doing 45 minutes of cardio every day, etc.
This is the type of thing that you see if you do a normal google search instead of a scholarly search. I have not checked but I bet that the most cited recent review articles on those issues can provide you with some pretty good information.
Yes, there is A LOT of garbage. This is why I am recommending using heuristics such as numbers of citations—to maximize the accuracy of the information. And, yes, peer review is not perfect but compare journals/fields that rely on peer-review to those that do not...
My argument really boils down to 2 things. Researchers being systematically biased (ex: red meat), and researchers having a very low probability of actually knowing the right answer but publishing something that fits some narrow set of data (ex: “advanced” simulation). To be sure, I’ve used research to make a lot of informed decisions over my lifetime, but it’s always been straightforward, pretty much unanimous, and with lots of testimonials from online groups to give it statistical mass.
This post is not so much about academically controversial issues but even in those cases if you don’t have any reasons not to, then siding with the majority will bring you to the truth more often than the alternative.
Would you adopt this heuristic in any other scenario where the “right answer” isn’t obvious? Music, books, diet, politics, etc.? Even when you restrict your sampling pool to “experts only”, the results are still pretty bad. These people are self-selecting to do research. It’s not like you’re picking a random disinterested intelligent person and asking them to study the problem. No one becomes a nutritionist because they have no opinion on food.
This is the type of thing that you see if you do a normal google search instead of a scholarly search. I have not checked but I bet that the most cited recent review articles on those issues can provide you with some pretty good information.
The overwhelming trend is fear mongering coming out of epidemiological studies.
My argument really boils down to 2 things. Researchers being systematically biased (ex: red meat), and researchers having a very low probability of actually knowing the right answer but publishing something that fits some narrow set of data (ex: “advanced” simulation). To be sure, I’ve used research to make a lot of informed decisions over my lifetime, but it’s always been straightforward, pretty much unanimous, and with lots of testimonials from online groups to give it statistical mass.
I acknowledged that there are problems; nothing is perfect. But I don’t know what you want from me. To convince you that science as a whole works!? Or that information in studies is more accurate than made-up information?
All I am advocating is to look for ‘respected’ studies and look at them. If you don’t think that looking at studies ‘approved’ by the field gives you more accurate information than not doing it I can’t really do much.
Would you adopt this heuristic in any other scenario where the “right answer” isn’t obvious? Music, books, diet, politics, etc.? Even when you restrict your sampling pool to “experts only”, the results are still pretty bad.
Yes, I believe in science no matter what scenario I am in. You don’t need to blindly trust it or anything; I put different weights on different claims, etc., but I would still take into account information from recent, well-cited meta-analyses or whatever I can get my hands on.
These people are self-selecting to do research. It’s not like you’re picking a random disinterested intelligent person and asking them to study the problem. No one becomes a nutritionist because they have no opinion on food.
So I should worry that researchers are interested in the topic that they are researching. What douchebags, eh?
The overwhelming trend is fear mongering coming out of epidemiological studies.
Okay. Citation? And remember we are not talking about ‘most studies’ or anything. The studies that we are talking about are well cited, by known researchers if possible and systematic reviews if possible.
Or that information in studies is more accurate than made-up information?
This is exactly my point. Studies on many, many subjects may not contain information more useful than a coin flip, let alone an educated guess.
All I am advocating is to look for ‘respected’ studies and look at them. If you don’t think that looking at studies ‘approved’ by the field gives you more accurate information than not doing it I can’t really do much.
This is question begging. You have to have a theory about why a “respected” study is likely to be correct. I’ve already provided theories explaining why they’re likely to be incorrect a large portion of the time.
Yes, I believe in science no matter what scenario I am in. You don’t need to blindly trust it or anything; I put different weights on different claims, etc., but I would still take into account information from recent, well-cited meta-analyses or whatever I can get my hands on.
I believe in science too. But “science” and “science articles” are different things. But you didn’t answer my question, and I really want to drive home that almost no one thinks it’s a good idea to trust “majority expert opinion” in all sorts of areas.
So I should worry that researchers are interested in the topic that they are researching. What douchebags, eh?
Don’t be dense. You know exactly what I mean. A vegetarian goes to grad school and does research on nutrition. What do you think is going to happen?
Okay. Citation? And remember we are not talking about ‘most studies’ or anything. The studies that we are talking about are well cited, by known researchers if possible and systematic reviews if possible.
Citations above where you commented. You can also just punch “red meat” into google scholar and it’s all about how you can die from it.
This is exactly my point. Studies on many, many subjects may not contain information more useful than a coin flip, let alone an educated guess.
Wow. This is a pretty far-fetched claim.
You have to have a theory about why a “respected” study is likely to be correct. I’ve already provided theories explaining why they’re likely to be incorrect a large portion of the time.
My theory is that respected papers are, on average, produced by a method more closely resembling the scientific method than a coin flip, and thus they get more accurate results than a coin flip. There, happy?
I believe in science too. But “science” and “science articles” are different things. But you didn’t answer my question
I did answer your question—the answer was yes.
I really want to drive home that almost no one thinks it’s a good idea to trust “majority expert opinion” in all sorts of areas.
Except, you know, the majority.
Don’t be dense. You know exactly what I mean. A vegetarian goes to grad school and does research on nutrition. What do you think is going to happen?
He is biased. So is the guy that went into grad school with anti-vegetarian views. If those guys are not changing their opinion based on the evidence then the chance is smaller (not nil though) that their papers will be highly cited.
Citations above where you commented.
You call studies that find correlations between things fear mongering? Oh my.
You can also just punch “red meat” into google scholar and it’s all about how you can die from it.
Oh my. Okay, first of all, you can die of pretty much anything, and pretty much anything has some dangers. Or at least that’s what those fear-mongering scientists claim. The studies show you some numbers to guide you in how much danger X (in this case red meat) poses to specific individuals.
Do you have any specific reason to think that those studies are fabricated and that in fact red meat has none of the effects that they claim?
Furthermore, if I tell you that drinking a large amount of water can kill you and do a study to prove it, then am I a fear-mongering scientist?
My theory is that respected papers are, on average, produced by a method more closely resembling the scientific method than a coin flip, and thus they get more accurate results than a coin flip. There, happy?
Thanks for clarifying. I disagree. See the systematic bias/complexity arguments.
I did answer your question—the answer was yes.
Do you really choose your music based on the average opinion of “experts”? Give me a break. Look, if you could randomly draft 20 people who had demonstrated independent rationality and objectivity, assign them to a problem, and take the majority opinion, I would be fine with that. But that’s not what we have at all. Anyone with an IQ above 110 can get any degree they want.
He is biased. So is the guy that went into grad school with anti-vegetarian views. If those guys are not changing their opinion based on the evidence then the chance is smaller (not nil though) that their papers will be highly cited.
Why would the best research win out? Why not the most fashionable research that confirms everyone’s worldviews? Why not the research that has the punchier abstract title? Why not the research that was fudged to show more impressive results?
You call studies that find correlations between things fear mongering? Oh my.
They could probably find a correlation between eating red meat and watching action movies, but that’s not exactly publishable.
Oh my. Okay, first of all, you can die of pretty much anything, and pretty much anything has some dangers. Or at least that’s what those fear-mongering scientists claim. The studies show you some numbers to guide you in how much danger X (in this case red meat) poses to specific individuals.
I mean, sure, if you consumed more red meat than was physiologically possible to scarf down without choking, you’d die. But that’s not unique to red meat. They’re claiming that there is a unique property of red meat which causes all these health problems, so no, it doesn’t fall under the same category as “pretty much anything can kill you”.
And no, they technically don’t even show danger. All they do is show correlations. Would you also conclude that wearing XXL t-shirts makes you fat?
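The XXL t-shirt jab is the textbook confounding pattern, and it can be reproduced synthetically (all numbers invented purely for illustration): let body weight drive both shirt size and disease, and shirt size will “predict” disease in the raw data even though it causes nothing. Stratifying on the confounder makes the association vanish.

```python
import random

def simulate(n=20_000, seed=1):
    """Generate data where being heavy causes both wearing XXL shirts
    and 'disease', while the shirt itself has no causal effect at all."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        heavy = rng.random() < 0.3                        # the confounder
        xxl = rng.random() < (0.8 if heavy else 0.05)     # caused by weight
        disease = rng.random() < (0.2 if heavy else 0.02) # also caused by weight
        rows.append((heavy, xxl, disease))
    return rows

def disease_rate(rows, xxl_value):
    """Disease frequency among people whose XXL status equals xxl_value."""
    group = [d for _, x, d in rows if x == xxl_value]
    return sum(group) / len(group)

rows = simulate()
# Raw comparison: XXL wearers show several times the disease rate...
print(disease_rate(rows, True), disease_rate(rows, False))
# ...but within each weight stratum the gap essentially disappears:
for heavy in (True, False):
    sub = [r for r in rows if r[0] == heavy]
    print(heavy, disease_rate(sub, True), disease_rate(sub, False))
```

This is exactly why epidemiologists adjust for covariates; the thread’s dispute is over whether all the relevant confounders are ever actually known and measured.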
Do you have any specific reason to think that those studies are fabricated and that in fact red meat has none of the effects that they claim?
Confounding variables mentioned above. Lack of replication/opposite findings in controlled studies. Testimonies from thousands of people on the paleo diet who have reversed their blood chemistry. Fat doctors/nutritionists, etc.
Furthermore, if I tell you that drinking a large amount of water can kill you and do a study to prove it, then am I a fear-mongering scientist?
If you try to publish dozens of studies on it in the year 2012, yes you are.
“Hey guys I just did ANOTHER study showing that drinking 82 gallons of water in one sitting will kill you (p<0.05)”
That would be fear mongering, although people probably wouldn’t take it seriously.
Do you really choose your music based on the average opinion of “experts”?
Yes, except that I am the only expert on what music I like.
Anyone with an IQ above 110 can get any degree they want.
Are we talking about degrees here? I am pretty sure I’ve been talking about top-level articles. Or can anyone with an IQ above 110 publish one of those?
Why would the best research win out?
No winning out here. The research will be closer to the truth than a random answer because the accuracy of the theories gets compared to reality, by doing experiments for example. Or because not every single person is completely biased and blind to the results that they get.
And no, they technically don’t even show danger. All they do is show correlations. Would you also conclude that wearing XXL t-shirts makes you fat?
Hey, that’s why they are correlations. I am not stopping you from believing that being predisposed to diabetes and cancer or whatever makes you more likely to eat red meat for example.
As I said in the other thread, I am not participating in this conversation any more.
Yes, except that I am the only expert on what music I like.
Oh, so you agree there can be good reasons to discount the “expert” establishment, no matter how much “peer review” or how many citations they have.
Are we talking about degrees here? I am pretty sure I’ve been talking about top-level articles. Or can anyone with an IQ above 110 publish one of those?
Yes. But getting a degree is normally a prerequisite for publishing, and everyone who gets a degree publishes something. And yes, you can publish articles in the “top” journals in grad school.
No winning out here. The research will be closer to the truth than a random answer because the accuracy of the theories gets compared to reality, by doing experiments for example. Or because not every single person is completely biased and blind to the results that they get.
Not every single person has to be biased. Just enough of them.
Hey, that’s why they are correlations. I am not stopping you from believing that being predisposed to diabetes and cancer or whatever makes you more likely to eat red meat for example.
But the researchers conclude that red meat increases your risk of heart disease simply because it is associated with heart disease. That is dishonest. If they can get away with blatantly unsubstantiated statements like that in epidemiological papers, what can’t they get away with buried in their SAS databases and algorithms?
I punched in “red meat” to google scholar.
http://care.diabetesjournals.org/content/27/9/2108.short 197 citations—concluding that eating red meat “may” increase your risk of type II diabetes.
http://ajcn.nutrition.org/content/82/6/1169.short 173 citations—Shows more “correlations” and “associations” for the “beneficial effect of plant food intake and an adverse effect of meat intake on blood pressure.”
Seems accurate.
People who eat red meat tend to:
Researchers who got there because other researchers said they were good. It’s circular logic.
It’s prima facie evidence. That’s all I hoped for. I haven’t actually done a SRS of journals by topic and figured out which ones are really BS. But of the subjects I do know about, almost all of the literature in “top peer reviewed” papers is garbage. This includes my own technical field of engineering/simulation.
Straw man. I did not say the statistics were not “perfect”. And I did not say they were “as accurate as a coin flip”. In the red meat example, they are worse.
A lot of LW is analytical.
Research is a good starting point to discover the dynamics of a certain issue. It doesn’t mean my final opinion depends on it.
I followed the first link http://care.diabetesjournals.org/content/27/9/2108.short and the abstract there had “After adjusting for age, BMI, total energy intake, exercise, alcohol intake, cigarette smoking, and family history of diabetes, we found positive associations between intakes of red meat and processed meat and risk of type 2 diabetes.”
And then later, “These results remained significant after further adjustment for intakes of dietary fiber, magnesium, glycemic load, and total fat.” though I’m not sure if the latter was separate because it was specifically about /processed/ meat.
So long as they keep the claim as modest as “eating red meat may increase your risk of type II diabetes,” it seems reasonable. They could still be wrong, of course, but the statement allows for that. I should note here that the study was on women over 45, not a general population of an area.
If there’s better evidence that the search is not finding, that is a problem.
Red meat adds a literal sizzle to research papers.
Yes, there is A LOT of garbage. This is why I am recommending heuristics such as citation counts, to maximize the accuracy of the information. And yes, peer review is not perfect, but compare journals/fields that rely on peer review to those that do not...
Furthermore, Systematic Reviews have a pretty good track record as far as I know and this is why I recommend them.
This post is not so much about academically controversial issues, but even in those cases, if you don’t have any reason not to, siding with the majority will bring you to the truth more often than the alternative.
This is the type of thing that you see if you do a normal google search instead of a scholarly search. I have not checked but I bet that the most cited recent review articles on those issues can provide you with some pretty good information.
My argument really boils down to two things: researchers being systematically biased (ex: red meat), and researchers having a very low probability of actually knowing the right answer but publishing something that fits some narrow set of data (ex: “advanced” simulation). To be sure, I’ve used research to make a lot of informed decisions over my lifetime, but it’s always been straightforward, pretty much unanimous, and with lots of testimonials from online groups to give it statistical mass.
Would you adopt this heuristic in any other scenario where the “right answer” isn’t obvious? Music, books, diet, politics, etc? Even when you restrict your sampling pool to “experts only”, the results are still pretty bad. These people are self-selecting to do research. It’s not like you’re picking a random disinterested intelligent person and asking them to study the problem. No one becomes a nutritionist because they have no opinion on food.
The overwhelming trend is fear mongering coming out of epidemiological studies.
I acknowledged that there are problems, nothing is perfect. But I don’t know what you want from me. To convince you that science as a whole works!? Or that information in studies is more accurate than made-up information?
All I am advocating is to look for ‘respected’ studies and look at them. If you don’t think that looking at studies ‘approved’ by the field gives you more accurate information than not doing it I can’t really do much.
Yes, I believe in science no matter what scenario I am in. You don’t need to blindly trust it or anything, I put different weights on different claims etc. but I would still take into account information from recent, well-cited meta-analyses or whatever I can get my hands on.
So I should worry that researchers are interested in the topic that they are researching. What douchebags, eh?
Okay. Citation? And remember, we are not talking about ‘most studies’ or anything. The studies that we are talking about are well cited, by known researchers, and systematic reviews where possible.
This is exactly my point. Studies on many, many subjects may not contain information more useful than a coin flip, let alone an educated guess.
This is question begging. You have to have a theory about why a “respected” study is likely to be correct. I’ve already provided theories explaining why they’re likely to be incorrect a large portion of the time.
I believe in science too. But “science” and “science articles” are different things. But you didn’t answer my question, and I really want to drive home that almost no one thinks it’s a good idea to trust “majority expert opinion” in all sorts of areas.
Don’t be dense. You know exactly what I mean. A vegetarian goes to grad school and does research on nutrition. What do you think is going to happen?
Citations above where you commented. You can also just punch “red meat” into Google Scholar and it’s all about how you can die from it.
Wow. This is a pretty far-fetched claim.
My theory is that respected papers are produced by a process that resembles the scientific method more than a coin flip does, and thus on average they get more accurate results than a coin flip. There, happy?
I did answer your question—the answer was yes.
Except, you know, the majority.
He is biased. So is the guy that went into grad school with anti-vegetarian views. If those guys are not changing their opinion based on the evidence then the chance is smaller (not nil though) that their papers will be highly cited.
You call studies that find correlations between things fear mongering? Oh my.
Oh my. Okay, first of all, you can die of pretty much anything, and pretty much anything has some dangers. Or at least that’s what those fear-mongering scientists claim. The studies show you some numbers to gauge how much danger X (in this case red meat) poses to specific individuals. Do you have any specific reason to think that those studies are fabricated and that in fact red meat has none of the effects that they claim?
Furthermore, if I tell you that drinking a large amount of water can kill you and do a study to prove it, then am I a fear-mongering scientist?
This is a pretty solid argument.
Thanks for clarifying. I disagree. See the systematic bias/complexity arguments.
Do you really choose your music based on the average opinion of “experts”? Give me a break. Look, if you could randomly draft 20 people who had demonstrated independent rationality and objectivity, assign them to a problem, and take the majority opinion, I would be fine with that. But that’s not what we have at all. Anyone with an IQ above 110 can get any degree they want.
Why would the best research win out? Why not the most fashionable research that confirms everyone’s worldviews? Why not the research that has the punchier abstract title? Why not the research that was fudged to show more impressive results?
They could probably find a correlation between eating red meat and watching action movies, but that’s not exactly publishable.
I mean sure, if you consumed more red meat than was physiologically possible to scarf down without choking, you’d die. But that’s not unique to red meat. They’re claiming that there is a unique property of red meat which causes all these health problems, so no, it doesn’t fall under the same category as “pretty much anything can kill you”.
And no, they technically don’t even show danger. All they do is show correlations. Would you also conclude that wearing XXL t-shirts makes you fat?
Confounding variables mentioned above. Lack of replication/opposite findings in controlled studies. Testimonies from thousands of people on the paleo diet who have reversed their blood chemistry. Fat doctors/nutritionists, etc.
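The XXL t-shirt point can be made concrete with a toy simulation (a hypothetical sketch with made-up numbers, not drawn from any cited study): a single confounder, body weight, drives both shirt size and diabetes risk, so shirt size ends up strongly associated with diabetes even though the shirts cause nothing.

```python
import random

random.seed(0)

n = 10_000
xxl, diabetic = [], []
for _ in range(n):
    weight = random.gauss(80, 15)                    # confounder: body weight (kg)
    wears_xxl = weight > 95                          # heavier people buy bigger shirts
    p_diabetes = 0.05 + 0.004 * max(0, weight - 80)  # weight, not the shirt, drives risk
    xxl.append(wears_xxl)
    diabetic.append(random.random() < p_diabetes)

def rate(group):
    """Diabetes rate among people who do/don't wear XXL shirts."""
    flags = [d for x, d in zip(xxl, diabetic) if x == group]
    return sum(flags) / len(flags)

print(f"diabetes rate, XXL wearers:     {rate(True):.3f}")
print(f"diabetes rate, non-XXL wearers: {rate(False):.3f}")
```

The XXL group shows a markedly higher diabetes rate despite shirt size having zero causal effect in the simulation, which is exactly the trap an observational correlation can set.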
If you try to publish dozens of studies on it in the year 2012, yes you are.
“Hey guys I just did ANOTHER study showing that drinking 82 gallons of water in one sitting will kill you (p<0.05)”
That would be fear mongering, although people probably wouldn’t take it seriously.
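The “yet another study, p&lt;0.05” joke has a statistical core: run enough comparisons where nothing is actually going on, and roughly 5% come back “significant” by chance. A minimal sketch with pure noise (no real foods or outcomes involved), using a normal approximation to the two-sample t-test:

```python
import math
import random

random.seed(1)

def two_sample_p(a, b):
    """Two-sided p-value via a z-approximation to the two-sample t-test."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    z = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return math.erfc(abs(z) / math.sqrt(2))  # P(|Z| > z) for standard normal

# 1000 comparisons of two groups drawn from the SAME distribution:
# every "significant" result is a false positive (expect roughly 5%).
trials = 1000
false_positives = sum(
    two_sample_p([random.gauss(0, 1) for _ in range(100)],
                 [random.gauss(0, 1) for _ in range(100)]) < 0.05
    for _ in range(trials)
)
print(f"'significant' findings out of {trials} null comparisons: {false_positives}")
```

Around fifty of the thousand null comparisons clear p&lt;0.05, which is why a single “significant” epidemiological finding out of many tested associations carries so little weight on its own.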
Yes, except that I am the only expert on what music I like.
Are we talking about degrees here? I am pretty sure I’ve been talking about top-level articles. Or can anyone with an IQ above 110 publish one of those?
No winning out here. The research will be closer to the truth than a random answer because the accuracy of the theories gets compared to reality by doing experiments, for example. Or because not every single person is completely biased and blind to the results that they get.
Hey, that’s why they are correlations. I am not stopping you from believing that being predisposed to diabetes and cancer or whatever makes you more likely to eat red meat for example.
As I said in the other thread, I am not participating in this conversation any more.
Oh, so you agree there can be good reasons to discount the “expert” establishment, no matter how much “peer review” or how many citations they have.
Yes. But getting a degree is normally a prereq for publishing, and everyone who gets a degree publishes something. And yes, you can publish in the “top” journal articles in grad school.
Not every single person has to be biased. Just enough of them.
But the researchers conclude that red meat increases your risk of heart disease simply because it is associated with heart disease. That is dishonest. If they can get away with blatantly unsubstantiated statements like that in epidemiological papers, what can’t they get away with buried in their SAS databases and algorithms?