There might be some factors which the study is failing to control for, but from the link in the grandparent:
[quote]Included in the analysis were 448,568 men and women without prevalent cancer, stroke, or myocardial infarction, and with complete information on diet, smoking, physical activity and body mass index.[/quote]
The study seems to control for the more obvious associated factors.
Also, the full text states that red meat consumption is associated with an increase in mortality when controlling for the confounders assessed in the study, with processed meat associated with a greater increase and poultry with no increase at all.
The problem is that the choice to eat differently is itself a potential confounder: people who pick particular diets may differ from people who don't in very important ways. And any time you have to deal with, say, ten factors and try to smooth them all out, you have to question whether any signal you find is meaningful at all, especially when it is relatively small.
The study in particular notes:
[quote]Men and women in the top categories of red or processed meat intake in general consumed fewer fruits and vegetables than those with low intake. They were more likely to be current smokers and less likely to have a university degree.[/quote]
At this point, you have to ask yourself whether you can do any sort of reasonable analysis of this population at all. You're seeing clear differences between the sub-populations, and you can't just "compensate" for them. If you take a sub-population with numerous factors that raise its risk of some disease, "compensate" for those factors, and still see an elevated rate of the disease, that isn't actually suggestive of anything, because you have no way of knowing whether your "compensation" actually compensated. Statistics is not magic; it cannot magically remove bias from data.
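To make the worry concrete, here's a minimal residual-confounding sketch in Python (all numbers invented, nothing from the study): mortality risk is driven only by smoking, red meat intake merely correlates with smoking, and the analyst adjusts for a noisily measured smoking variable. The adjustment fails silently and red meat picks up a spurious "effect".

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy world (invented numbers): smoking is the ONLY real cause of risk.
smoking = rng.binomial(1, 0.3, n).astype(float)
# Red meat intake correlates with smoking but has no causal effect here.
red_meat = 0.8 * smoking + rng.normal(0.0, 1.0, n)
risk = 0.5 * smoking + rng.normal(0.0, 1.0, n)

# The analyst can only adjust for a noisy proxy of smoking
# (self-reported habits, say).
smoking_observed = smoking + rng.normal(0.0, 0.7, n)

# Ordinary least squares: risk ~ 1 + red_meat + smoking_observed
X = np.column_stack([np.ones(n), red_meat, smoking_observed])
coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
print(f"adjusted 'effect' of red meat: {coef[1]:.3f}")  # clearly nonzero
[/code]

The model was "adjusted" for smoking, yet the meat coefficient stays well away from zero, and nothing in the output warns you about it.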
This is the problem with virtually all analyses like this, and it is why you should never, ever believe studies like this. Worse still, there's a good chance you're looking at the blue M&M problem: run enough tests on a large population and you will find "significant" trends that are not really there. And different studies (noted in the paper) point in different directions: that study showed no increase in mortality or morbidity from red meat consumption, an American study showed an increase, and several vegetarian studies showed no difference at all.

Given publication bias (positive results are more likely to be reported than negative ones), potential researcher bias (belief that a vegetarian diet is good for you is likelier than usual among people who study diet, since vegetarians are more interested in diet than the population as a whole), and the conflicting results across studies, I'd say that is pretty good evidence that there is no real effect and it is all nonsense. If I see five studies on diet, three saying one thing and two saying another, I'm going to stick with the null hypothesis, because it is far more likely that the three positive studies are the product of publication bias.
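The blue M&M point is just the multiple-comparisons problem, and it's easy to demonstrate: test enough dietary variables against a purely random outcome and some come out "significant" by chance alone. A sketch with invented data:

[code]
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_people, n_foods = 5_000, 100

# 100 made-up dietary variables and an outcome unrelated to all of them.
diets = rng.normal(size=(n_people, n_foods))
mortality = rng.normal(size=n_people)

p_values = [stats.pearsonr(diets[:, j], mortality)[1] for j in range(n_foods)]
false_hits = sum(p < 0.05 for p in p_values)
print(f"{false_hits} of {n_foods} foods 'significantly' predict mortality")
# Every association is noise by construction; expect ~5 hits at the 5% level.
[/code]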
Well, if you already know how much each of the associated factors contributes on its own, from other tests where you were able to isolate those variables, you can make an educated guess that their combined effect is no greater than the sum of their individual effects.
The presence of other studies that didn't show the same significant results weighs against it, but on the other hand such cases are certainly not unheard of among associations that turn out to be real. The Cochrane Collaboration's logo comes from a forest plot of results on whether giving corticosteroids to women about to give birth prematurely reduces the chance of the infant dying. Five of the seven studies failed to achieve statistical significance on their own, but taken together their evidence was highly significant, and further research since suggests the treatment reduces mortality by 30–50%.
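For what it's worth, the arithmetic behind that forest plot is simple inverse-variance pooling: several individually non-significant estimates, each with a wide confidence interval, can combine into a precise, significant pooled estimate. A sketch with made-up study numbers (not the actual corticosteroid trials):

[code]
import math

# Seven hypothetical log-odds-ratio estimates with standard errors;
# the values are invented for illustration only.
estimates = [-0.40, -0.25, -0.55, -0.10, -0.35, -0.50, -0.20]
std_errs  = [ 0.30,  0.28,  0.35,  0.25,  0.32,  0.40,  0.27]

# Fixed-effect (inverse-variance) pooling, as in a standard meta-analysis.
weights = [1 / se**2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

for e, se in zip(estimates, std_errs):
    print(f"study z = {e / se:+.2f}")           # every |z| < 1.96: not significant
print(f"pooled z = {pooled / pooled_se:+.2f}")  # |z| > 1.96: jointly significant
[/code]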
While a study of the sort linked above certainly doesn't establish its findings with the confidence its statistical significance implies, "never believe studies like this" doesn't leave you safe from an evidence-handling standpoint either: even when an association is real, the data are frequently messy enough that you'd be hard-pressed to pin it down statistically. You don't want to set your bar for evidence so high that, in the event the association were real, you could never be persuaded to believe in it.
You can't make an educated guess that a combination of multiple factors is no greater than the sum of their individual effects; when you're talking about disease states, that is the OPPOSITE of what you should assume. Harm done to your body taxes its ability to deal with harm: the more you apply, whatever the source, the worse things get. Your body has only so much capacity to fight off bad things happening to it, so if you stack two bad things on top of each other, you're likely to see harm worse than the sum of their separate effects, because part of each effect is normally masked by the body's own repair mechanisms.
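To illustrate the masking argument: if you model the body as absorbing harm up to a fixed repair capacity, observed harm becomes superadditive. A deliberately toy model (the threshold and numbers are my invention, not anything from the study):

[code]
def observed_harm(raw_harm, repair_capacity=1.0):
    """Harm left visible after the body absorbs up to its repair capacity."""
    return max(0.0, raw_harm - repair_capacity)

a, b = 0.7, 0.8  # two individually tolerable insults (arbitrary units)
print(observed_harm(a) + observed_harm(b))  # 0.0: each alone is fully masked
print(observed_harm(a + b))                 # 0.5: together, worse than the sum
[/code]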
On the other hand, you could have something where the negative effects of the two things counteract each other.
Moreover (and worse), you're assuming you have independent data to begin with. Given that there is a correlation between smoking and red meat consumption, your smoking numbers are already suspect, because we've established that the two are not independent variables.
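That non-independence has a measurable cost: when two predictors are highly correlated, a regression has very little information with which to separate their effects, and the fitted coefficients swing wildly from sample to sample (variance inflation). A toy simulation, with invented correlations:

[code]
import numpy as np

rng = np.random.default_rng(2)
n, n_sims = 2_000, 200

def meat_coef_spread(corr):
    """Spread of the fitted red-meat coefficient across simulated datasets."""
    coefs = []
    for _ in range(n_sims):
        smoking = rng.normal(size=n)
        red_meat = corr * smoking + np.sqrt(1 - corr**2) * rng.normal(size=n)
        risk = 0.5 * smoking + rng.normal(size=n)  # meat has no real effect
        X = np.column_stack([np.ones(n), red_meat, smoking])
        coef, *_ = np.linalg.lstsq(X, risk, rcond=None)
        coefs.append(coef[1])
    return np.std(coefs)

for rho in (0.0, 0.9, 0.99):
    print(f"corr(smoking, meat) = {rho}: coefficient spread = {meat_coef_spread(rho):.4f}")
[/code]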
In any event, guessing is not science; it is nonsense. I could guess that the impact of the factors is greater than the sum of the parts and get a different result, and as you can see, that guess is perfectly reasonable too. That's why it is called a guess.
When we're doing analysis, guessing is bad. You guess BEFORE you do the analysis, not afterwards. All you're doing when you "guess" at the size of the impact after the fact is manipulating the data.
That’s why control groups are so important.
Regarding glucocorticosteroid use in pregnancy, there is actually quite a bit of debate over whether their use is a good thing, because corticosteroids are teratogens.
And yes, actually, it is generally better to disbelieve a true correlation than to believe a false one. Look at all the people raising malnourished children on vegan and vegetarian diets.
Well, there’s certainly no shortage of evidence that it’s unhealthy for children to be malnourished, so that amounts to defying one true correlation in favor of the possibility of another.
Supposing that there were a causative relation between red meat consumption and mortality, with a low effect size, under what circumstances would you be persuaded to believe in it?