Guys who took Lipitor every day for five years were also good about not driving into fjords and not playing golf during lightning storms and not getting shot by the rare jealous Nordic husband or whatever
That’s an interesting way to look at confounders, to say the least. How many people in the test group actually dropped out compared to the control group, and was it accounted for? Should be easy to find out if one actually wanted to do that instead of just making motivated assumptions.
It almost seems the guy doesn’t understand what randomization is for.
It almost seems the guy doesn’t understand what randomization is for.
Just because an experiment is randomized doesn’t mean that this effect doesn’t show up, just that it’s less likely to show up than in an observational study where there are potentially several incoming causal arrows to the treatment node.* When talking about the most impressive RCT from one of the more spectacular drugs of the past, it’s actually not that silly to suspect that the experimental group got lucky, and all-cause mortality is a measure that is less susceptible to luck.
*That is to say, lack of causation does not imply zero correlation in any given sample, only zero expected correlation; and zero expected correlation does not mean the expected r^2 is 0.
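To make the footnote concrete, here is a minimal simulation sketch (my own illustration, not from the comment, with arbitrarily chosen sizes): a randomized binary treatment with no causal effect on the outcome still shows a nonzero sample correlation in any given trial, so the average r is roughly 0 while the average r^2 is not.

```python
# Illustration of the footnote (hypothetical numbers): randomization gives zero
# *expected* correlation between treatment and an unaffected outcome, but any
# single sample still shows some correlation, so E[r^2] > 0.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000   # number of simulated "trials" (arbitrary choice)
n_subjects = 500    # subjects per trial (arbitrary choice)

r_values = np.empty(n_trials)
for i in range(n_trials):
    treatment = rng.integers(0, 2, size=n_subjects)  # randomized 0/1 assignment
    outcome = rng.normal(size=n_subjects)            # outcome with no treatment effect
    r_values[i] = np.corrcoef(treatment, outcome)[0, 1]

print(f"mean r   : {r_values.mean():+.4f}  (close to 0, as randomization promises)")
print(f"mean r^2 : {(r_values ** 2).mean():.4f} (not 0: any given sample gets 'lucky')")
print(f"|r| > 0.05 in {np.mean(np.abs(r_values) > 0.05):.0%} of simulated trials")
```

With 500 subjects per simulated trial the typical sample correlation is on the order of ±0.04, which is exactly the kind of within-sample "luck" a single RCT can exhibit despite randomization.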
it’s actually not that silly to suspect that the experimental group got lucky
True, but you shouldn’t simply assume that. How silly it is depends on the actual numbers. He hardly provided any evidence, and he didn’t seem to think it was about mere luck; he also claimed there was some selection effect because of dropouts.
Do you know the trial he’s talking about? I can’t find it and would like to know if he’s attacking a strawman.
It appears that he is talking about this trial, which has a follow-up performed 8 years later here. It enrolled patients in the late 90s, concerned Lipitor, was ~80% male, and included patients from the UK, Ireland, and Nordic countries.
If that is the study, it doesn’t actually agree with what Steve Sailer says. In the first study, there is a 36% reduction in (non-fatal heart attacks + fatal heart disease), but the reduction in all-cause mortality is only 13% and not statistically significant. The trial was stopped early because the treatment effect on the primary endpoint had become highly statistically significant, which means that the lack of significance in all-cause mortality isn’t necessarily surprising.
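For a sense of scale, here is a rough power sketch (the participant numbers and mortality rate below are hypothetical placeholders, not figures from the trial): when a trial is stopped early and only a few hundred deaths have accrued, a genuine 13% relative reduction in all-cause mortality will usually not reach p < 0.05.

```python
# Hypothetical power sketch: chance of a two-sided p < 0.05 result for a 13% relative
# reduction in all-cause mortality in an early-stopped trial. All inputs are assumed
# placeholder values for illustration, not data from the trial being discussed.
from math import sqrt
from scipy.stats import norm

n_per_arm = 5000           # assumed participants per arm
control_death_rate = 0.04  # assumed control-arm mortality over the truncated follow-up
rrr = 0.13                 # the reported 13% relative risk reduction

p_control = control_death_rate
p_treat = control_death_rate * (1 - rrr)

# Normal-approximation power for a two-proportion z-test at alpha = 0.05 (two-sided)
z_alpha = norm.ppf(1 - 0.05 / 2)
p_bar = (p_control + p_treat) / 2
se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_arm)
se_alt = sqrt(p_control * (1 - p_control) / n_per_arm
              + p_treat * (1 - p_treat) / n_per_arm)
effect = p_control - p_treat
power = (norm.cdf((effect - z_alpha * se_null) / se_alt)
         + norm.cdf((-effect - z_alpha * se_null) / se_alt))

print(f"expected deaths: {n_per_arm * p_control:.0f} (control) vs {n_per_arm * p_treat:.0f} (treatment)")
print(f"approximate power to detect the 13% reduction: {power:.0%}")
```

Under these made-up inputs the power comes out under 30%, so a non-significant all-cause-mortality result is roughly what you’d expect even if the 13% reduction were entirely real.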
Thanks for looking up the original study!
Thanks. Fits the description, and if that really is the trial, what he’s saying doesn’t make much sense.