Yes, but it is not a probable outcome. For it to be true, either a counterbalancing group of people would have to benefit from it, or the subgroups would have to be extremely small. The allegation, however, is that the subgroups are NOT small enough for the effect to have been hidden in this manner, which suggests there is no effect on those subgroups, since the other possibility is unlikely.
Strictly speaking, the subgroup in question only has to be one person smaller than everybody for those two statements to be compatible.
Suppose that there is no effect on 10% of the population, and a consistent effect in 90% of the population that just barely meets the p < .05 standard when measured using that subgroup. If that measurement is made using the whole population, p > .05.
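A quick sanity check of this dilution effect, under hypothetical numbers (fraction f = 0.9 of responders, noise sd of 1, and an effect size chosen so the subgroup just clears the threshold): because the population mean is only f times the subgroup mean while the sample size grows by 1/f, the whole-population z statistic shrinks by a factor of sqrt(f), which can drag a borderline result back above .05.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the normal approximation."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical numbers: 90% of n subjects respond with mean effect d,
# the other 10% show no effect; noise sd is 1 in both groups.
n, f, sigma = 10_000, 0.9, 1.0
d = 1.97 * sigma / math.sqrt(f * n)   # chosen so the subgroup just clears p < .05

z_sub = d * math.sqrt(f * n) / sigma      # test run on the responder subgroup only
z_all = (f * d) * math.sqrt(n) / sigma    # same test on everyone: effect diluted to f*d

p_sub, p_all = two_sided_p(z_sub), two_sided_p(z_all)
print(f"subgroup: z={z_sub:.2f}, p={p_sub:.4f}")  # just under .05
print(f"whole:    z={z_all:.2f}, p={p_all:.4f}")  # just over .05
```

Note that z_all = sqrt(f) * z_sub, so any subgroup result sitting close enough to the cutoff flips sides when averaged over the full population.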
95% is an arbitrarily chosen number; it is a rule of thumb. Very frequently you will see people doing further investigation into things where p > 0.10, or simply because they feel there is something interesting worth monitoring. This is, of course, a major cause of publication bias, but it is not unreasonable or irrational behavior.
If the effect is really so minor it is going to be extremely difficult to measure in the first place, especially if there is background noise.
It’s not a rule of thumb; it’s used, incorrectly, as the primary factor in making policy decisions. In this specific example, the regulatory agency made the statement “There is no evidence that artificial colorings are linked to hyperactivity” based on data linking artificial colorings to hyperactivity with p ≈ .13.
There are many other cases in medicine where a result with 0.05 < p < 0.5 is treated as evidence against the hypothesis.