Experts vs. parents
I’m reviewing the literature on the link between food dyes and hyperactivity. Studies evaluate hyperactivity largely through observations made by teachers, psychologists, and/or parents. Observations by trained professionals using defined scales are sometimes considered superior to observations by parents. However, a meta-analysis of 15 studies (Schab & Trinh 2004, “Do artificial food colors promote hyperactivity in children with hyperactive syndromes?”, Journal of Developmental and Behavioral Pediatrics 25(6): 423-434) found that:
While health professionals’ ratings (ES = 0.107) and teachers’ ratings (ES = 0.0810) are not statistically significant, parents’ ratings are (ES = 0.441).
(“Effect size” = standardized mean difference = average of (active − placebo) / standard deviation (pooled across active and placebo). Thanks to Unnamed for reminding me.)
This isn’t saying that parents reported more hyperactivity than professionals. It’s saying that, across 15 double-blind placebo experiments, the behavior observed by parents was strongly associated with whether the child received the test substance or the placebo, over four times as strongly as the behavior measured by professionals. Conclusion: Observation by parents is much more reliable than observation by trained professionals.
Schab & Trinh offered several reasons why this might be: Administration of test substance might be timed so behavior changes occur primarily at home; parents observe insomnia while teachers observe attention; parents may detect behaviors that are not listed in the DSM for ADHD; and one more—parents may be “particularly attuned to the idiosyncrasies of their own children”. Gee, do you think?
Every parent is an expert on their child’s behavior. Just not an accredited expert.
Disclaimer: At least one study has found the opposite result (Schachter et al. 2001, “How efficacious and safe is short-acting methylphenidate for the treatment of attention-deficit disorder in children and adolescents?”, Can. Med. Assoc. J. 165: 1475-1488). I haven’t read it and don’t know how strong the effect was.
It can go the other way, too. Parents can miss a lot, simply because they’re around their kids all day.
I remember being on an airplane with a mother and her toddler; the little boy was crying desperately “I wanna go to the BATHROOM!” while the mother was doing everything to calm him except taking him to the bathroom. I think that, being frazzled, she must have perceived as wordless screaming what I clearly understood as words.
An even stranger example of this: when I was about six months old, my mother made a cassette tape of my “baby talk.” Years later she listened to the tape, and I wasn’t babbling; I was speaking understandable words. Under the stress of caring for a small child, you don’t actually notice details that are obvious to a non-parent observer.
Interesting! My experience is mostly in the opposite direction, i.e. the baby saying things that only the parents understand, while for others it’s just meaningless babble (“He said he was hungry!” “No he didn’t, he said ‘muh’.”).
Interesting. On a related note, my mom tells me that when I was little, I would ask for things with what sounded like meaningless babble, and my older brother (+2.5 yrs) would “interpret” and say things like, “Oh, he wants some water!” or “He wants his toys”, and turn out to be correct!
Luckily, things are different today: people say I mumble and think I’m German :-P
As a statistical aside, I see no strong reason to believe a meta-analysis should be any more convincing than a single, large, well-designed study. In fact, by mixing the results of rigorous studies in with the unrigorous ones, you’re probably just diluting the signal-to-noise ratio.
We should feel good that some biases of different research designs will cancel each other out, and bad about our inability to weight each study optimally.
Does anyone claim it is? I thought the advantage of a meta-analysis was the cost savings of not having to do a new, large study.
Their effect size measure is “standardized mean difference” which just means that you subtract one mean from another and standardize your scale by dividing by the standard deviation (although there are a few variations, depending mainly on how you estimate the standard deviation). So, for instance, ES = .441 for parents means something like: parents whose kids received the test substance reported their kids as being about 0.4 standard deviations more hyperactive than parents whose kids received placebo.
This statistic does not have a maximum of 1. The general convention for Cohen’s d (a common version of standardized mean difference) is that an effect size of 0.2 is small, 0.5 is medium, and 0.8 is a large effect.
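As a concrete sketch of the statistic being described, here is the pooled-standard-deviation variant of Cohen’s d in Python. (Schab & Trinh may use a slightly different variant of the standardized mean difference, and the sample ratings below are made up purely for illustration.)

```python
import math

def cohens_d(active, placebo):
    """Standardized mean difference: (mean of active - mean of placebo)
    divided by the pooled standard deviation of the two groups."""
    n1, n2 = len(active), len(placebo)
    m1 = sum(active) / n1
    m2 = sum(placebo) / n2
    # Sample variances with n - 1 in the denominator.
    v1 = sum((x - m1) ** 2 for x in active) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in placebo) / (n2 - 1)
    # Pool the two variances, weighted by degrees of freedom.
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

# Hypothetical hyperactivity ratings, for illustration only:
d = cohens_d([6, 7, 5, 8, 6], [5, 5, 4, 6, 5])
```

Note that nothing bounds this at 1: if the group means differ by more than one pooled standard deviation, d exceeds 1.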
Right! Thanks. Actually I read that yesterday, but forgot it today.
Interesting, but the pessimist in me is noting “even a stopped clock is right twice a day”.
For every one study like this, there’s hundreds of parents yelling that they noticed their kids developed autism right after getting vaccinated, or that they’re sure the power lines near their house are affecting their kids’ growth, or some other such nonsense.
I think you need to be far less general; not every parent is an expert on their child’s behavior, let alone their child’s health.
You need to distinguish between absolute observations (child is hyperactive) and relative observations (child was more hyperactive today than yesterday). The meta-analysis cited above uses relative observations. That’s why I wrote,
Also, this was across 15 studies, not one study.
Also, blinding of the parents seems to be pretty key here.
Very much agreed. One particular worry is that the substances in question are dyes. If someone is observant enough to notice changes in visible color in excretions, or skin color, the blinding is gone.
On another note, the topic of study almost begs for a bad joke about low-lying excited states...
No one has ever reported observing any such change from food dyes at these dosages. The dosage levels are comparable to the amount of artificial coloring that most Americans eat every day.
Huh, I had meant that as a point in favor of the studies! Which I suppose it still is, but it hadn’t occurred to me that unblinding might occur in that way.
The fact that it’s 15 small studies rather than one large one actually works against it. Since the studies were conducted differently, the control is shaky.
This is certainly interesting, but I think that we’re jumping the gun by saying that “observation by parents is much more reliable than observation by trained professionals” based on this. For one thing, only 60 of the 219 subjects were observed by healthcare professionals at all, which may account for some of the difference in correlation.
Also, out of the three possible explanations proposed, you’ve chosen one: “parents may be ‘particularly attuned to the idiosyncrasies of their own children’”, seemingly because it is the one that matches your personal experience best. This is a textbook example of confirmation bias. We should not conclude that parents are experts on observing their children’s hyperactivity without investigating the possible explanations.
Let’s examine one of the other hypotheses:
I’d like to make it clear that this does not necessarily mean that parents are better at detecting changes in their children’s behaviour than professionals. It could be that they observe changes in behaviour that are unrelated to ADHD or hyperactivity, and that those changes lead them to the conclusion that their children are not part of the control, at which point the placebo effect becomes relevant. Another possibility is clearly stated in the article:
This may suggest that the modern criteria for ADHD are insufficient, but it doesn’t tell us anything about parents’ knowledge or observational skills in general. If the difference can be accounted for by additional criteria used by parents, then professionals may get similar results if they use the same criteria. In summary, if the trouble with professional observation is that ADHD is poorly or incompletely defined, then the discrepancy will change or disappear when we examine behaviours that are unrelated to that particular disorder.
Finally, this is a meta-study of one particular sort of behaviour. To conclude that parents are generally better at observing their children’s behaviour than experts, we would need to examine a wide range of behaviours. We can form a hypothesis based on this study, but much more investigation is necessary if we are to come to any conclusions.
My mom is definitely better than doctors at telling what’s wrong with me.
Last time I was wondering why I was a bit more tired than normal, she had figured out (and didn’t tell me) that I was getting sick. Sure enough, a few hours later I felt sick and threw up. This kind of thing happens enough to know that she really can tell when something isn’t quite right with me.
When I was a little kid, my parents had to stop feeding me red pancakes because they made me hyper. I’ll ask whether they came to that conclusion without having previously heard of the hypothesis.
Some of the trials are extremely small, so the sample standard deviation (an estimator of the population stddev) may be unusually small or large, giving a very uncertain ES. 12 of the 15 studies included parent (P) ratings; 6 included teacher (T) ratings; 3 included health-professional (H) ratings. I didn’t see how the pooled ES estimates were generated (probably by pooling the populations and ratings), but I’m very interested in the ES of different raters on the same population (so that any question about the true population stddev is moot).
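For what it’s worth, the standard way to combine per-study effect sizes in a fixed-effect meta-analysis is inverse-variance weighting, which automatically downweights the small, noisy trials. Whether Schab & Trinh pooled their ES estimates this way I can’t say; a minimal sketch, with made-up numbers, looks like:

```python
def pooled_es(effects, variances):
    """Fixed-effect meta-analytic pooling: weight each study's effect
    size by the inverse of its variance, so noisier (typically smaller)
    studies count for less. Returns the pooled ES and its standard error."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    es = sum(w * e for w, e in zip(weights, effects)) / total
    se = (1.0 / total) ** 0.5  # standard error of the pooled estimate
    return es, se

# Hypothetical per-study (ES, variance) pairs, for illustration only:
es, se = pooled_es([0.5, 0.3, 0.6], [0.04, 0.02, 0.09])
```

A random-effects model would add a between-study variance term, but the inverse-variance idea is the same.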
However, Fig. 2 on p 427 of http://www.cspinet.org/new/pdf/schab.pdf definitely shows that parents’ ratings get a higher ES on those studies where there are other ratings (on the same population) to compare against. Since all the studies are blinded so that the raters don’t know if the kid got a placebo or not, this does effectively prove your interpretation (that either the hyperactivity occurs mostly at home, or that, more likely, the parents can judge a behavioral change in their kid better than a trained stranger who gets a few minutes of observation only).
Teachers spend about as much time with the children as parents do—although their attention during that time is divided among many children.
Did the experts just measure the absolute hyperactivity after treatment, or the difference in hyperactivity before and after treatment? I think measuring the difference would be the better choice to compare to parents’ judgement. (And does it matter, in practice, whether it is the same expert who measures the child before treatment and after treatment?)
They measured absolute hyperactivity over each time interval. When the study was finished, it was unblinded, so only then did they know which children were treated with the test substance during which intervals. They then subtracted the scores while on placebo from the scores while on substance. In short: They measured the difference.
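A minimal sketch of that unblind-then-subtract step, for a single child in a crossover design (the interval ratings and assignment labels below are hypothetical):

```python
def crossover_effect(scores, assignment):
    """After unblinding, average the hyperactivity ratings from the
    active-substance intervals and subtract the average from the
    placebo intervals, giving a within-child difference score."""
    active = [s for s, a in zip(scores, assignment) if a == "active"]
    placebo = [s for s, a in zip(scores, assignment) if a == "placebo"]
    return sum(active) / len(active) - sum(placebo) / len(placebo)

# Hypothetical per-interval ratings for one child:
diff = crossover_effect([7, 5, 8, 4],
                        ["active", "placebo", "active", "placebo"])
```

Because each child serves as his own control, between-child differences in baseline hyperactivity drop out of this statistic.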
It does matter who judges; standard practice is to find the correlation of all experts on a sample, and verify it’s high, before the study begins.
I think this is a bit of a microcosm of what’s wrong with the way we deal with health.
Shouldn’t we take a big step back and realize that we all eat too much sugar and don’t get enough mental and physical stimulation? No wonder kids are ‘hyperactive’, or people have trouble sleeping, or can’t maintain energy through the day.
There are thousands of different chemicals in everything all around us, but there’s some form of glucose or corn syrup or ‘natural flavours’ in practically everything that isn’t picked off a plant.
Anyway, my point is I just ignore these kinds of studies. They remind me of the ’50s.
I don’t understand your point. Are you saying that our healthcare system throws pills at problems that don’t need pills, that our food industry puts profits over the health of its customers, or that our society is taking power away from parents just when they need it most?
Or (more likely) something else entirely?