It’s not just personal. In general, taking medical advice from reviews in areas you’re not expert in, especially when you haven’t read the review, is probably not great, and as described elsewhere this really doesn’t feel like the sort of thing that would work (compare to magnesium for headache / low energy, which does seem biologically somewhat plausible, although I’m still somewhat iffy on it). Add to that: personal anecdotes are the sort of thing traditional Chinese medicine and energy therapy have by the bucketload, and in my experience meta-analyses like this regularly fall apart. 5-HTTLPR had a stronger meta-analysis, with p=0.0001 and four hundred studies, most of which were positive, and then it just got executed by a large, well-done study (https://slatestarcodex.com/2019/05/07/5-httlpr-a-pointed-review/); priming, with its thousands of studies and meta-analyses, was also fake. I don’t mean this insultingly, because I’ve done this in the past too … a lot … smh … but if you didn’t look closely at the study, it’s probably one of the tens of thousands of bad studies and bad meta-analyses that get published because so many people want to do them. So I’m pretty confident that the current evidence base for zinc shouldn’t be enough to conclude anything, and I’m neutral on whether it works. But it probably doesn’t, because most treatments don’t.
I looked for preregistered trials and found this from 2020: https://bmjopen.bmj.com/content/10/1/e031662.abstract. Preregistration massively helps with publication bias, post-hoc analysis choices, weak study design, and many of the other tricks one can pull. It found:
Results There was no difference in the recovery rate between zinc and placebo participants during the 10-day follow-up (rate ratio for zinc vs placebo=0.68, 95% CI 0.42 to 1.08; p=0.10). The recovery rate for the two groups was similar during the 5-day intervention, but for 2 days after the end of zinc/placebo use, the zinc participants recovered significantly slower compared with the placebo participants (p=0.003). In the zinc group, 37% did not report adverse effects, the corresponding proportion being 69% in the placebo group.
Conclusions A commercially available zinc acetate lozenge was not effective in treating the common cold when instructed to be used for 5 days after the first symptoms. Taste has been a common problem in previous zinc lozenge trials, but a third of zinc participants did not complain of any adverse effects. More research is needed to evaluate the characteristics of zinc lozenges that may be clinically efficacious before zinc lozenges can be widely promoted for common cold treatment.
Which I find much, much more persuasive. Also note the side-effect rate, and the (admittedly a subgroup analysis, so probably meaningless) slower recovery among those taking zinc.
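Incidentally, the reported interval and p-value hang together: rate ratios are analysed on the log scale, so you can back out the approximate z-statistic from the CI. A quick sanity-check sketch (the three numbers are the ones quoted above; everything else is just the standard normal approximation, not from the paper):

```python
import math

# Reported in the trial: rate ratio 0.68, 95% CI 0.42 to 1.08, p = 0.10.
rr, lo, hi = 0.68, 0.42, 1.08

# Rate ratios are analysed on the log scale, where the CI is symmetric,
# so the standard error is the CI width divided by 2 * 1.96.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(rr) / se

# Two-sided p-value from the normal approximation.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p:.2f}")  # close to the reported p = 0.10
```

The CI straddling 1.0 and the p-value near 0.10 are two views of the same thing: the trial did not detect an effect.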
You’re right: pre-registration is very useful, and that study does make me less confident in zinc and in this advice. I appreciate the addition.
Something that may not be obvious is that when I say “here’s this Cochrane review/meta-analysis”, I expect readers on LW and my own blog to be well-calibrated on how strong that evidence is (not very). This was not an attempt to make a conclusive case for zinc, but to suggest a particular use case to people I knew were already sold on the general benefits (plausibly wrongly, and I think they would all appreciate it if someone did a more comprehensive review).
Cochrane reviews in particular are actually, like, literally, the gold standard for medical reviews. They are notorious for finding that “there is weak or no evidence”. So a positive finding from them is not something to calibrate as “not very” strong, which is why I was genuinely shocked to read it, and, on looking closer, correctly found it wasn’t solid.
I note that that study used lozenges with orange flavoring, which according to the podcast I think we’d expect not to work. (Presumably it contains citric acid, at minimum.)
… What? Read the study. It mentions the potential interaction with citric acid, and avoids it:
The zinc lozenge was a commercially available zinc acetate lozenge with 13 mg elemental zinc per lozenge (University Pharmacy, Helsinki, Finland). The lozenge weighed 0.9 g and had a diameter of 13mm. The lozenges contained isomaltulose, sorbitol, magnesium stearate, orange and peppermint flavours and sucralose. The instruction of the commercial package for patients with common cold is to dissolve slowly six lozenges per day in the mouth, which
Now, this study does use mannitol. And mannitol is one of the things mentioned by some studies as blocking zinc’s action. So maybe we’re in the clear! But wait: four of the eight positive studies in the meta-analysis have acidic flavoring (“lemon and lime oils”, “peppermint oil”, “tannic acid”, etc.). Even worse, glycine is also mentioned as a zinc binder that hurts its action, yet many of the studies it cites use glycine!
Also, one study this meta-analysis cites concludes that “zinc gluconate should not be used to treat cough due to high side effects”.
That said, it’s not clear at all. Maybe a particular combination of excipients did randomly manage to make some trials fail and others succeed! But that’s what you’d expect to happen in a case of no effect but publication bias and excuses. Which seems more likely?
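To make the “no effect plus publication bias” scenario concrete, here is a minimal simulation (all of the numbers are made up for illustration): every trial draws from a true effect of zero, significant-positive results always get published, the rest mostly stay in the file drawer, and a naive pooled estimate over the published studies comes out looking “significant” anyway.

```python
import math
import random

random.seed(0)

TRUE_EFFECT = 0.0  # the treatment does nothing
SE = 0.2           # standard error of each small trial's estimate
N_TRIALS = 2000

published = []
for _ in range(N_TRIALS):
    estimate = random.gauss(TRUE_EFFECT, SE)
    significant_positive = estimate / SE > 1.96
    # Significant-positive results always get published; everything
    # else only makes it out of the file drawer 10% of the time.
    if significant_positive or random.random() < 0.10:
        published.append(estimate)

# Naive fixed-effect pooling of the published studies (equal weights,
# since every trial here has the same standard error).
pooled = sum(published) / len(published)
pooled_se = SE / math.sqrt(len(published))
print(f"published {len(published)} of {N_TRIALS} trials")
print(f"naive pooled effect: {pooled:.3f} (SE {pooled_se:.3f}, "
      f"z = {pooled / pooled_se:.1f})")
```

The pooled estimate ends up several standard errors above zero even though the true effect is exactly zero, which is the shape of result a meta-analysis of a biased literature produces.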
I confess that reading these studies in any depth makes my eyes glaze over. But I did see the first mention of citric acid in the paper (out of two, according to Ctrl-F) before writing my comment, and it doesn’t say they avoid the interaction. The mentions are:
Several randomised trials have been carried out to test whether zinc lozenges might have treatment effects on the common cold but the findings are mixed. Eight studies have reported significant benefits of zinc lozenges [1–9], whereas 12 studies did not find benefit [9–15]; one report published six [9] and another two separate trials [10], and one study was published in two separate reports [2, 3]. Zinc ion can tightly bind to a number of substances, such as citric acid, potentially preventing the release of free zinc ions from lozenges in the oropharyngeal region. Therefore, the formulation of a zinc lozenge is crucial in determining whether a particular lozenge is efficacious. Shortcomings in the formulations and low doses could explain most of the negative findings [16–19].
Eby commented that the majority of zinc lozenges on the US market in 2008 were expected to be ineffective against colds [16]. Most of the zinc lozenges he surveyed contained citric acid, which binds zinc ions, and many lozenges had such low doses of zinc that they were unlikely to have any pharmacological effects. Thus, although there is evidence from several trials that properly composed zinc lozenges may shorten the duration of colds [1–9], a patient with ordinary common cold cannot easily materialise the benefit by zinc lozenges available from a drugstore, a problem further supported by our findings on the 5-day zinc acetate treatment.
Neither of these says that they used a lozenge without citric acid. I assumed that citric acid was part of the orange flavoring.
Now, this study does use mannitol. And mannitol is one of the things mentioned by some studies as blocking zinc’s action.
(Not important, but I assume you mean “sorbitol” here.)
That said, it’s not clear at all. Maybe a particular combination of excipients did randomly manage to make some trials fail and others succeed! But that’s what you’d expect to happen in a case of no effect but publication bias and excuses. Which seems more likely?
Yeah, it’s a big confusing mess. Given the state of the literature, if I tried the lozenges and they didn’t work, I wouldn’t be too surprised. Nor would I be shocked if I tried some lozenges with citric acid and they did work. But:
The literature doesn’t seem to seriously engage with the podcast’s “here are a bunch of ways you can fuck up a zinc lozenge, but don’t do any of these and it’ll be fine” hypothesis? Maybe I just haven’t looked closely, but the studies generally seem to be comparing one brand of lozenge against placebo, not one brand against another. The meta-analyses seem to group studies by salt or dose, not by “does the podcast’s hypothesis predict this will work”. Which isn’t necessarily unreasonable, but I think it does mean that a big confusing mess isn’t as big a red flag as it might be otherwise.
This is something anyone can test for themselves fairly cheaply. I’ve tested it for myself, and I’m fairly confident it works for me. (At least the “these lozenges work” part, not the “these other lozenges don’t work” part.) This seems like a reasonable way to proceed, in the face of a big mess of evidence and a cheap way to test for oneself.
I looked for orange flavor online, especially in bulk on Alibaba and flavor-supply sites, as well as in smaller consumer packages, and in basically every case it meant orange essential oils, which I don’t think contain citric acid, since citric acid is hydrophilic (https://en.m.wikipedia.org/wiki/Orange_oil). And if the oil did contribute citric acid, so would the lemon oil. No clue exactly, though.
You didn’t look closely, correct. The literature does engage with the hypothesis that “mannitol, sorbitol, citric acid, glycine, and many others may cause past studies to not find an effect”. At least five of the eight studies referenced in the meta-analysis, the meta-analysis itself, the other paper, and many other studies make statements to that effect and are designed to avoid it. It’s still a mess, though.
As for testing yourself—unless you do a blinded, well done, long term, controlled self experiment a la Gwern, it’s so easy to make a mistake that it probably is meaningless. I have seen hundreds of “this worked for me!” claims on the internet, with many different methods of confirmation and levels of confidence; most of them ended up being very wrong. Here, Gwern takes magnesium, sees signs of benefit, then does several detailed and careful self-experiments, and finds it causes significant harm: https://www.gwern.net/nootropics/Magnesium There are so many other examples.
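For concreteness, the Gwern-style version is roughly: pre-randomize a blinded schedule, record an outcome every day, and analyze with something like a permutation test at the end. A hypothetical sketch (the daily scores here are simulated with no real effect, purely to show the analysis; in a real self-experiment they would be your recorded measurements):

```python
import random
import statistics

random.seed(1)

# Hypothetical blinded self-experiment: 60 days, coin-flip each day
# between active lozenge and placebo (a helper pre-packs identical-looking
# doses so you can't tell which is which), recording a daily score.
days = 60
schedule = [random.choice(["active", "placebo"]) for _ in range(days)]

# Simulated illustrative data: scores ~ N(5, 1) regardless of condition,
# i.e. the true effect is zero.
scores = [random.gauss(5.0, 1.0) for _ in range(days)]

def mean_diff(labels, values):
    active = [v for lab, v in zip(labels, values) if lab == "active"]
    placebo = [v for lab, v in zip(labels, values) if lab == "placebo"]
    return statistics.mean(active) - statistics.mean(placebo)

observed = mean_diff(schedule, scores)

# Permutation test: how often does a shuffled schedule produce a
# difference at least as large in magnitude as the observed one?
n_perm = 2000
shuffled = schedule[:]
count = 0
for _ in range(n_perm):
    random.shuffle(shuffled)
    if abs(mean_diff(shuffled, scores)) >= abs(observed):
        count += 1
p = count / n_perm
print(f"observed difference {observed:+.2f}, permutation p = {p:.2f}")
```

The blinding is what protects against expectation effects, and the permutation test is what protects against reading noise as signal; neither is available to “I took it and felt better”.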
Fair enough re: flavoring. (I did have a quick look myself earlier, but didn’t find anything. Thanks for being more thorough.)
The literature does engage with the hypothesis that “mannitol, sorbitol, citric acid, glycine, and many others may cause past studies to not find an effect”. At least five of the eight studies referenced in the meta-analysis, the meta-analysis itself, the other paper, and many other studies make statements to that effect and are designed to avoid it.
To clarify, do they do “we tested a zinc lozenge with some of these things and a zinc lozenge without (and maybe also a placebo)”? That’s the kind of thing I meant by taking it seriously. If we want to compare condition A and condition B, my sense is we can learn a lot less from “study on condition A, study by different group on condition B” than a study comparing them directly.
As for testing yourself—unless you do a blinded, well done, long term, controlled self experiment a la Gwern, it’s so easy to make a mistake that it probably is meaningless.
Eh, honestly I just don’t think this is true in this case. There are ways I could have made mistakes. It’s possible that:
The first cold I took them for just got better really fast naturally.
And then I didn’t catch one for a year and a half, despite twice thinking I was coming down with one.
And then when I did get one, it had fast onset and unusually light symptoms. Or I just didn’t remember what it was like to have a cold by then.
Or maybe I got more colds than that and completely forgot about them. I acknowledge that this kind of thing is possible. I don’t think it’s super likely. I definitely don’t think it’s likely enough that I should consider my experience meaningless.
Also, what’s going on here? The evidentiary standard is much lower for pro-zinc evidence: this (very incorrect) speculation, which could have been corrected by reading more than two paragraphs of the article, was for whatever reason much better received than my comment. Several times you guys have clearly referenced studies without reading them, and your comment got 10 karma to the 1 on mine, which actually read the papers!
To clarify voting: users have different vote strengths based on karma, and the ability to give a “strong” or “weak” vote. My own weak vote is worth two points, and my strong vote is worth 8. If you hover over a comment’s score, you can see how many people voted on it. (More info.)
I have two comments currently at +10, “I note that that study...” and “it will wash off over time...”. Both of them have only two votes, my own +2 vote and someone else’s +8. I’m curious myself why someone (or possibly two different people) thought these comments were worth a strong upvote. But they both have only one person voting for them other than my own default vote.
(For myself, I’ve upvoted the article, and philip_b’s “I have common cold...”, and have my own default votes. I initially downvoted your “I don’t think it is worth...” but retracted that vote after you edited. I’ve made no other votes in this thread currently.)
Do you have a tool you use to grab pre-registered studies in particular?
Uh. Google Scholar? You just search the word “preregistered”.
Is there some material I can read on the case for zinc? On this site?
Oh, I can’t hover. That makes sense.