This is based on self-reported survey data, which will again exclude asymptomatic cases. If you use the ⅓ figure and assume no long COVID among the asymptomatic, that becomes 1.8% of 25-45-year-olds with COVID developing long COVID that affects their daily life, which is well within the Lizardman Constant.
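A quick sanity check on that arithmetic (a minimal sketch: the 2.8% input is the survey figure mentioned later in this thread, and treating it as covering only symptomatic cases is an assumption):

```python
# Dilute the self-reported rate by the assumed asymptomatic fraction.
reported_rate = 0.028          # assumed: survey rate among symptomatic cases
asymptomatic_fraction = 1 / 3  # the "1/3 figure"; assume no long COVID here

rate_all_infections = reported_rate * (1 - asymptomatic_fraction)
print(f"{rate_all_infections:.2%}")  # 1.87%, i.e. the ~1.8% quoted above
```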
On the other hand, medicine is notoriously bad at measuring persistent, low-level, amorphous-yet-real effects. The Lizardman Constant doesn’t mean prevalences below 4% don’t exist; it means they’re impossible to measure using naive tools.
1.8% seems similar to the lower risk difference estimates between cases and controls that I’ve seen (EDIT: 1.8% is the absolute risk, not a difference with controls), and I would guess the Lizardman Constant point you make here might not apply to risk differences between cases and controls, unless you want to claim that the constant differs between the two groups. I don’t think that’s entirely implausible, although I’d lean against it accounting for most of these risk differences; selection effects or inadequate controls seem like the most likely ways to explain away most of the long COVID risk difference estimates as actually nothing.
I’m guessing it’s so low because of the “affects their daily life” qualifier (so risk difference estimates are capturing less severe or less frequent effects, which you filtered out), or maybe just noise, unrepresentative samples in some studies, etc. This should give us a rough upper bound on the risk difference.
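A minimal sketch of why a constant background of spurious “yes” answers would drop out of a case-control risk difference (illustrative numbers only, not taken from any of the studies discussed here):

```python
# If a fixed fraction of respondents answer "yes" regardless of the truth
# (the Lizardman Constant), both arms are inflated equally, so the risk
# difference is unaffected, unless the constant differs between groups.
# (Rates treated as simply additive for illustration.)
lizardman = 0.04          # spurious "yes" rate, assumed equal in both arms
true_rate_cases = 0.05    # hypothetical true rate among COVID cases
true_rate_controls = 0.0  # hypothetical true rate among controls

observed_cases = true_rate_cases + lizardman        # 0.09
observed_controls = true_rate_controls + lizardman  # 0.04

# The difference recovers the true effect even though both absolute
# numbers sit near or below the constant itself.
print(round(observed_cases - observed_controls, 2))  # 0.05
```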
For this metareview it’s the absolute percentage, not a comparison. I’m interested in the other studies you think show a similar number relative to a control group.
Whoops, sorry, I didn’t mean to suggest otherwise.
Hmm, I only remember this one with a similar number and controls, off the top of my head (I might have been thinking of similar numbers for something else):
https://www.nature.com/articles/s41586-021-03553-9 (I’m focusing on Positive cases in figure 3, who are not hospitalized; I think this paper has gotten relatively more attention in the community; see this comment)
Some others are discussed here, mostly with higher estimates, like 5x-10x higher, though:
https://www.medrxiv.org/content/10.1101/2021.03.18.21253633v2.full-text, also discussed here and in a reply here (healthcare workers)
https://jamanetwork.com/journals/jama/fullarticle/2778528 (healthcare workers)
https://www.ons.gov.uk/peoplepopulationandcommunity/healthandsocialcare/conditionsanddiseases/bulletins/prevalenceofongoingsymptomsfollowingcoronaviruscovid19infectionintheuk/1april2021 / https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1007511/S1327_Short_Long_COVID_report.pdf, two links for the same study on a UK sample; see specifically the quotes below and column ONS-CIS [3][6] in Table 1 in the second link:
Among a sample of over 20,000 study participants who tested positive for COVID-19 between 26 April 2020 and 6 March 2021, 13.7% continued to experience symptoms for at least 12 weeks. This was eight times higher than in a control group of participants who are unlikely to have had COVID-19, suggesting that the prevalence of ongoing symptoms following coronavirus infection is higher than in the general population.
Of study participants who tested positive for COVID-19, symptom prevalence at 12 weeks post-infection was higher for female participants (14.7%) than male participants (12.7%) and was highest among those aged 25 to 34 years (18.2%).
Prospective versus retrospective data collection: Prospective data collection on ongoing symptoms on a daily basis was uniquely performed in the COVID Symptoms Study, which had the lowest estimates of proportions of cases affected (2.3% for >12 weeks symptoms).[1] Unpublished analysis of the same individuals asked retrospectively about symptoms using the same questionnaire as in CONVALESCENCE cohorts (inclusive method) revealed very similar proportions with symptoms lasting >12 weeks, ranging from 6% of COVID+ cases in men aged 20-30 to 16% in women aged 40-50. The COVID Symptoms Study did not count symptoms re-emerging after a week of reporting no symptoms, but although relapse rates were higher in the case population (16.0%) versus non-COVID controls (8.4%; P < 0.0005), this does not account for the difference in reporting rates and suggests that recall bias may operate in retrospective self-reports of symptom duration. The ONS study of persistent symptoms in confirmed infections was based on prospective data [3] (symptoms experienced in the last week, collected each week for the month from enrolment and then each month for up to a year), whereas symptom durations for the population prevalence estimate [6] are based on retrospective reporting of the initial (confirmed or suspected) infection.
In contrast to the COVID Symptoms Study’s 2.3%, the ONS study compared persistent symptoms lasting 12+ weeks using a survival analysis approach between confirmed COVID-19 cases and age- and sex-matched non-COVID controls, with estimates of 13.7% in cases and just 1.7% in controls.
Another I got from Scott’s article (which summarizes risk estimates from several studies and discusses biases): https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2776560
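As a quick arithmetic check on the ONS figures quoted above (13.7% in cases vs. 1.7% in controls):

```python
cases, controls = 0.137, 0.017  # ONS: symptoms lasting 12+ weeks

print(round(cases / controls, 1))  # 8.1, matching "eight times higher"
print(f"{cases - controls:.1%}")   # 12.0% absolute risk difference
```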
The 1.8% number comes from your own calculations, though, right? Shouldn’t we be comparing the Lizardman Constant with the reported percentages, rather than this calculated number?
In this case, that might be 2.8%. But I don’t know what the methodology of the survey was. If they just asked a bunch of random people and got them to self-report whether they had COVID, maybe we should actually use the percentage of people who claimed to have long COVID among everyone asked, which could be lower than 1.8%.
Of course, all of these numbers are smaller than the Lizardman Constant anyway.
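A toy illustration of that denominator point (all numbers hypothetical, chosen only to show how widening the denominator shrinks the percentage):

```python
# Hypothetical survey: the same count of long COVID reports looks much
# smaller when divided by everyone asked instead of by claimed cases.
n_surveyed = 10_000
n_claimed_covid = 2_000     # self-reported having had COVID
n_claimed_long_covid = 56   # self-reported long COVID affecting daily life

print(f"{n_claimed_long_covid / n_claimed_covid:.1%}")  # 2.8% of claimed cases
print(f"{n_claimed_long_covid / n_surveyed:.2%}")       # 0.56% of everyone asked
```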
You’re right; I’ll make this correction in the main article and then get LW to pull it over.