Previously after Pfizer’s 11/5 interim report on Paxlovid in high risk patients, you said an 89%, 95% CI: [64, 97] (n=774) was certain enough to conclude efficacy but not certain enough to stop the trial because the CI was uncomfortably wide. After their 12/14 final report on Paxlovid in high risk patients, you said an 89%, 95% CI: [72, 96] (n=1379) looked good. At that time they also shared the interim report on Paxlovid in standard risk patients, showing 70%, 95% CI: [-8, 92] (n=854). Now after their 6/14 final report on Paxlovid in standard risk patients, you’re saying 51%, 95% CI: [-44, 83] (n=1145) is just an issue of sample size?
I’ve really appreciated all of your analysis and curating for us here, but I feel like we’re missing a lot about your internal model of evidence for these kinds of studies.
That looks quite straightforward to me. When you’re looking for evidence on the reduction in number of severe cases, you need a larger sample in a population where the proportion of severe cases is very small to begin with.
The important n here is not really the total number of people enrolled in the study, but the number of people who would suffer severe effects. It’s just that you don’t know which ones those will be (and if you did they’d be called “high risk” anyway), so you need to also enroll a much larger number of people to get enough who end up having severe effects in the control group and hopefully not in the treatment group.
Their June 14 press release says that they ended up with only 10 people out of 569 in the control group who developed severe disease, which is obviously not enough to draw conclusions from and they should have known that in advance. Even if Paxlovid had been magically 100% effective, the study was barely large enough to give reasonable confidence that it had any effect.
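To make that concrete, here is a rough sketch of the confidence interval arithmetic using the standard log relative-risk approximation. The control-arm numbers (10 of 569) are from the press release; the treatment-arm counts are my assumptions for illustration, chosen to roughly reproduce the reported 51%, [-44, 83].

```python
# A rough check of how wide the efficacy CI is with so few events.
# The control arm (10 severe cases out of 569) is from the June 14 press
# release; the treatment-arm counts below are my assumptions for illustration.
from math import exp, log, sqrt

def efficacy_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Approximate 95% CI for efficacy = 1 - relative risk, via the usual
    log relative-risk normal approximation (0.5 continuity correction if
    an arm has zero events)."""
    if events_t == 0 or events_c == 0:
        events_t, n_t = events_t + 0.5, n_t + 0.5
        events_c, n_c = events_c + 0.5, n_c + 0.5
    rr = (events_t / n_t) / (events_c / n_c)
    se = sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo, hi = log(rr) - z * se, log(rr) + z * se
    return 1 - rr, 1 - exp(hi), 1 - exp(lo)  # estimate, lower, upper

# ~5 treatment-arm events in a similar-sized arm roughly reproduces the
# reported 51%, 95% CI: [-44, 83]:
print(efficacy_ci(5, 576, 10, 569))
# Even a hypothetical perfect drug (0 treatment-arm events) would leave a
# lower bound of only ~20% efficacy:
print(efficacy_ci(0, 576, 10, 569))
```

In other words, with only about ten events in the control arm, even a perfect result could not have produced a tight interval.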
The problem is that the risk of severe disease in most of the population is low enough to make studies expensive, but high enough to have huge health and economic impacts, since the annual number of cases is of similar magnitude to the total population. Halving COVID hospitalizations in the standard risk population would save on the order of a hundred billion dollars per year in net medical costs alone, without even considering quality and duration of life and productivity.
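For what it’s worth, here is the kind of back-of-envelope arithmetic behind that order-of-magnitude claim; every input below is an assumption of mine, not a figure from the trial or the press release.

```python
# Purely illustrative sanity check of the savings claim; none of these inputs
# come from the trial or any other source. They are round numbers consistent
# with the premise that annual cases are comparable to the population size.
cases_per_year = 3e8              # assumed: annual cases ~ US population
p_hospitalized = 0.015            # assumed: severe/hospitalized fraction
cost_per_hospitalization = 3e4    # assumed: direct medical cost, dollars

total = cases_per_year * p_hospitalized * cost_per_hospitalization
print(f"annual hospitalization cost ~ ${total / 1e9:.0f}B; "
      f"halving it saves ~ ${total / 2e9:.0f}B")
# -> roughly $135B total, roughly $68B saved: the same order of magnitude
```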
So yes, this should absolutely be followed up with larger studies. Even if larger studies somehow cost more than a billion dollars, they would on net be obviously worthwhile. They would either verify that the drug reduces risk by some decent percentage, in which case everyone should take Paxlovid when they contract COVID, or reveal that it doesn’t work well enough to be worth taking for most people, letting us avoid the cost of unnecessary medication and its side effects. In practice they would reveal additional information that would also be useful.
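To put a number on “larger”: a minimal sketch of the standard two-proportion sample-size calculation, assuming the true effect is a 50% reduction and the control-arm severe rate stays near what was seen in this trial (both assumptions of mine, not anything Pfizer has published).

```python
# Rough sample-size sketch for a follow-up trial: how many standard-risk
# patients per arm to detect a true 50% reduction in severe disease with
# 80% power at alpha = 0.05, if the control-arm rate is about 10/569 (~1.8%)?
# Standard two-proportion formula; all inputs are assumptions for illustration.
from math import ceil, sqrt

def n_per_arm(p_control, efficacy, z_alpha=1.96, z_beta=0.8416):
    p_treat = p_control * (1 - efficacy)
    p_bar = (p_control + p_treat) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p_control * (1 - p_control)
                           + p_treat * (1 - p_treat))) ** 2
    return ceil(num / (p_control - p_treat) ** 2)

print(n_per_arm(10 / 569, 0.5))  # ~2,600 per arm, i.e. >5,000 enrolled,
                                 # versus the ~1,100 actually enrolled
```

Under those assumptions you would want roughly five times the enrollment this trial had, which is large but nowhere near billion-dollar territory.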