Everybody’s been talking about Paxlovid, and how ridiculous it is to stop the trial because the drug is so effective, yet not approve it immediately. I want to at least float an alternative hypothesis, which I don’t think is very probable at this point, but which does strike me as at least plausible (my gut estimate: ~20% probability) based on not-very-much investigation.
Early stopping is a pretty standard p-hacking technique. I start out planning to collect 100 data points, but if I manage to get a significant p-value with only 30 data points, then I just stop there. (Indeed, it looks like the Paxlovid study only had 30 actual data points, i.e. people hospitalized.) Rather than only getting “significance” if all 100 data points together are significant, I can declare “significance” if the p-value drops below the threshold at any interim look. That gives me a lot more choices in the garden of forking counterfactual paths.
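To make that concrete, here’s a minimal simulation of my own (a sketch, not Pfizer’s actual analysis): a “drug” with zero true effect, where the experimenter peeks at the p-value every 10 subjects and stops at the first “significant” result. The false-positive rate comes out well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_trial(n_max=100, peek_every=10, alpha=0.05):
    """One simulated null trial (the drug does nothing): peek at the
    p-value every `peek_every` subjects and stop at 'significance'."""
    treatment = rng.normal(0.0, 1.0, n_max)  # no true effect
    control = rng.normal(0.0, 1.0, n_max)
    for n in range(peek_every, n_max + 1, peek_every):
        _, p = stats.ttest_ind(treatment[:n], control[:n])
        if p < alpha:
            return True   # stopped early, declared a win
    return False

n_trials = 10_000
hits = sum(peeking_trial() for _ in range(n_trials))
print(f"False-positive rate with peeking: {hits / n_trials:.3f}")
# Nominal alpha is 0.05, but optional stopping pushes this well above 0.05.
```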
Now, success rates on most clinical trials are not very high. (They vary a lot by area: most areas are around 15-25%, cancer is far and away the worst at below 4%, and vaccines are the best at over 30%.) So I’d expect that p-hacking accounts for a pretty large chunk of approved drugs, which means pharma companies are heavily selected for things like finding-excuses-to-halt-good-seeming-trials-early.
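To see why that selection pressure matters, here’s a back-of-envelope calculation with illustrative numbers of my own (not taken from any trial data):

```python
# Assumed numbers for illustration: 20% of trialed drugs truly work,
# peeking inflates the false-positive rate from 0.05 to ~0.20, and
# power on real effects is ~0.80.
p_works, fpr, power = 0.20, 0.20, 0.80
false_share = (1 - p_works) * fpr / ((1 - p_works) * fpr + p_works * power)
print(f"{false_share:.0%} of 'successful' trials are false positives")
# ~50% under these assumptions, vs ~20% with an honest alpha of 0.05.
```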
That said, the Paxlovid trial was stopped after a pre-planned interim analysis, which means the stopping criteria/p-values were calculated with multiple-testing correction built in, using sequential analysis.
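For contrast with the peeking simulation above, here’s a minimal sketch of what a corrected group-sequential design looks like. I’m using a Pocock-style constant boundary for simplicity (real trials typically use alpha-spending approaches like O’Brien-Fleming, and I haven’t checked which one Pfizer used); the per-look threshold of ~0.0158 for five looks at overall alpha 0.05 is the standard tabulated value. Stopping early at a pre-planned boundary like this isn’t p-hacking, because the multiple looks are paid for up front.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pocock-style design: one fixed nominal threshold at every look,
# chosen so the *overall* type-I error stays at 0.05. For 5 equally
# spaced looks, the tabulated per-look threshold is roughly p < 0.0158.
POCOCK_NOMINAL_P = 0.0158

def group_sequential_trial(n_max=100, looks=5):
    treatment = rng.normal(0.0, 1.0, n_max)  # null: drug does nothing
    control = rng.normal(0.0, 1.0, n_max)
    for k in range(1, looks + 1):
        n = n_max * k // looks
        _, p = stats.ttest_ind(treatment[:n], control[:n])
        if p < POCOCK_NOMINAL_P:
            return True   # crossed a pre-planned stopping boundary
    return False

n_trials = 10_000
hits = sum(group_sequential_trial() for _ in range(n_trials))
print(f"False-positive rate with corrected boundaries: {hits / n_trials:.3f}")
# Lands near the intended 0.05 overall, despite sometimes stopping early.
```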