I’d love to hear from someone at FDA on this. I do not work for FDA, but here’s my guess.
1. They need to study the data they’ve been given. Although FDA will have been in communication with the drug company and seen their data all along, they probably have a rule that they can only consider data Officially Submitted as part of an Official Application. In a complex organization, most likely lots of people get involved in a big decision, including managers, lawyers, and political appointees who don’t necessarily have a lot of value to add on safety/efficacy questions but who do have enough organizational clout to ensure they don’t get bypassed even in an emergency. I’d guess for sufficiently big decisions, conversations happen with Congressional and White House staffers too.
2. Legal requirements for notice. FDA might have to give the public notice that they are considering the approval. The FDA also has an advisory committee on vaccines and is required to give notice of those meetings so that the public has a chance to attend. There are probably emergency bypasses to these notice requirements, but no one is particularly incentivized to take the risk of departing from normal process.
This comment intended as description, not justification, of the existing practices.
What kind of study do they need to do besides the statistical tests that are already done? How do you fill weeks with studying the data?
I addressed this in #1 above. Even if they’ve already seen data, they’re starting from scratch as far as evaluating the Officially Submitted Data goes. Plus the number of different people and groups involved.
No, it’s not clear why you need weeks for a bunch of people to check whether someone did their statistics right. Analysing clinical trial data isn’t rocket science.
Even if it were rocket science, couldn’t you always just hire more rocket scientists to get it done more quickly?
I think that’s explicitly not true, especially for things of the form “come to consensus on a controversial topic.”
What bothers me about this is that, in principle, the FDA had already set its thresholds for acceptance before the experiment was done. They have already defined mathematically what level of risk they deem “safe” and what level of effectiveness is “effective enough”. In theory, the only new information the actual data can contain is the possibility of
a. a math error or deliberate mistake or
b. falsification of results
(this is why many have argued that truly independent organizations should run clinical trials)
...it seems like you could build a machine learning tool similar to those used for credit card fraud detection, and hunt for (a) and (b) in about a minute of IRL time...
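To make the “the thresholds are already fixed in advance” point concrete, here is a minimal sketch of what checking the primary efficacy endpoint against a pre-registered bar could look like. Everything in it is an assumption for illustration: the case counts are made up (though of the same rough magnitude as the published Phase 3 readouts), and the 50% / 30% bars are loosely based on FDA’s 2020 guidance rather than taken from any actual submission.

```python
# Minimal sketch: check the primary efficacy endpoint against a
# pre-registered acceptance threshold.  All numbers are illustrative,
# not actual submission data; the 50% / 30% bars are assumptions.
import math
from statistics import NormalDist


def efficacy_meets_threshold(cases_vax, n_vax, cases_placebo, n_placebo,
                             point_bar=0.50, lower_bar=0.30, alpha=0.05):
    """Vaccine efficacy (1 - relative risk) with a normal-approximation CI."""
    rr = (cases_vax / n_vax) / (cases_placebo / n_placebo)
    ve = 1.0 - rr                                    # point estimate

    # Delta-method standard error for log(RR), then back-transform
    se_log_rr = math.sqrt(1 / cases_vax - 1 / n_vax
                          + 1 / cases_placebo - 1 / n_placebo)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ve_lower = 1.0 - math.exp(math.log(rr) + z * se_log_rr)

    return ve, ve_lower, (ve >= point_bar and ve_lower >= lower_bar)


# Illustrative counts: ~18,000 per arm, 8 cases vaccinated vs 162 placebo
ve, ve_lower, ok = efficacy_meets_threshold(8, 18_000, 162, 18_000)
print(f"VE = {ve:.1%}, CI lower bound = {ve_lower:.1%}, passes = {ok}")
```

The point is not that the review actually reduces to this; it’s that once the acceptance criteria are fixed before unblinding, the primary endpoint itself is a mechanical computation, not weeks of open-ended analysis.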
I think we have a misalignment of incentives. The FDA staff is a few hundred people, and they are most likely taking this time mainly to avoid risk to their personal reputations and careers. But this extra time has the consequence of potentially killing tens of thousands of people.
If a lot of the work is making sure that the stats / methods check out, that’s a local validity issue that scales with more people, right?
Hmm, maybe I am importing too much pessimism from unrelated domains (coding and rationalist philosophy). But I do have a pretty strong prior that adding more people to a cognitive process doesn’t necessarily make it faster.
I don’t think that’s the obstacle. Lots of different people are looking at the application from different angles, and no one seems to have the sense of urgency we might think is warranted.
I agree. I’m saying that if, hypothetically, this were difficult to check and the FDA couldn’t have checked the intermediate data and if… etc, then you could still hire more people.
Instead of hiring more rocket scientists, I would think it would make sense to invest in software that automates the process.
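For what it’s worth, here is a hedged sketch of what the “credit-card-fraud style” screen suggested above might look like: score per-site summary statistics with an off-the-shelf anomaly detector and flag outlier sites for human review. The features and data are simulated inventions of mine; a real screen would have to be designed around the trial’s actual data model.

```python
# Toy illustration of automated anomaly screening across trial sites,
# analogous to credit card fraud detection.  Data and features are simulated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
n_sites = 150

# Per-site summary features (all simulated): enrollment, dropout rate,
# adverse-event report rate, duplicate-record fraction
features = np.column_stack([
    rng.normal(240, 30, n_sites),
    rng.normal(0.05, 0.01, n_sites),
    rng.normal(0.12, 0.02, n_sites),
    rng.normal(0.01, 0.003, n_sites),
])
# One site with implausibly "clean" data, as a stand-in for fabricated records
features[0] = [240.0, 0.0, 0.0, 0.0]

model = IsolationForest(contamination=0.02, random_state=0).fit(features)
flagged = np.where(model.predict(features) == -1)[0]   # -1 = anomalous
print("sites flagged for manual review:", flagged)
```

Whether something like this would actually catch sophisticated falsification is a separate question; the narrow point is that the mechanical parts of the screen don’t obviously need weeks of staff time.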