What bothers me about this is that, in principle, the FDA had thresholds for acceptance before the experiment was ever run. They have already defined mathematically what level of risk they deem “safe” and what level of effectiveness counts as “effective enough”. In theory, the only new things the actual data can contain are the possibility of
a. a math error or deliberate mistake or
b. falsification of results
(this is why many have argued that truly independent organizations should run clinical trials)
...it seems like you could build a machine learning tool similar to the ones used for credit card fraud detection, and hunt for a and b in about a minute of IRL time...
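To make that concrete, here is a minimal sketch of the kind of outlier screen such a tool might start with, using scikit-learn's IsolationForest on per-site trial summaries. Every feature name and number below is hypothetical, not taken from any real submission.

```python
# Minimal sketch: screen per-site trial summaries for anomalies, the way
# card networks screen transactions. All features and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-site summaries: enrollment, dropout rate, adverse-event
# rate, mean outcome. A fabricated site often looks "too clean".
sites = rng.normal(loc=[200, 0.10, 0.05, 1.0],
                   scale=[30, 0.03, 0.02, 0.2],
                   size=(50, 4))
sites[-1] = [200, 0.0, 0.0, 1.6]   # implausibly tidy site, planted as a test case

model = IsolationForest(contamination=0.05, random_state=0).fit(sites)
scores = model.decision_function(sites)   # lower score = more anomalous
flagged = np.argsort(scores)[:3]          # candidates to send for human audit
print("sites flagged for manual audit:", flagged)
```

This wouldn't replace review, but it shows how cheap a first-pass hunt for (a) and (b) could be.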
I think we have a misalignment of incentives. The FDA staff is a few hundred people, and they are most likely taking this time mainly to avoid risk to their personal reputations and careers. But that extra time has the consequence of potentially killing tens of thousands of people.
Hmm, maybe I am importing too much pessimism from unrelated domains (coding and rationalist philosophy). But I do have a pretty strong prior that adding more people to a cognitive process doesn’t necessarily make it faster.
I don’t think that’s the obstacle. Lots of different people are looking at the application from different angles, and no one seems to have the sense of urgency we might think is warranted.
I agree. I’m saying that if, hypothetically, this were difficult to check and the FDA couldn’t have checked the intermediate data and if… etc, then you could still hire more people.
even if it were rocket science, you could always just hire more rocket scientists to get it done more quickly?
I think that’s explicitly not true, especially for things of the form “come to consensus on a controversial topic.”
If a lot of the work is making sure that the stats / methods check out, that’s a local validity issue that scales with more people, right?
Instead of hiring more rocket scientists, I would think it would make sense to invest in software that automates the process.
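As one concrete example of what that software could be doing, here is a minimal sketch that just recomputes a reported efficacy figure from the raw case counts and flags any mismatch. The counts, the tolerance, and the helper name check_efficacy are all made up for illustration.

```python
# Minimal sketch of the "automate the checking" idea: recompute a reported
# efficacy figure from raw case counts and flag any mismatch. Numbers and
# tolerance are hypothetical.
def check_efficacy(cases_vaccine, n_vaccine, cases_placebo, n_placebo,
                   reported_efficacy, tol=0.005):
    risk_v = cases_vaccine / n_vaccine
    risk_p = cases_placebo / n_placebo
    recomputed = 1.0 - risk_v / risk_p   # efficacy = 1 - relative risk
    if abs(recomputed - reported_efficacy) > tol:
        return f"MISMATCH: reported {reported_efficacy:.3f}, recomputed {recomputed:.3f}"
    return f"OK: efficacy checks out at {recomputed:.3f}"

# Hypothetical submission: 8 cases among 20,000 vaccinated, 80 among 20,000 placebo.
print(check_efficacy(8, 20_000, 80, 20_000, reported_efficacy=0.90))
```

Checks like this handle the "math error" case (a); the anomaly screen above is aimed at the "falsification" case (b). Neither requires more reviewers, just software that runs the moment the data arrives.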