The way I think about this is more “sometimes it makes sense to lower your standards for trying things”. I.e., here the upside is incredibly large: if trying the thing works, it means a significant improvement to quality of life. OTOH, the downside is relatively small: some non-crazy amount of money and/or weeks of unwanted side effects. With that upside/downside, I think even a 1% chance of something working is plenty.
When I reviewed Vitamin D, I said I was about 75% sure it didn’t work against COVID. When I reviewed ivermectin, I said I was about 90% sure.
Another way of looking at this is that I must think there’s a 25% chance Vitamin D works, and a 10% chance ivermectin does. Both substances are generally safe with few side effects. So (as many commenters brought up) there’s a Pascal’s Wager-like argument that someone with COVID should take both. The downside is some mild inconvenience and cost (both drugs together probably cost $20 for a week-long course). The upside is a well-below-50% but still pretty substantial probability that it could save their life.
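The arithmetic behind that wager can be made explicit. A minimal sketch: the 25%/10% credences and the $20 cost come from the text above, but the absolute risk reduction "if it works" and the dollar value placed on surviving are made-up placeholders, there only to show the shape of the calculation.

```python
# Rough expected-value sketch of the Pascalian argument. Only the 25%/10%
# credences and the $20 cost are from the text; the risk reduction and the
# dollar value of a life are illustrative placeholders.
p_vitd_works = 0.25               # credence that Vitamin D does anything
p_iver_works = 0.10               # credence that ivermectin does anything
risk_cut_if_works = 0.005         # assumed absolute reduction in death risk (placeholder)
value_of_life = 10_000_000        # placeholder dollar value of survival
cost = 20                         # week-long course of both drugs, per the text

# Treat the two credences as roughly additive chances of getting the benefit.
ev = (p_vitd_works + p_iver_works) * risk_cut_if_works * value_of_life - cost
print(f"expected value of taking both: ${ev:,.0f}")
```

Even with deliberately modest placeholder numbers, the expected value comes out thousands of dollars positive, which is the whole force of the wager: the cost term is negligible next to even a heavily discounted benefit.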
The broader problem here is really one of predictive validity, and only accidentally or as a special case one of ‘you should try more things’ (the value comes from testing in a human, not from the more-things per se). Appropriately, a new Scannell paper just came out, further discussing the logic of pipelines/screening/selection.
Why did they discover so many amazing drugs back in the 1930s–1950s? Why do we discover so few now? Why is the late Shulgin so influential? Well, it’s because they were ‘testing’ all of the drug candidates (where the n is extremely, extremely small by contemporary standards of various kinds of in vitro or in silico screening) in humans. (The secret ingredient to Soylent Green, useful COVID vaccine trials, and good drugs alike? It’s humans. Always has been.)
The key point being any humans, not necessarily yourself. When you feed random wacky chemicals to humans and one of them tells you it did something funny, this has low predictive validity, as it is bare anecdote afflicted by all sorts of biases… but it still has far more predictive power than poking some mutant cells in a petri dish. That’s why you can look impressive, brute-force thousands of those petri dishes, and write papers, and in the end still get back fewer useful drugs than the quack irresponsibly dosing patients at random because ‘inflammation’ & listening to their complaints.
So as a general principle, you want to push your samples as far ‘up the stack’ as possible, and be willing to trade off a lot of samples to move up a bit. Better randomized than correlational; better in vitro than in silico; better in vivo than in vitro; better mice than worms; better dogs than mice; better random humans than dogs; better you than random humans...
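The “trade many samples for a bit of validity” claim can be illustrated with a toy simulation (my sketch, not the Scannell paper’s actual model; all the numbers are illustrative). Model each candidate’s assay score as correlating with its true clinical value at some fixed validity r, where the error is compound-specific and so cannot be averaged away by re-running the assay; then compare picking the top scorer from a huge low-validity screen against a tiny high-validity one.

```python
# Toy model of the predictive-validity point: a small pool screened with a
# high-validity assay beats a far larger pool screened with a low-validity
# one. Pool sizes and r values are illustrative placeholders.
import random
import statistics

random.seed(0)

def select_best(pool_size, validity, trials=500):
    """Mean true quality of the candidate scoring highest on an assay whose
    correlation with true quality is `validity` (error is per-compound, so
    replicating the assay cannot wash it out)."""
    picked = []
    for _ in range(trials):
        best_score, best_q = float("-inf"), 0.0
        for _ in range(pool_size):
            q = random.gauss(0, 1)    # true clinical value of the candidate
            err = random.gauss(0, 1)  # compound-specific assay error
            score = validity * q + (1 - validity ** 2) ** 0.5 * err
            if score > best_score:
                best_score, best_q = score, q
        picked.append(best_q)
    return statistics.mean(picked)

low = select_best(pool_size=1000, validity=0.3)   # brute-forced petri dishes
high = select_best(pool_size=10, validity=0.9)    # a handful of humans
print(f"mega-screen pick: {low:.2f}; small human-sample pick: {high:.2f}")
```

The winner of the 1000-candidate low-validity screen ends up with a worse expected true quality than the winner among just 10 candidates measured at high validity: low validity caps how much selection can ever buy you, no matter how many petri dishes you brute-force.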
Related: Pascalian Medicine