The intervention of using a larger quantity of the thing that has some effect is mind-bogglingly likely to not be tried at all, at least by anyone reporting the results and doing studies.
The first case I did work on for MetaMed had exactly this phenomenon: There was a well-known treatment that helped some but didn’t cure the condition in question, which helped both with symptoms and biomarkers (and where the biomarkers correlated very well with symptoms), and seemed to have a linear dose-response relationship, but we could find no evidence that anyone had ever tried using enough to get biomarkers down to where they were in healthy people. We also couldn’t find any sign that doing so would pose any risks.
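To make the extrapolation concrete: under a linear dose-response assumption, the dose needed to bring a biomarker down to healthy levels follows from simple arithmetic (all numbers below are hypothetical, not from the actual case):

```python
# Hypothetical illustration of extrapolating dose under an assumed
# linear dose-response relationship.
baseline = 100.0      # patient's biomarker with no treatment (arbitrary units)
d1, m1 = 10.0, 70.0   # observed: dose 10 brings the biomarker down to 70
healthy = 20.0        # biomarker level seen in healthy people

# Linear model: biomarker(d) = baseline - slope * d
slope = (baseline - m1) / d1                 # biomarker reduction per unit dose
dose_needed = (baseline - healthy) / slope   # dose that reaches the healthy level

print(round(dose_needed, 2))  # ~2.7x the studied dose in this toy example
```

The point being that if the relationship really is linear and no safety ceiling is known, the "enough" dose is a one-line calculation away, yet nobody in the literature appeared to have run it.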
So if there’s a situation in which it seems like brute force would solve your problem if only you’d use enough, but no one ever seems to think of or try using enough, the likeliest explanation is simply that no one has tried it, not that it was tried and found wanting.
I do think this generalizes, but even the object-level point is valuable. Simply doing more of the thing that’s working is an often neglected option, despite seeming like the most obvious possible move.
I was skeptical when I read this yesterday that a medical system with so much money and so many lives on the line could miss something so obvious.
Then today I ran across a JAMA article from FDA researchers saying the same thing:
“Failure to determine the most appropriate dose for clinical use was a major reason for nonapproval. Dosing is frequently decided early in drug development, and optimization of doses to maximize efficacy and minimize toxicity is seldom formally explored in phase 3 studies. Adaptive trial designs and other strategies (such as treating phase 3 trial participants with a randomized sequence of different doses) may help to optimize doses.”
Basically, ‘stop making us reject your drugs for stupid reasons like not trying to optimize the dose’.
I encountered this while reading about an obscure estradiol ester, estradiol undecylate, used for hormone replacement therapy and for treating prostate cancer. It’s very useful because it has an unusually long half-life, but it was discontinued. I had to reread the article to be sure I understood: the standard dose, chosen arbitrarily in the first trials, was hundreds of times larger than necessary. This led to massive estrogen overdoses and severe side effects that killed many patients through cardiovascular complications, and yet these insane doses remained typical for decades and may well have caused the drug’s discontinuation.
I think the main cause is that people who view themselves as solving a problem are often using the procedure “look at the current pattern and try to find issues with it.” A process that complements this well is “look at what’s worked historically, and do more of it.”
Some examples I wrote about a while back: lesswrong.com/lw/iro/systematic_lucky_breaks/
Can you give the condition and treatment?