The official study is neither the beginning nor the end of knowledge. If people were being really competent and thorough, the study could have collected all kinds of additional patient metadata.
The patient’s body is made of atoms that move according to physical laws. It is basically a machine. With the correct mechanistic modeling (possibly very, very complicated) grounded in various possible measurements (some simple, some maybe more complicated), all motions of the atoms of the body are potentially subject to scientific mastery and intentional control.
From patient to patient, there are commonalities. Places where things work the same. This allows shortcuts… transfer of predictions from one patient to another.
Since the body operates as it does for physical reasons, if a patient had a unique arrangement of atoms, that could produce a unique medical situation…
...and yet the unique medical situation will still obey the laws of physics and chemistry and biochemistry and so on. From such models, with lots of data, one could still hypothetically be very very confident even about how to treat a VERY unique organism.
Veterinarians tend to be better at first-principles medicine than mere human doctors. There are fewer vet jobs, and fewer vet schools, and helping animals has more of a prestige halo among undergrads than helping humans, and the school applications are more competitive, and the domain itself is vastly larger, so more generic reasoning tends to be taught and learned and used.
If a single human doctor was smart and competent and thorough, they could have calibrated hunches about what tests the doctors who ran the “1% and 2% study” COULD have performed.
If a single doctor was smart and competent and thorough, they could look at the study that said “overall in humans in general in a large group: side effect X was 1% in controls and 2% with the real drug” AND they could sequence the entire genome of the patient and make predictions from this sequence data. The two kinds of data could potentially be reconciled and used together for the specific patient.
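To make that reconciliation concrete, here is a minimal Python sketch of one way it could work: treat the trial’s 2% as a prior for this patient, then update it with patient-specific evidence via Bayes’ rule. The genetic marker and every number below are hypothetical placeholders I made up for illustration, not real pharmacogenomic data.

```python
# A minimal sketch of reconciling a trial-level base rate with
# patient-specific evidence via Bayes' rule. The marker, its
# likelihood ratio, and all numbers are hypothetical placeholders.

def posterior_risk(base_rate: float, likelihood_ratio: float) -> float:
    """Update a population base rate with patient-specific evidence.

    likelihood_ratio = P(evidence | reaction) / P(evidence | no reaction)
    """
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Trial-level risk of side effect X on the drug: 2%.
trial_risk = 0.02

# Hypothetical: the patient carries a variant seen 10x more often in
# patients who had the reaction than in those who did not...
print(posterior_risk(trial_risk, likelihood_ratio=10.0))  # ~0.17
# ...or a variant seen 10x LESS often in reactors.
print(posterior_risk(trial_risk, likelihood_ratio=0.1))   # ~0.002
```

The point is not these particular numbers, but that the population-level 2% and the patient-level sequence data are inputs to one calculation, not rivals.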
BUT, if a single doctor was smart and competent and thorough, they could ALSO (perhaps) have direct access to the list of all allergic reactions the patient is capable of generating because they directly sampled the antibodies in the patient, and now have a computerized report of that entire dataset and what it means.
Heck, with AlphaFold in the pipeline now, an efficacy study could hypothetically have sequenced every STUDY patient, and every patient’s unique gene sequences and unique drug-target folding could be predicted.
A study output might not be a binary “effective or not” verdict but rather a large computer model that can take any plausible human biodata package and say which reactions (good, bad, or interesting) the drug would have for the specific person, with 99.9% confidence one way or the other.
Drugs aren’t magic essences. Their “non-100%” efficacy rates are not ontologically immutable facts. Our current “it might work, it might not” summaries of drug effects… are caused partly by our tolerance for ignorance, rather than only by the drug’s intrinsically random behavior.
We can model a drug as a magic fetish the patient puts in their mouth, and which sometimes works or sometimes doesn’t, as a brute fact, characterized only in terms of central tendencies…
...but this modeling approach is based on our limits, which are not set in stone.
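A toy simulation can make this concrete. In the minimal sketch below (everything invented for illustration), a drug looks “50% effective” as a brute marginal fact, yet becomes perfectly predictable once a hidden covariate, here a made-up genotype, is actually measured.

```python
# Toy model: apparent randomness in drug response that is really
# just an unmeasured covariate. All details are invented.
import random

random.seed(0)
patients = [{"genotype": random.choice(["A", "B"])} for _ in range(10_000)]

def responds(patient):
    # Deterministic mechanism: only genotype-A patients respond.
    return patient["genotype"] == "A"

marginal = sum(responds(p) for p in patients) / len(patients)
print(f"ignoring genotype: ~{marginal:.0%} response rate")  # ~50%

for g in ("A", "B"):
    group = [p for p in patients if p["genotype"] == g]
    rate = sum(responds(p) for p in group) / len(group)
    print(f"genotype {g}: {rate:.0%}")  # 100% and 0%
```

Real biology is vastly messier than a one-bit genotype, of course; the sketch only shows how “it works half the time” can be a fact about our measurements rather than about the drug.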
Science is not over. Our doctors are still basically like witch doctors compared to the theoretical limits imposed by the laws of physics.
The current barriers to good medical treatment are strongly related to how much time and effort it takes to talk to people and follow up and measure things… and thus they are related to wealth, and thus economics, and thus economic regulatory systems.
Our government and universities are bad, and so our medical regulations are bad, and so our medicine is bad. It is not against the laws of physics for medicine to be better than this.
Concretely: do you have a physical/scientific hunch here? It kinda seems like you’re advocating “2% because that’s what the study said”?
What is the maximally structurally plausible probability of an allergic reaction, as a complication for that patient, in response to treatment: ~2% or ~11% or ~20%?
“The patient’s body is made of atoms that move according to physical laws.”
Yes, but making treatment decisions based on pathophysiological theories goes counter to what evidence-based medicine is about. The idea of this method is that it’s going to be used by doctors practicing evidence-based medicine.
You can argue that evidence-based medicine is a flawed paradigm and doctors should instead practice physical-law-based medicine (or whatever you want to call it), but that’s a more general discussion than the one about this particular heuristic.
This comment touches on the central tension between the current paradigm in medicine, i.e. “evidence-based medicine”, and an alternative, intuitively appealing approach based on a biological understanding of the mechanism of disease.
In evidence-based medicine, decisions are based on statistical analysis of randomized trials; what matters is whether we can be confident that the medication probabilistically improved outcomes when tested on humans as a unit. We don’t really care too much about the mechanism behind the causal effect, just whether we can be sure it is real.
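For the running 1%-vs-2% example, the EBM-style question looks roughly like the sketch below: could the observed difference plausibly be chance? The group sizes are an assumption of mine (the discussion above only gives the rates), and I use a standard two-proportion z-test as a stand-in for whatever analysis the real study did.

```python
# A minimal sketch of the EBM-style significance question for the
# running example: 1% of controls vs. 2% of treated patients show
# side effect X. Group sizes below are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical trial: 5,000 patients per arm, 50 vs. 100 events.
z, p = two_proportion_z(50, 5000, 100, 5000)
print(f"z = {z:.2f}, p = {p:.5f}")  # p well below 0.05 at this size
```

Note what the test does and does not say: it licenses confidence that the drug causes the side effect in the population, while staying silent about which patients, and why.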
The exaggerated strawman alternative approach to EBM would be Star Trek medicine, where the ship’s doctor can reliably scan an alien’s biology, determine which molecule is needed to correct the pathology, synthesize that molecule and administer it as treatment.
If we had a complete understanding of what Nancy Cartwright calls “the nomological machine”, Star Trek medicine would work in theory. However, you are going to need a very complete, accurate and detailed map of the human body to make it work. Given the complexity of the human body, I think we are very far from being able to do this in practice.
There have been many cases in recent history where doctors believed they understood biology well enough to predict the consequences, yet were proved wrong by randomized trials. See for example Vinay Prasad’s book “Ending Medical Reversal”.
My personal view is that we are very far from being able to ground clinical decisions in mechanistic knowledge instead of randomized trials. Trying to do so would probably be dangerous given the current state of biological understanding. However, we can probably improve on naive evidence-based medicine by carving out a role for mechanistic knowledge to complement data analysis. Mechanisms seem particularly important for reasoning correctly about extrapolation; the purpose of my research program is to clarify one way such mechanisms can be used. It doesn’t always work perfectly, but I am not aware of any examples where an alternative approach works better.