Moldbug on Cancer (and medicine in general)

I’m going to be a heretic and argue that the problem with cancer research is institutional, not biological. The biological problem is clearly very hard, but the institutional problem is impossible.
You might or might not be familiar with the term “OODA loop” (observe, orient, decide, act), originally developed by fighter pilots.
If the war on cancer was a dogfight, you’d need an order from the President every time you wanted to adjust your ailerons. Your OODA loop is 10-20 years long. If you’re in an F-16 with Sidewinder missiles, and I’m in a Wright Flyer with a Colt .45, I’m still going to kill you under these conditions. Cancer is not (usually) a Wright Flyer with a Colt .45.
Lots of programmers are reading this. Here’s an example of what life as a programmer would be like if you had to work with a 10-year OODA loop. You write an OS, complete with documentation and test suites, on paper. 10 years later, the code is finally typed in and you see if the test suites run. If there’s a bug—your OS failed! Restart the loop. I think it’s pretty obvious that given these institutional constraints, we’d still be running CP/M. Oncology is still running CP/M.
Most cancer researchers are not even in the loop, really. For one thing, 90% of your research is irreproducible.
Even when the science is reproducible, your cell lines and mouse models are crap and bear little or no resemblance to real tumors. You know this, of course. But you keep on banging your heads against the wall.
What would a tight OODA loop look like? Imagine I’m Steve Jobs, with infinite money, and I have cancer. Everyone’s cancer is its own disease (if not several), so the researchers are fighting one disease (or several), instead of an infinite family of diseases. They are not trying to cure pancreatic cancer—they are trying to cure Steveoma.
Second, they operate with no rules. They can find an exploit in Steve’s cancer genome on Wednesday, design a molecule to hack it on Thursday, synthesize it on Friday and start titrating it into the patient on Saturday. Pharmacokinetics? Just keep doubling the dose until the patient feels side effects. Hey, it worked for Alexander Shulgin.
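A toy sketch of that dose-doubling heuristic, which is just an exponential search for the maximum tolerated dose (every number below is invented; this illustrates the search procedure, not a dosing protocol):

```python
# Toy sketch of "keep doubling the dose until the patient feels side
# effects". All numbers are invented; this is an illustration, not a
# dosing protocol.
from typing import Callable

def titrate(dose_mg: float,
            tolerates: Callable[[float], bool],
            max_dose_mg: float = 1024.0) -> float:
    """Double the dose until side effects appear (or a safety cap is hit),
    then return the last tolerated dose."""
    last_tolerated = 0.0
    while dose_mg <= max_dose_mg and tolerates(dose_mg):
        last_tolerated = dose_mg
        dose_mg *= 2
    return last_tolerated

# Hypothetical patient who starts feeling side effects above 300 mg:
print(titrate(1.0, tolerates=lambda d: d < 300))  # -> 256.0
```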
Moreover, Steve isn’t on just one drug. He’s got thirty or forty teams attacking every vulnerability, theoretical or practical, that may exist in his cancer cells. Why shouldn’t he be attacking his cancer in 30 ways at the same time? He’s a billionaire, after all.
Not everyone is a billionaire. But if you do this for enough billionaires, the common elements in the problem will start repeating and the researchers will learn a repertoire of common hacks. Eventually, the unusual becomes usual—and cheap. This is the way all technology is developed.
Of course, someone might screw up and a patient might die. You’ll note that a lot of cancer patients die anyway. Steve got a lot, but he didn’t get this—why not? It would be illegal, that’s why. Sounds like something the Nazis would do. Nazis! In our hospitals! Oh noes!
The entire thrust of our medical regulatory system, from the Flexner Report to today, is the belief that it’s better for 1000 patients to die of neglect, than 1 from quackery. Until this irrational fear of quack medicine is cured, there will be no real progress in the field.
The entire process we call “drug development” is an attempt to gain six-sigma confidence that we are not practicing quack medicine. Especially for cancer, do we need all these sigmas? And are we obtaining them in an efficient way? I can’t imagine how anyone would even begin to argue the point.
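To put a number on it: reading “six-sigma confidence” literally as a one-tailed normal tail (an assumption; it is a figure of speech here, not a trial statistic), the error rate being bought is about one in a billion. A quick standard-library check:

```python
# One-tailed standard normal tail probability at k sigmas:
# P(Z > k) = 0.5 * erfc(k / sqrt(2)).
from math import erfc, sqrt

for k in (2, 3, 4, 5, 6):
    print(f"{k} sigma: P = {0.5 * erfc(k / sqrt(2)):.2e}")
# 6 sigma -> ~9.9e-10: about one error per billion, which is the scale
# of certainty whose cost is being questioned above.
```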
What is the source of this phobia? It is ultimately a political fear—based on public opinion. Its root is in the morbid, irrational fear of poisoning. But it also has a political constituency—all the people it employs. In that respect it has much in common with other “anti-industries,” like the software patent mafia.
He is right of course.
Edit: I didn’t think I would have to clarify this, but the “He is right of course” comment was referring to the bolded text.
Following JoshuaZ, I also don’t think this remark should stand unchallenged.
Why shouldn’t he be attacking his cancer in 30 ways at the same time?
Off-target effects, which are difficult to predict even for a single, well-understood drug. Also, the CYPs (cytochrome P450 enzymes) in your liver can turn a safe drug into a much scarier metabolite. And the drugs themselves can also modify the activity of the CYPs. Combined with dynamic dosing (“keep doubling the dose until the patient feels side effects”), the blood levels of the 30 drugs will be all over the place.
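To make “all over the place” concrete, here is a minimal one-compartment sketch; every parameter is invented, and the whole CYP induction/inhibition story is compressed into one random clearance factor:

```python
# Minimal one-compartment pharmacokinetic sketch: repeated dosing with
# first-order elimination. All parameters are invented for illustration.
import random

def level_after(dose: float, k_elim: float,
                hours: int = 96, interval: int = 12) -> float:
    """Drug level after `hours`, dosing every `interval` hours,
    with fractional elimination `k_elim` per hour."""
    level = 0.0
    for t in range(hours):
        if t % interval == 0:
            level += dose
        level *= 1 - k_elim  # first-order decay
    return level

random.seed(0)
# Same nominal drug and schedule, but clearance shifted by interactions
# with the other drugs (a random factor standing in for CYP effects):
for trial in range(5):
    k = 0.10 * random.uniform(0.3, 3.0)  # inhibited .. induced clearance
    print(f"trial {trial}: level after 4 days ~ {level_after(100.0, k):.1f}")
```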
But if you do this for enough billionaires, the common elements in the problem will start repeating and the researchers will learn a repertoire of common hacks.
What are the common elements present when the patient has been dosed with varying amounts of 30 different drugs? If the cancer is cured, how should the credit be split among the teams? If the patient dies, who gets the blame?
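The combinatorics alone are against you, before you even model dose levels (a back-of-the-envelope sketch; the only input is the 30 drugs mentioned above):

```python
# Why credit assignment across 30 simultaneous drugs is hopeless:
from math import comb

n_drugs = 30
print(f"possible active subsets: {2 ** n_drugs:,}")   # 1,073,741,824
print(f"pairwise interactions:   {comb(n_drugs, 2)}") # 435
# A single patient yields one outcome; it cannot discriminate among
# a billion hypotheses about which combination did the work.
```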
The anti-quackery property of the current research regime is not just to prevent patients from being hoodwinked. It’s epistemic hygiene to keep the researchers from fooling themselves into thinking they’re helping when they’re really doing nothing or causing active harm.
I was talking about the bolded part (though I happen to approve of the text that follows it too) when I said that he is of course right. Our dealings with medicine seem tainted by an irrational risk aversion.

Fair enough. Only my last point sort of engages with the bolded text.

I think there are much sounder ways to buy fewer undertreatment deaths from those extra sigmas of confidence than the plan that Moldbug proposes.
I’m wondering if the comparison with a dogfight is fair, though. With only the conservative treatment, Steve has months or years to live, while a single wrong move kills him quickly. Dogfights are the opposite: the conservative approach (the absence of a single right move) has a significant chance of doing you in.

In other words, the expected lifetimes of doing something and doing nothing are reversed between a fight and a treatment.
A doctor in Australia wanted to use the product our company makes to try in vivo treatment of cancer, and we were unable to let him because of how insanely liable we would be. The high cost of a GMP facility (which would in no actual way improve the product) means it’s unlikely ever to happen.
The ethical principles of experimenting on human beings are pretty subtle. It’s not just about protecting from quackery, though he is right that there is a legacy of Nuremberg involved. Read, for example, the guidelines that Institutional Review Boards, which approve scientific research, must follow.
* Respect for persons involves a recognition of the personal dignity and autonomy of individuals, and special protection of those persons with diminished autonomy.
* Beneficence entails an obligation to protect persons from harm by maximizing anticipated benefits and minimizing possible risks of harm.
* Justice requires that the benefits and burdens of research be distributed fairly.
The most relevant principle here is “beneficence”. Unless the experimenter can claim to be in equipoise about which of two procedures will be more beneficial, they’re obligated to use the presumed better option (which means no randomization). You can get away with more in pursuit of practice than in pursuit of research, but practice is deliberately restricted to prevent obtaining generalizable knowledge.
Roughly put, society has decided that it would rather the only experiments we perform be ones where there’s no appreciable possibility of harm to the participants, than allow that it is sometimes necessary for the progress of science that noble volunteers try things we can’t be sure are good, and which might be expected to be a bit worse, so that society can learn when they turn out to be better, or when they teach us things that suggest the better option. In a more rational society, everyone would have to accept that their treatment might not be the best possible for them (according to our current state of ignorance); such a society would in turn require that treatments be designed to lead to generalizable knowledge for the future.
Shocking! Why, who’d expect it from such a pillar of society!
(Sure, he’s 110% right in this isolated argument, and the medical industry is indeed a blatant, painfully obvious mafia. But one could make a bit of a case against this by pointing to disproportionately risky outliers: e.g. what if we try to make AIDS attack itself but instead make it air- and waterborne, and then it slips away from the lab? What if we protect the AI industry from intrusive regulation early on when it’s still safe, then suddenly it’s an arms race of several UFAI projects, each hoping to be a little less bad than the others?)
What if we protect the AI industry from intrusive regulation early on when it’s still safe, then suddenly it’s an arms race of several UFAI projects, each hoping to be a little less bad than the others?
imagines US congress trying to legislate friendliness or regulate AI safety

ಠ_ಠ
Weirder, absurder stuff has happened—and certainly has been speculated about. In fact, Stanislaw Lem has written several novels and stories that largely depict Western bureaucracy trying to cope with AIs and other emerging technologies, and the insane disreality produced by that (His Master’s Voice, Peace on Earth, Golem XIV and the unashamed Dick ripoff… er, homage Memoirs Found in a Bathtub). I’ve read the first three, they’re great. For that matter, check out Dick’s short stories and essays too.
Shocking! Why, who’d expect it from such a pillar of society!
I know! I mean, if something like this happens, who knows what might be next. People might start finding reasons why death is good, or Robin Hanson might turn into a cynic.
As a medical student, this quote has significantly perturbed how I think about the epistemology of the field, though I’m still processing it. Well done!

I see he’s commented about LessWrong, also.

For Moldbug to say that we’re merely colossally ignorant seems like a serious compliment to us.
Frankly, that bit came across as more or less projection. Although he is marginally correct that there does seem, on occasion, to be an unhealthy attitude here that we’re the only smart people.

On occasion? (Note that the “people in my group are smarter/better/otherwise superior” idea is not at all unique to LW.)

Have you read Eugene Volokh’s writings on the right to medical self-defense?
He’s not right. He’s marginally correct: First, he ignores that even under current circumstances a lot of people die from quackery (and in fact, the Steve Jobs example he uses arguably shows this, since Jobs used various ineffective alternative medicines until it was too late). Moreover, cancer mortality rates are declining, so the system isn’t as ineffective as he makes it out to be. His basic thrust may be valid: there’s no question that the FDA has become more bureaucratic, and that some laws and regulations are preventing research that might otherwise go ahead. But he is massively overstating the strength of his case.
First, he ignores that even under current circumstances a lot of people die from quackery (and in fact, the Steve Jobs example he uses arguably shows this, since Jobs used various ineffective alternative medicines until it was too late)
Steve Jobs sought out quackery. You seem to be confused about what is meant by quackery here:
The entire thrust of our medical regulatory system, from the Flexner Report to today, is the belief that it’s better for 1000 patients to die of neglect, than 1 from quackery. Until this irrational fear of quack medicine is cured, there will be no real progress in the field.
People who die because they rely on alternative medicine aren’t going to be helped in the slightest by an additional six or five or four sigmas of certainty within the walled garden of our medical regulatory system. Medical malpractice and incompetence are also not the correct meaning of “death by quackery” in the above text. Death by quackery quite clearly refers to deaths caused by experimental treatments while we are still figuring out what the hell is happening.
You do, however, miss a far better reason to criticize Moldbug here: even with those expensive six sigmas of certainty, many people end up distrusting established medicine enough to seek alternatives. If you reduce the sigmas of certainty, more people will wander into the wild weeds outside the garden. These people seem more likely to be hurt than not.
Not only that: even controlling for these people, the six sigmas of certainty might also be buying us a placebo effect for the masses. But this is easy to overestimate, since it is easy to forget how very ignorant people really are. They accept “doctor’s orders” and trust doctors with their lives not because the doctor is right, or extremely likely to be right, but because he is high status and it is expected of people to follow “doctor’s orders”. The reasons doctors are high status in our society have little to do with them being good at what they do. Doctors have been respected in the West for a long time, and it was not so long ago that one could plausibly argue they killed more people than they saved. The truth of that last question matters far less than the fact that it can be raised at all! Leaving doctors aside, it seems a near human universal that healers, or at least one class of healers, are high status regardless of their efficacy.
Nevertheless, today I believe doctors save many more than they kill. I want doctors to treat me, and I want them to become much better at treating me. And if there’s no better choice, I will cheerfully pay the price of more people turning to quackery, because I won’t do it myself.