This is why HCTs are used to treat cancer, and also why indefinite chemotherapy would not be expected to help patients. It continues to damage healthy tissue, while not killing the lingering malignant quiescent stem cells.
The original article’s point is that “additional chemo might get rid of the last little bits of cancer that are too small to show up on scans”. So even if it were known that HCTs don’t work on normal and malignant stem cells, there is still the question of whether there are lingering proliferating cells that aren’t showing up on scans.
Of course, this doesn’t mean the chemo is worth it. From the original article: “more chemo means a higher chance that the cancer won’t reappear, but also means a higher chance of serious side effects, and that we were going there to get his opinion on whether in this case the pros outweighed the cons or vice-versa”.
If indefinite HCT were beneficial for treatment of cancer, we should expect that some evidence would be available for it. Absence of evidence is evidence of absence.
True. But 1) how strong is this evidence of absence? Perhaps it’s weak since the standards for journals to publish stuff are pretty high. And 2) the author of the original article was clear that he didn’t know how beneficial treatment was or what the risks were, and thus was seeking the expertise of the doctor.
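To put a rough shape on question 1), here is a toy Bayes calculation. All of the numbers are made-up assumptions, not figures from the article or from any study; the only point is that how much “no published evidence” should move us depends on how likely a published study would be if extra chemo really helped.

```python
# Toy numbers only: how strong is "absence of evidence" as evidence of absence?
def posterior_given_no_evidence(prior, p_study_if_helps, p_study_if_not):
    """P(extra chemo helps | no supporting study was published), via Bayes' rule."""
    p_none_if_helps = 1 - p_study_if_helps
    p_none_if_not = 1 - p_study_if_not
    numerator = p_none_if_helps * prior
    return numerator / (numerator + p_none_if_not * (1 - prior))

prior = 0.5  # assumed 50/50 prior that extra chemo helps in this situation

# If a positive study would very likely exist were the effect real, absence is strong evidence:
print(posterior_given_no_evidence(prior, p_study_if_helps=0.9, p_study_if_not=0.1))  # ~0.10

# If few relevant trials exist and journals' publication bar is high, absence is weak evidence:
print(posterior_given_no_evidence(prior, p_study_if_helps=0.2, p_study_if_not=0.1))  # ~0.47
```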
My take is that the doctor was trying to coordinate care for his patient, build rapport with the family, and resist being drawn into a rabbit-hole discussion by one ill-informed and overconfident family member.
From the original article: “And good people, maybe I’m being unfair and underestimating this guy, but I swear to you that this fancy oncologist in this very prestigious institution didn’t seem to understand the difference between these two types of ‘no evidence.’” and “The most generous possible interpretation of what went on, but which would require me to attribute to him a thought process that he did not express at all, is that he understands the difference between the two types of ‘no evidence’ but has come to believe that doctors’ interpretations of imperfect evidence will systematically lead them to over-treat and so has adopted a rule of ‘do nothing unless there is strong evidence that you should do something’ as a second-best optimum.”
If the doctor was trying to do what you said, I have a moderately strong expectation that the author would have picked up on that. Instead, as indicated in the quotes above, the author picked up a different vibe.
Another piece of common sense is “doctor knows best.”
I think that is way too charitable. My model is that doctors usually meet some level of competency, but a) are often painfully unaware of research that has been done over the past ~10 years, and b) lack some basic reasoning ability. I can dig up links/excerpts if you are interested, but here are some things that come to mind:
https://www.painscience.com/ gives lots of examples of (a).
In Peak: Secrets from the New Science of Expertise, I recall the author talking about research showing that older doctors are usually worse than younger doctors because they don’t keep up with recent findings. I also recall a psychologist named Tori Olds talking about this on her YouTube channel.
I recall studies showing doctors badly screwing up Bayesian reasoning (there’s a toy illustration of the kind of problem involved after this list).
I recall https://www.painscience.com/ talking about how it’s worth taking a shot on things that are low cost and that have weak/no side effects even if they aren’t likely to work, and how doctors usually don’t approach it that way.
In Eliezer’s dath ilan post, I recall him saying stuff about (b).
Given the incentives of the medical system, I wouldn’t expect competency a priori.
Zvi has talked a lot about, and done a great job of laying out, the failures of organizations like the CDC. I know that is different from individual doctors. The words aren’t coming to me right now, but I see some relation, and it causes me to update away from “doctor knows best”.
My personal experiences with doctors have been pretty bad.
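On the Bayesian reasoning point, here is a minimal sketch of the classic base-rate problem that studies of this kind tend to use. The specific numbers are the usual textbook ones and are purely illustrative; they are not taken from any of the sources above.

```python
# Illustrative numbers only (the standard textbook version of the base-rate problem).
prevalence = 0.01        # P(condition): 1% of patients have it
sensitivity = 0.80       # P(positive test | condition)
false_positive = 0.096   # P(positive test | no condition)

# Bayes' rule: P(condition | positive) = P(positive | condition) * P(condition) / P(positive)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(round(p_condition_given_positive, 3))  # ~0.078, far lower than the intuitive guess of ~0.8
```

The reported failure mode is conflating P(positive | condition) with P(condition | positive), i.e. neglecting the base rate.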
Before responding, I think this is an opportunity for a productive and charitable back and forth, which I’d like to have with you! This also might be challenging, because there are already a bunch of threads to this argument. So I’ll respond to a couple pieces of what you’ve said, and feel free to only respond to part of what I’ve said.
The original article seemed to have three points, or question-clusters.
Medical: For N chemo treatments, why not do N + 1 treatments? How do we determine the stopping point, or how should we?
Social: Was this oncologist a trustworthy authority figure? What’s the best way to ensure we’re making the best decisions for my relative’s cancer treatment? How do we navigate these conversations and relationships, balancing the range of priorities and constraints of all involved?
Philosophical: Is absence of evidence evidence of absence? How do we reason under uncertainty, given what seems to us like common sense, our priors, and our extremely limited understanding of the causal mechanisms at play?
Here’s my interpretation and critique of the author’s implied answer to the medical question. I didn’t do this in my top-level comment, but let’s charitably grant that he was fully aware of the quiescent stem cell issue.
The author seems to be implying that it’s common sense to apply at least N + 1 treatments in this case, to kill any remaining proliferating cells. But this rule has no clear stopping point. Why not have the relative do chemo for the rest of their life? Charitably, the OP didn’t intend for their relative to get chemo forever—they had some common-sense stopping point in mind. But where?
The doctor’s view is that “there was ‘no evidence’ that additional chemo, after there are no signs of disease, did *any* additional good at all, and that the treatments therefore should have been stopped a long time ago and should certainly stop now.” The doctor’s preferred stopping point for chemo was “as soon as we see no signs of disease,” but his actual stopping point in this case seems to be “quite a few rounds of chemo after we stopped seeing any signs of the disease.”
When the OP initiated this conversation, the cancer had already been in remission for a while. So what’s the “common-sense” criterion for a stopping point?
The doctor has one, and it also makes “common sense.” If you can’t see it, and you’ve been fighting it past the point of not being able to see it for a while, it’s probably not there. We know that chemo is harming the body and quality of life of the patient, and will continue to do so until the treatment is stopped. We can also resume the chemo if the cancer re-emerges.
I think my charitable interpretation of the doctor’s perspective (a charity the OP did not grant the oncologist) is, at the very least, a reasonable argument. It’s one that the OP made no effort to suss out, either in the conversation with the doctor or in reflection afterward.
This brings us to the social issue. My interpretation of the doctor is that he’s dealing with a relative who is:
Uninformed about the biomedical literature, by his own admission.
Making one “common-sense” argument, without making an effort to see the “common sense” in the doctor’s perspective.
Not doing research prior to attending this meeting.
Assessing the doctor’s competence to make good decisions for the patient.
Now, I think that you and I agree that it is good and necessary for patients, or their advocates, to assess the competence of experts generally, including doctors. This is a tricky problem, and it’s been written about at length in the rationalsphere.
My perspective is that when you do this, there’s a tradeoff involved. On the one hand, if you do it well, you increase your chance of working with a competent expert. On the other hand, if you do it poorly, you may disrupt the formation of appropriate trust, and complicate the work of establishing relationships and sharing information.
In this case, I am arguing that the OP, by his own admission, did not do some of the common-sense things you’d do if you were trying to play the role of expert-vetter well. In particular, especially if you see yourself as a very rational, smart person, you’d make an effort to do some research in advance and to understand the doctor’s point of view. I agree that doctors, as you say, are not always up-to-date on the literature, and I’ve been on the receiving end of some bad care myself. However, a simple action you can take to safeguard against such cases, when you already know the diagnosis, is to find that literature yourself.
The OP seems not to have done that. He also wasn’t asking about the doctor’s up-to-dateness, but rather about the doctor’s willingness to address his point about absence of evidence. So I assert that the OP seems to have been playing the role of expert-vetter poorly. Furthermore, they perceived and portrayed themselves as doing it well, a sort of Dunning-Kruger effect.
They then use this as a dig against their doctor, and their post is now being used as a lesson in rationality. This seems concerning to me. Not having been present, I don’t want to assume I really know what was going on. But it rubs me the wrong way, and doesn’t seem like a central case of either instrumental or epistemological rationality in action.
I don’t want to get into the philosophical level, because this isn’t a main source of my objection to the Overcoming Bias post. So I will leave that to the side.
You don’t have to persuade me that doctors are not always as well-informed as we’d like them to be. And certainly I already know that our medical system is deeply flawed, even broken. As I say, I’m on board with the idea that patients or their relatives should inform themselves, and have a collaborative role with their doctors.
I’m saying that the poster of the Overcoming Bias post comes across to me as having done a below-average job of occupying that role. That’s just the perception I get from reading the post. Maybe I’d feel differently if I’d been there in person to observe, but I can only go off the information given.
Before responding, I think this is an opportunity for a productive and charitable back and forth, which I’d like to have with you! This also might be challenging, because there are already a bunch of threads to this argument. So I’ll respond to a couple pieces of what you’ve said, and feel free to only respond to part of what I’ve said.
Likewise! And sounds good! :)
The original article seemed to have three points, or question-clusters.
I have a feeling that we agree here, but I’m not sure, so I’ll say it explicitly. My read is that there was one singular, focal point of the article: that you should update incrementally instead of having some (arbitrary) threshold before you update at all. That’s the central point, and it feels to me like the threads you are opening are tangential.
Medical: Regardless of whether N+1 treatments make sense, one should still update incrementally.
Social: Regardless of what this particular doctor happened to mean, how much trust one should have in them, how much trust one should have in doctors more broadly, how one should navigate the social dance, etc., it is still true that we should update incrementally.
Philosophical: I’m not seeing how absence of evidence is relevant. Well, it’s reason to make incremental updates. But I see the core question of the article as “should we make incremental updates or should we wait until the evidence is sufficiently strong before making any update at all” (a toy sketch of that contrast follows below). You also bring up the question of, e.g., “how much weight should we give ‘it seems like common sense that X’”. It is a good question, but I see it as tangential. The question that this article focuses on is whether it deserves any weight at all.
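To be concrete about what “update incrementally” means here, a toy sketch: the likelihood ratios are invented, and the only point is that several weak pieces of evidence add up under incremental updating, while a “wait for strong evidence” rule ignores all of them.

```python
# Toy sketch: incremental updating vs. a "no update until the evidence is strong" threshold rule.
def bayes_update(prob, likelihood_ratio):
    """One Bayesian update, expressed in odds form."""
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)

weak_evidence = [1.5, 1.5, 1.5, 1.5]  # four weak, independent pieces of evidence (invented numbers)
THRESHOLD = 3.0                        # the threshold rule only counts "strong" evidence

incremental = 0.5
threshold_rule = 0.5
for lr in weak_evidence:
    incremental = bayes_update(incremental, lr)
    if lr >= THRESHOLD:               # weak evidence is discarded entirely under the threshold rule
        threshold_rule = bayes_update(threshold_rule, lr)

print(round(incremental, 2))     # ~0.84: the weak updates accumulate
print(round(threshold_rule, 2))  # 0.5: no update at all
```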
I’m open to discussing these (IMO) tangential points, but I think it’s important to note that they are DH4 (counterargument), not DH6 (refuting the central point) or even DH5 (refutation).
The author seems to be implying that it’s common sense to apply at least N + 1 treatments in this case, to kill any remaining proliferating cells.
I think you are mistaken. The author said that he noticed a tradeoff at play and wanted to get the doctor’s opinion on that tradeoff. E.g., the tradeoff might come out in favor of not applying N+1 treatments. From the article:
“Going into the appointment, I had the idea (based on nothing but what seemed to me like common sense) that there was a tradeoff: more chemo means a higher chance that the cancer won’t reappear, but also means a higher chance of serious side effects, and that we were going there to get his opinion on whether in this case the pros outweighed the cons or vice-versa.”
I would also (charitably) assume that the author feels uncertain about whether there are other tradeoffs/considerations at play, and wanted to hear from the doctor about that as well. I.e., first figure out all of the tradeoffs and then make a decision based on the weights.
As for my take on cancer treatment, I’m at the same point as the author: I notice some tradeoffs but a) don’t know how strong they are and b) probably don’t have a complete picture.
The doctor has one, and it also makes “common sense.” If you can’t see it, and you’ve been fighting it past the point of not being able to see it for a while, it’s probably not there. We know that chemo is harming the body and quality of life of the patient, and will continue to do so until the treatment is stopped. We can also resume the chemo if the cancer re-emerges.
Here is my model of how the author would reply to this: “You say it’s probably not there. That might be true. I don’t know how likely that is and wanted to get the doctor’s opinion on it. I agree that chemo is harming the body. I see that as a con. But there is also a ‘pro’ of ‘we might prevent a relapse’. I don’t know how to weigh the pros and cons and want to get the doctor’s opinion on how much weight should be assigned to each. The problem is that the doctor expressed a belief that ‘we might prevent a relapse’ doesn’t even belong on the ‘pros’ list to begin with, and this belief stems from the incorrect notion that evidence needs to meet some threshold before we update at all.”
My perspective is that when you do this, there’s a tradeoff involved.
Agreed that there are tradeoffs and that they roughly take the shape you describe.
So I assert that the OP seems to have been playing the role of expert-vetter poorly.
Hm. I agree that it would have been good for the author to have done the research. It strikes me as either a) laziness or b) a lack of altruism (i.e., if he himself had the cancer, or a closer relative did, he would have been motivated enough to do the research). Both of which are things we all struggle with. Still, we should strive to do better. But on the other hand, I think getting into all of that would have distracted from the main point of the blog post, and so it feels to me like a good decision to leave it out.
I like the framework you’ve offered of counterargument, refutation, and refutation of the central point. I think it might be productive to identify, via a quote, our perception of the central point of the linked article.
What he said instead was that there was “no evidence” that additional chemo, after there are no signs of disease, did *any* additional good at all, and that the treatments therefore should have been stopped a long time ago and should certainly stop now.
So then I asked him whether by “no evidence” he meant that there have been lots of studies directly on this point which came back with the result that more chemo doesn’t help, or whether he meant that there was no evidence because there were few or no relevant studies. If the former was true, then it’d be pretty much game over: the case for discontinuing the chemo would be overwhelming.
But if the latter was true, then things would be much hazier: in the absence of conclusive evidence one way or the other, one would have to operate in the realm of interpreting imperfect evidence; one would have to make judgments based on anecdotal evidence, by theoretical knowledge of how the body works and how cancer works, or whatever.
I think that there are three ways of interpreting the central point of these sentences.
The material fact of whether studies directly on this point exist.
The medical strategy claim that the existence or nonexistence of these studies should have been the primary driver in “deciding how to decide” whether or not to continue chemo for this patient.
The biomedical science claim that if “conclusive” studies exist on the effect of N rounds of chemo on the risk of cancer recurrence, then we should use them as our base rate. If not, we have to rely on “hazier” methods.
It seems to me unlikely that even the blog’s author thought that this doctor did not understand point (1). I don’t think this was the central point. If it was, publication bias means that there isn’t as much of a distinction between “evidence” and “no evidence” as we might wish. Absence of evidence is even more evidence of absence if publication bias prevents publication of data against the efficacy of additional chemo treatment.
(2) might have been the central point. If so, then here is how I would attempt to refute it:
“Deciding how to decide” should be more heavily reliant on the likely treatment options should the cancer recur, and on the visible impacts of continued chemo on the patient. The OP’s framing of the existence of conclusive studies as making a sharp difference in what ought to be done is just false. The risk of the cancer recurring given N chemo treatments isn’t the only factor informing the patient’s risk of dying from that cancer, and the patient’s goals exist on the other side of the is/ought gap.
If (3) is the central point, then I agree that in the absence of high-quality, “conclusive” studies, we have to find some other basis on which to assess a base rate. The question is, how will we do that? Or more practically and relevantly, whose judgment will we privilege in this way? The author frames the doctor as having not understood this distinction. Building a causal model is a rather subjective process, and employing it instrumentally involves coordinating a group of people around a common model of reality in order to attain an objective. We cannot ignore the way these coordination and power dynamics impact our “hazy” group rationality processes. They are inseparable from it.