Before responding, I think this is an opportunity for a productive and charitable back and forth, which I’d like to have with you! This also might be challenging, because there are already a bunch of threads to this argument. So I’ll respond to a couple pieces of what you’ve said, and feel free to only respond to part of what I’ve said.
Likewise! And sounds good! :)
The original article seemed to have three points, or question-clusters.
I have a feeling that we agree here, but I'm not sure, so I'll say it explicitly. My read is that the article had a single focal point: that you should update incrementally instead of requiring some (arbitrary) threshold of evidence before you update at all. That's the central point, and the threads you're opening feel tangential to me.
Medical: Regardless of whether applying N+1 treatments makes sense, one should still update incrementally.
Social: Regardless of what this particular doctor happened to mean, how much trust one should have in them, how much trust one should have in doctors more broadly, how one should navigate the social dance, etc., it is still true that we should update incrementally.
Philosophical: I’m not seeing how absence of evidence is relevant here, except as a reason to make incremental updates. I see the core question of the article as “should we make incremental updates, or should we wait until the evidence is sufficiently strong before making any update at all?”. You also bring up the question of, e.g., how much weight we should give “it seems like common sense that X”. That is a good question, but I see it as tangential: the question this article focuses on is whether such evidence deserves any weight at all. (I’ve sketched below, with toy numbers, what I mean by an incremental update.)
I’m open to discussing these (IMO) tangential points, but I think it’s important to note that they are DH4 (counterargument), not DH6 (refuting the central point) or even DH5 (refutation).
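To make the central point concrete, here is the toy Bayes sketch I mentioned. The numbers are mine and purely illustrative (nothing in the article supplies real probabilities); the only thing I’m claiming is the shape of the reasoning. Let H be “another round of chemo helps”, with prior P(H) = 0.5, and let E be a weak piece of evidence that is only slightly more likely if H is false, say P(E|H) = 0.4 and P(E|¬H) = 0.5. Then

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{0.4 \times 0.5}{0.4 \times 0.5 + 0.5 \times 0.5} = \frac{0.20}{0.45} \approx 0.44.$$

A move from 0.50 to 0.44 is small, but it is an update. A “wait until the evidence is strong enough” policy would leave the belief at exactly 0.50, and that is the mistake the article is pointing at.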
The author seems to be implying that it’s common sense to apply at least N + 1 treatments in this case, to kill any remaining proliferating cells.
I think you are mistaken. The author said that he noticed a tradeoff at play and wanted to get the doctor’s opinion on that tradeoff; e.g., the tradeoff might come out in favor of not applying N+1 treatments. From the article:
“Going into the appointment, I had the idea (based on nothing but what seemed to me like common sense) that there was a tradeoff: more chemo means a higher chance that the cancer won’t reappear, but also means a higher chance of serious side effects, and that we were going there to get his opinion on whether in this case the pros outweighed the cons or vice-versa.”
I would also (charitably) assume that the author felt uncertain about whether there are other tradeoffs/considerations at play and wanted to hear from the doctor about that as well. I.e., first figure out all of the tradeoffs, then make a decision based on the weights.
As for my take on cancer treatment, I’m at the same point as the author: I notice some tradeoffs but a) don’t know how strong they are and b) probably don’t have a complete picture.
The doctor has one, and it also makes “common sense.” If you can’t see it, and you’ve been fighting it past the point of not being able to see it for a while, it’s probably not there. We know that chemo is harming the body and quality of life of the patient, and will continue to do so until the treatment is stopped. We can also resume the chemo if the cancer re-emerges.
Here is my model of how the author would reply to this: “You say it’s probably not there. That might be true. I don’t know how likely that is and wanted to get the doctor’s opinion on it. I agree that chemo is harming the body. I see that as a con. But there is also a ‘pro’ of ‘we might prevent a relapse’. I don’t know how to weigh the pros and cons and want to get the doctor’s opinion on how much weight should be assigned to each. The problem is that the doctor expressed a belief that ‘we might prevent a relapse’ doesn’t even belong on the ‘pros’ list to begin with, and this belief stems from the incorrect notion that evidence needs to meet some threshold before we update at all.”
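In decision-theoretic terms (purely schematic; the quantities are placeholders of mine, not the author’s), the weighing the author says he wanted help with looks something like

$$\mathrm{EV}(\text{one more round}) \approx \Delta p \cdot V - C,$$

where Δp is how much one more round actually reduces the relapse probability, V is the value of avoiding a relapse, and C is the expected cost of side effects. The doctor is exactly the person you would ask to estimate Δp and C. The complaint in the article is that he seemed to set Δp to precisely zero because no study pins it down, rather than treating it as a small, uncertain, positive quantity to be weighed against C.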
My perspective is that when you do this, there’s a tradeoff involved.
Agreed that there are tradeoffs and that they roughly take the shape you describe.
So I assert that the OP seems to have been playing the role of expert-vetter poorly.
Hm. I agree that it would have been good for the author to have done the research. It strikes me as either a) laziness or b) a lack of altruism (i.e., if it were he himself who had the cancer, or a closer relative, he would have been motivated enough to do the research), both of which are things we all struggle with. Still, we should strive to do better. On the other hand, I think getting into all of that would have distracted from the main point of the blog post, so it feels to me like a good decision to leave it out.
I like the framework you’ve offered of counterargument, refutation, and refutation of the central point. I think it might be productive to identify, via a quote, our perception of the central point of the linked article.
What he said instead was that there was “no evidence” that additional chemo, after there are no signs of disease, did *any* additional good at all, and that the treatments therefore should have been stopped a long time ago and should certainly stop now.
So then I asked him whether by “no evidence” he meant that there have been lots of studies directly on this point which came back with the result that more chemo doesn’t help, or whether he meant that there was no evidence because there were few or no relevant studies. If the former was true, then it’d be pretty much game over: the case for discontinuing the chemo would be overwhelming.
But if the latter was true, then things would be much hazier: in the absence of conclusive evidence one way or the other, one would have to operate in the realm of interpreting imperfect evidence; one would have to make judgments based on anecdotal evidence, by theoretical knowledge of how the body works and how cancer works, or whatever.
I think that there are three ways of interpreting the central point of these sentences.
(1) The material fact of whether or not studies directly on this point exist.
(2) The medical-strategy claim that the existence or nonexistence of such studies should have been the primary driver in “deciding how to decide” whether or not to continue chemo for this patient.
(3) The biomedical-science claim that if “conclusive” studies exist on the effect of N rounds of chemo on the risk of cancer recurrence, then we should use them as our base rate; if not, we have to rely on “hazier” methods.
It seems to me unlikely that even the blog’s author thought that this doctor did not understand point (1), so I don’t think this was the central point. If it were, publication bias would mean that there isn’t as sharp a distinction between “evidence” and “no evidence” as we might wish. And absence of evidence is even stronger evidence of absence when publication bias suppresses data against the efficacy of additional chemo: the published record skews toward positive findings, so if even that skewed record shows no benefit, that is telling.
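To put a rough number on that (my figures are invented for illustration, not taken from the oncology literature): suppose that if additional chemo really did reduce relapse risk, a study showing it would have been run and published with probability 0.7, whereas a spurious positive result would appear with probability only 0.05 if it did not. Writing H for “additional chemo helps” and E for “no published study shows a benefit”,

$$\frac{P(E \mid H)}{P(E \mid \neg H)} = \frac{1 - 0.7}{1 - 0.05} = \frac{0.3}{0.95} \approx \frac{1}{3},$$

so observing “no evidence” multiplies the odds on H by roughly 1/3 (even odds become a probability of about 0.24). Publication bias toward positive results pushes the 0.7 upward, making the ratio even more lopsided. That is a real, incremental update toward “doesn’t help”, not a license to treat the question as settled.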
(2) might have been the central point. If so, then here is how I would attempt to refute it:
“Deciding how to decide” should rely more heavily on the treatment options that would likely be available should the cancer recur, and on the visible impacts of continued chemo on the patient. The OP’s framing, in which the existence of conclusive studies makes a sharp difference in what ought to be done, is just false. The risk of the cancer recurring given N chemo treatments isn’t the only factor informing the patient’s risk of dying from that cancer, and the patient’s goals exist on the other side of the is/ought gap.
If (3) is the central point, then I agree that in the absence of high-quality, “conclusive” studies, we have to find some other basis on which to assess a base rate. The question is: how will we do that? Or, more practically and relevantly, whose judgment will we privilege in this way? The author frames the doctor as not having understood this distinction. But building a causal model is a rather subjective process, and employing it instrumentally means coordinating a group of people around a common model of reality in order to attain an objective. We cannot ignore the way these coordination and power dynamics shape our “hazy” group-rationality processes; they are inseparable from them.