I like the framework you’ve offered of counterargument, refutation, and refutation of the central point. I think it might be productive to identify, via a quote, our perception of the central point of the linked article.
> What he said instead was that there was “no evidence” that additional chemo, after there are no signs of disease, did *any* additional good at all, and that the treatments therefore should have been stopped a long time ago and should certainly stop now.
>
> So then I asked him whether by “no evidence” he meant that there have been lots of studies directly on this point which came back with the result that more chemo doesn’t help, or whether he meant that there was no evidence because there were few or no relevant studies. If the former was true, then it’d be pretty much game over: the case for discontinuing the chemo would be overwhelming.
>
> But if the latter was true, then things would be much hazier: in the absence of conclusive evidence one way or the other, one would have to operate in the realm of interpreting imperfect evidence; one would have to make judgments based on anecdotal evidence, by theoretical knowledge of how the body works and how cancer works, or whatever.
I think there are three ways of interpreting the central point of these sentences:
1. The material fact of whether or not studies directly on this point exist.
2. The medical strategy claim that the existence or nonexistence of these studies should have been the primary driver in “deciding how to decide” whether or not to continue chemo for this patient.
3. The biomedical science claim that if “conclusive” studies exist on the effect of N rounds of chemo on the risk of cancer recurrence, then we should use them as our base rate. If not, we have to rely on “hazier” methods.
It seems to me unlikely that even the blog’s author thought this doctor failed to understand point (1), so I don’t think this was the central point. If it was, then publication bias means there isn’t as sharp a distinction between “evidence” and “no evidence” as we might wish: if studies showing no benefit tend to go unpublished, then a literature that still contains no evidence of benefit is all the more telling. Absence of evidence becomes even stronger evidence of absence.
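To make that publication-bias point concrete, here is a minimal Bayesian sketch with entirely made-up illustrative numbers (the prior and the publication probabilities are assumptions, not figures from the article): the more likely a genuine benefit would have been to produce a published positive study, the more the absence of any such study should lower our credence that the benefit is real.

```python
def posterior_effective(prior, p_pub_if_effective, p_pub_if_not):
    """P(treatment effective | no published positive study), by Bayes' rule.

    prior:               prior probability the treatment is effective
    p_pub_if_effective:  chance a positive study would exist and be
                         published if the effect were real
    p_pub_if_not:        chance of a published (false-)positive study
                         if the effect were not real
    """
    numerator = (1 - p_pub_if_effective) * prior
    denominator = numerator + (1 - p_pub_if_not) * (1 - prior)
    return numerator / denominator

# Moderate chance a real effect would have yielded a published study:
print(posterior_effective(0.5, 0.8, 0.1))   # ~0.18

# Stronger bias toward publishing positive results: the same silence
# now pushes the posterior even lower.
print(posterior_effective(0.5, 0.95, 0.1))  # ~0.05
```

The second call illustrates the claim in the paragraph above: holding everything else fixed, the stronger the tendency to publish positive results, the more informative their absence is.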
(2) might have been the central point. If so, then here is how I would attempt to refute it:
“Deciding how to decide” should lean more heavily on the likely treatment options should the cancer recur, and on the visible impacts of continued chemo on the patient. The OP’s framing, on which the existence of conclusive studies makes a sharp difference in what ought to be done, is just false. The risk of recurrence given N rounds of chemo is not the only factor informing the patient’s risk of dying from that cancer, and the patient’s goals sit on the other side of the is/ought gap.
If (3) is the central point, then I agree that in the absence of high-quality, “conclusive” studies, we have to find some other basis on which to assess a base rate. The question is, how will we do that? Or, more practically and relevantly, whose judgment will we privilege in this way? The author frames the doctor as having failed to understand this distinction. But building a causal model is a rather subjective process, and employing it instrumentally involves coordinating a group of people around a common model of reality in order to attain an objective. We cannot ignore the way these coordination and power dynamics shape our “hazy” group rationality processes; they are inseparable from them.