No. This is not about interpretation of probabilities. It is about choosing what aspect of reality to rely on for extrapolation. You will get different extrapolations depending on whether you rely on a risk ratio, a risk difference or an odds ratio. This will lead to real differences in predictions for what happens under intervention.
Even if clinical decisions are entirely left to an algorithm, the algorithm will need to select a mathematical object to rely on for extrapolation. The person who writes the algorithm needs to tell the algorithm what to use, and the answer to that question is contested. This paper contributes to that discussion and proposes a concrete solution, one that has been known for 65 years but never used in practice.
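To make the difference concrete, here is a minimal sketch (Python, with invented numbers, not taken from the paper) of how carrying each effect measure from a trial population to a patient with a different baseline risk yields three different predicted risks under treatment:

```python
# Hypothetical trial: risk 0.20 if untreated, 0.10 if treated.
# Hypothetical target patient: baseline (untreated) risk 0.40.
# Holding each effect measure fixed across populations implies a
# different predicted risk under treatment for this patient.

p0_trial, p1_trial = 0.20, 0.10   # invented trial risks
q0 = 0.40                          # invented baseline risk for the patient

# Risk ratio: q1 = q0 * (p1 / p0)
q1_rr = q0 * (p1_trial / p0_trial)

# Risk difference: q1 = q0 + (p1 - p0)
q1_rd = q0 + (p1_trial - p0_trial)

# Odds ratio: convert to odds, scale, convert back
odds = lambda p: p / (1 - p)
inv_odds = lambda o: o / (1 + o)
or_trial = odds(p1_trial) / odds(p0_trial)
q1_or = inv_odds(odds(q0) * or_trial)

print(f"RR-based prediction: {q1_rr:.3f}")  # 0.200
print(f"RD-based prediction: {q1_rd:.3f}")  # 0.300
print(f"OR-based prediction: {q1_or:.3f}")  # ~0.229
```

The three predictions (0.20, 0.30, and roughly 0.23) would support different treatment decisions whenever the cost/benefit threshold falls between them.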
It is ultimately about interpretation.
This paradigm doesn’t matter if the physician has in mind a cost/benefit matrix for the treatment, into which it would be fairly easy to plug raw experimental data no matter how the researchers chose to analyze it.
More broadly, see the comment by ChristianKl.
Having cost/benefit in mind is not enough. If you don’t use a heuristic like the one Anders writes about, you need either causal models or something like prediction-based medicine, which gives you a way to decide which of two decision-making algorithms is better by looking at the Brier score (or a similar statistic).
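As a sketch of what comparing two decision-support algorithms by Brier score could look like (hypothetical data and names, only to illustrate the statistic):

```python
import numpy as np

def brier_score(predicted_probs, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes; lower is better."""
    predicted_probs = np.asarray(predicted_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((predicted_probs - outcomes) ** 2)

# Hypothetical held-out patients: observed outcomes and each
# algorithm's predicted probability of the outcome.
outcomes = np.array([1, 0, 0, 1, 0, 1, 0, 0])
algo_a   = np.array([0.9, 0.2, 0.1, 0.7, 0.3, 0.8, 0.2, 0.1])
algo_b   = np.array([0.6, 0.4, 0.5, 0.5, 0.5, 0.6, 0.4, 0.5])

print("Algorithm A Brier score:", brier_score(algo_a, outcomes))  # ~0.041
print("Algorithm B Brier score:", brier_score(algo_b, outcomes))  # ~0.205
# Prefer the algorithm with the lower score (A here).
```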
I very emphatically disagree with this.
You are right that once you have a prediction for risk if untreated and a prediction for risk if treated, you just need a cost/benefit analysis. However, you won’t get to that stage without a paradigm for extrapolation, whether implicit or explicit. I prefer making that paradigm explicit.
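To be clear about where we agree: given the two predicted risks, the cost/benefit step really is trivial. A minimal sketch with invented costs:

```python
# Hypothetical inputs: these two predictions are exactly what the
# paradigm for extrapolation has to supply.
risk_untreated = 0.30    # predicted risk of the bad outcome without treatment
risk_treated = 0.18      # predicted risk with treatment
harm_of_outcome = 100.0  # invented cost of the outcome (arbitrary utility units)
cost_of_treatment = 5.0  # invented cost of treating (side effects, money, etc.)

expected_loss_untreated = risk_untreated * harm_of_outcome
expected_loss_treated = risk_treated * harm_of_outcome + cost_of_treatment

# 23.0 vs 30.0 here, so treat; the hard part was producing the two risks.
print("Treat" if expected_loss_treated < expected_loss_untreated else "Do not treat")
```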
If you want to plug in raw experimental data, you are going to need data from people who are exactly like the patient in every way. Then, you will be relying on a paradigm for extrapolation which claims that the conditional counterfactual risks (rather than the magnitude of the effect) can be extrapolated from the study to the patient. It is a different paradigm, and one that can only be justified if the conditioning set includes every cause of the outcome.
In my view, this is completely unrealistic. I prefer a paradigm for extrapolation that aims to extrapolate the scale-specific magnitude of the effect. If this is the goal, our conditioning set only needs to include those covariates that predict the magnitude of the effect of treatment, which is a small subset of all covariates that cause the outcome.
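A sketch of what this looks like operationally (invented numbers; the risk ratio used only as a stand-in for whatever scale-specific effect measure one settles on): the trial needs to be stratified only on the effect modifiers, while the patient’s baseline risk can come from a separate prognostic model that uses as many covariates as it likes.

```python
# Hypothetical stratum-specific risk ratios from a trial, keyed by
# the one covariate assumed to modify the treatment effect.
risk_ratio_by_sex = {"female": 0.4, "male": 0.7}

def predict_risk_if_treated(baseline_risk, sex):
    """Carry the trial's stratum-specific effect measure to the patient.

    baseline_risk can come from a separate prognostic model built on
    many covariates; the trial only has to pin down the effect within
    strata of the effect modifier(s), not within strata of every
    cause of the outcome.
    """
    return baseline_risk * risk_ratio_by_sex[sex]

# A patient with an unusually high baseline risk, never seen in the trial:
print(predict_risk_if_treated(baseline_risk=0.5, sex="female"))  # 0.2
```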
On this specific point, my view is consistent with almost all thinking in medical statistics, with the exception of some very recent work in causal modeling (whose authors prefer the approach based on counterfactual risks). My disagreement with this work in causal modeling is at the core of my last discussion about this on Less Wrong. See, for example, “Effect Heterogeneity and External Validity in Medicine” and the European Journal of Epidemiology paper that it links to.