But of course, common sense says they have to be boosting survival some!
But ‘common sense’ is just one particular causal model, and not obviously the correct one. If you think the problem is “the lungs are filling up with fluid, and so you can’t get oxygen to the blood” then ventilators seem obviously effective (tho a drainage solution would be even better, and I recklessly speculate this is one of the reasons people are trying proning); if you think the problem is “hemoglobin is being denatured such that the blood no longer effectively transmits oxygen” then the story for ventilators helping falls apart.
But in the meantime, I would like it if every excited contrarian buzz about a treatment showing counterintuitive harm could be accompanied by SOME statement addressing the fact that you’d expect to see those results regardless of harm.
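To make that concrete, here’s a toy simulation (all numbers invented, severity as the hypothetical confounder) where ventilation genuinely helps every single patient, yet the raw survival comparison still makes it look harmful, purely because the sickest patients are the ones who get ventilated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Made-up disease severity: higher means sicker.
severity = rng.uniform(0, 1, n)

# Confounding by indication: only the sickest patients get ventilated.
ventilated = severity > 0.7

# Assume ventilation genuinely halves each patient's chance of dying.
p_death = np.clip(severity - 0.3, 0, 1)
p_death = np.where(ventilated, 0.5 * p_death, p_death)
died = rng.random(n) < p_death

print(f"death rate, ventilated:   {died[ventilated].mean():.3f}")   # ~0.27
print(f"death rate, unventilated: {died[~ventilated].mean():.3f}")  # ~0.11
# The treatment helps every patient, yet the raw comparison makes
# ventilation look like it more than doubles your risk of dying.
```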
Surely the actual solution here is superior statistical techniques, right? Like, I thought one of the primary value-adds of the causal modeling paradigm was being able to more accurately estimate these sorts of effects in the presence of confounders.
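Something like this, I mean: a minimal sketch (a variant of the invented setup above, with probabilistic assignment so every severity level has patients in both arms) where a simple backdoor-style stratification on the confounder recovers the benefit that the naive comparison gets backwards:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Made-up severity again; sicker patients are more likely to be
# ventilated, but there is overlap at every severity level.
severity = rng.uniform(0, 1, n)
ventilated = rng.random(n) < 0.05 + 0.9 * severity

# Ventilation still genuinely halves the chance of dying.
p_death = np.clip(severity - 0.3, 0, 1)
p_death = np.where(ventilated, 0.5 * p_death, p_death)
died = rng.random(n) < p_death

# Naive comparison: confounded by severity.
naive = died[ventilated].mean() - died[~ventilated].mean()

# Backdoor adjustment: compare within severity strata, then average the
# per-stratum differences weighted by how many patients fall in each.
strata = np.digitize(severity, np.linspace(0.1, 0.9, 9))
diffs, weights = [], []
for s in np.unique(strata):
    in_s = strata == s
    t = died[in_s & ventilated].mean()
    c = died[in_s & ~ventilated].mean()
    diffs.append(t - c)
    weights.append(in_s.sum())
adjusted = np.average(diffs, weights=weights)

print(f"naive difference in death rate: {naive:+.3f}")     # positive: "harm"
print(f"severity-adjusted difference:   {adjusted:+.3f}")  # negative: benefit
```

Of course, the adjustment is only as good as the decision to measure and stratify on severity in the first place, which is the causal-model part of the job.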
Like, there must have been some survival threshold for the leukemia drugs past which it wasn’t worth switching to them, and presumably we can quantitatively calculate that threshold, and doing so is better than having a qualitative sense that there’s an effect to be corrected for here.
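For instance, with a pair of invented survival curves (nothing here is from real leukemia data), the switching threshold is just where the curves cross:

```python
from scipy.optimize import brentq

# Hypothetical survival probabilities as a function of severity x in [0, 1].
# The standard treatment is great for mild cases but degrades quickly;
# the experimental drug is worse up front but degrades more slowly.
def p_survive_standard(x):
    return 0.95 * (1 - x) ** 2

def p_survive_experimental(x):
    return 0.60 * (1 - x)

# Below the crossing point, stay on the standard treatment;
# above it, the experimental drug wins.
threshold = brentq(
    lambda x: p_survive_standard(x) - p_survive_experimental(x), 0.0, 0.99
)
print(f"switch to the experimental drug once severity exceeds {threshold:.3f}")
# -> roughly 0.368 with these made-up curves
```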
Yeah, I didn’t mean to imply that causal modeling wasn’t the obvious solution—you’re right about the existence of the leukemia threshold. But I guess in my experience of these mistakes, I often see people taking the action “try to do superior statistical techniques” and that not working for them (including rationalists and not just terrible science reporting sites), whereas I think “identify the places where your model is terrible and call that out” is a better first step for knowing how to build the superior models.
In the ventilator case, for example, I’m not trying to advocate blindly following common sense, but I do think it’s important to incorporate common sense heavily. If people said, “There’s no evidence for ventilators working; maybe hemoglobin is being denatured”, I certainly wouldn’t advocate for more common sense. But instead I tend to see “the statistics show ventilators aren’t working, maybe we shouldn’t use them”, which seems to imply that common sense isn’t being given a say at all. It seems to me like always having common sense as one of your causal models is both an easy sell and a vital piece of the machinery that keeps your statistical techniques from going off the rails at any of their many opportunities to do so.