Sure. I am not an MD, but this is my view as an outsider: the medical profession is quite conservative, and people who publish medical papers have an incentive to publish “fast and loose” rather than be very careful (because hey, you can just write another paper later if a better method comes along!)
Because medicine deals with people, you often can’t do random treatment assignment in your studies (for ethical reasons), so your evidence often comes in the form of observational data, with a ton of confounding of various types. Causal inference can help people make proper use of observational data. This is important: most data we have is observational, and if you are not careful, you are going to get garbage from it. For example, there was a fairly recent paper by Robins et al. that basically validated their method of adjusting for confounding in observational data by showing it reproduced a result found in an RCT.
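To make the confounding point concrete, here is a minimal simulation sketch (all numbers, variable names, and the severity-confounder setup are mine, invented for illustration, not from any real study): a genuinely helpful treatment looks harmful in the raw association because sicker patients are more likely to receive it, and a simple backdoor (stratify-and-reweight) adjustment recovers the true effect.

```python
# Illustrative sketch: sicker patients are much more likely to get the
# pill AND much more likely to do poorly, so a genuinely helpful pill
# looks harmful in the raw association. Adjusting for the confounder
# fixes this (valid here because, by construction, severity is the
# only confounder).
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

severity = rng.binomial(1, 0.4, n)               # confounder: how sick
treated = rng.binomial(1, 0.1 + 0.8 * severity)  # sicker -> more likely treated
# True causal effect of treatment on recovery: +0.20
recovery = rng.binomial(1, 0.3 + 0.2 * treated - 0.28 * severity)

# Naive (confounded) contrast: E[Y | A=1] - E[Y | A=0]
naive = recovery[treated == 1].mean() - recovery[treated == 0].mean()

# Backdoor adjustment: average the severity-specific contrasts,
# weighted by P(severity = s).
adjusted = sum(
    (recovery[(severity == s) & (treated == 1)].mean()
     - recovery[(severity == s) & (treated == 0)].mean()) * (severity == s).mean()
    for s in (0, 1)
)

print(f"naive:    {naive:+.3f}")     # around -0.02: the pill "looks" harmful
print(f"adjusted: {adjusted:+.3f}")  # around +0.20: the true benefit
```

Of course, the catch with real observational data is that you rarely get to assume all the confounders are measured; that is exactly what the methodology is about.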
There is room both for people who go to graduate school and work on new methods for properly drawing causal conclusions from observational data, and for popularizers.
Popularizing this stuff is a lot of what Pearl does, and it is also partly the purpose of Hernán and Robins’s new book:
https://www.facebook.com/causalinference
Full disclosure: obviously this is my area, so of course I am going to say that; don’t take my word for it :).
Here on Less Wrong there are a significant number of mathematically inclined software engineers who know some probability theory (say, they’ve read or worked through at least one of Jaynes and Pearl) but may not have gone to graduate school. How could someone with this background contribute to making causal inference more accessible to researchers? Are there any tools that are particularly under-developed or missing?
I am not sure I know what the most impactful thing to do is at each education level. Let me think about it.
My intuition is that the best thing for “raising the sanity waterline” is what the LW community would do with any other bias: just preaching “association is not causation” to the masses that would otherwise read bad scientific reporting and conclude garbage about e.g. nutrition. Scientists will generally not outright lie, but they are incentivized to overstate a bit, and reporters are incentivized to overstate a bit more. In general, we trust scientific output too much; so much of it is contingent on modeling assumptions, etc.
Explaining good, clear examples of gotchas in observational data helps: e.g. doctors give sicker people a pill, so it might look like the pill is making people sick. It’s the causality version of the “rare cancer ⇒ you likely have a false positive by Bayes’ theorem” example. Unlike Bayes’ theorem, this is the kind of thing people immediately grasp once you point it out, because our native causal processing is good, unlike our native probability processing, which is terrible. Association/causation is just another type of bias to be aware of; it just happens to come up a lot when we read scientific literature.
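For the Bayes’ theorem side of that analogy, the arithmetic is worth seeing once (the numbers below are made up purely for illustration): with a rare condition, even a quite accurate test produces mostly false positives.

```python
# Illustrative numbers only: a rare condition plus a fairly accurate
# test still yields mostly false positives among positive results.
prevalence = 1e-4        # P(disease): 1 in 10,000
sensitivity = 0.99       # P(test+ | disease)
false_pos_rate = 0.01    # P(test+ | no disease)

p_positive = sensitivity * prevalence + false_pos_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"P(disease | test+) = {p_disease_given_positive:.4f}")  # about 0.01
```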
If you are looking for some specific stuff to do as a programmer, email me :). There is plenty to do.