Which academic disciplines care about causality? (I’m guessing statistics, CS, philosophy… anything else?)
Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl’s book, which I haven’t read, is the right approach? If not, is it possible to list the main competing approaches? Does there exist a reasonably neutral high-level summary of the field?
Which academic disciplines care about causality? (I’m guessing statistics, CS, philosophy… anything else?)
On some level any empirical science cares, because the empirical sciences all care about cause-effect relationships. In practice, the ‘penetration rate’ is path-dependent (that is, it depends on the history of the field, the personalities involved, etc.).
To add to your list, there are people in public health (epidemiology, biostatistics), social science, psychology, political science, economics/econometrics, and computational bio/omics who care quite a bit. Very few philosophers (excepting the CMU gang, and a few other places) think about causal inference at the level of detail a statistician would. CS/ML do not care very much (even though Pearl himself is in CS).
Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl’s book, which I haven’t read, is the right approach? If not, is it possible to list the main competing approaches?
I think there is as much agreement as there can reasonably be for a concept such as causality (that is, a philosophically laden concept that’s fun to argue about). People model it in lots of ways. I will try to give a rough taxonomy, and will tell you where Pearl lies.
Interventionist vs non-interventionist
Most modern causal inference folks are interventionists (including Pearl, Rubin, Robins, etc.). The ‘Nicene creed’ for interventionists is: (a) an intervention (forced assignment) is key for representing cause/effect, (b) interventions and conditioning are not the same thing, (c) you express interventions in terms of ordinary probabilities using the g-formula/truncated factorization/manipulated distribution (different names for the same thing). The concept of an intervention is old; it goes back at least to Neyman in the 1920s, possibly even earlier.
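Concretely, (c) says the following. This is a minimal sketch for the point-treatment case (Robins’ g-formula generalizes it to sequential/longitudinal settings): if the observed distribution factorizes according to the causal model, the intervention do(X = x) simply deletes the factor for X and fixes its value everywhere else:

$$
p(v_1, \ldots, v_n) = \prod_{i=1}^{n} p(v_i \mid pa_i)
\;\;\Longrightarrow\;\;
p(v \mid do(X = x)) = \prod_{i \,:\, V_i \neq X} p(v_i \mid pa_i) \,\Big|_{X = x}.
$$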
To me, non-interventionists fall into three categories: ‘naive’, ‘abstract’, and ‘indifferent’. Naive non-interventionists are not using interventions because they haven’t thought about things hard enough, and will thus get things wrong. Some EDT folks are in this category. People who ask ‘but why can’t we just use conditional probabilities’ are often in this set. Abstract non-interventionists are not using interventions because they have in mind some formalism that has interventions as a special case, and they have no particular need for the special case. I think David Lewis was in this camp. Joe Halpern might be in this set; I will ask him sometime. Indifferent non-interventionists operate in a field where there is little difference between conditioning and interventions (due to lack of interesting confounding), so there is no need to model interventions explicitly. Reinforcement learning people and people who only work with RCT data are in this set.
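Here is a toy numerical sketch of points (b) and (c), and of why the ‘indifferent’ camp can get away with ignoring the distinction (the model and all numbers are made up for illustration):

```python
# Toy model with confounding: U -> X, U -> Y, X -> Y.
# All numbers below are made up for illustration.

p_u = {0: 0.5, 1: 0.5}                      # P(U = u)
p_x1_given_u = {0: 0.2, 1: 0.8}             # P(X = 1 | U = u)
p_y1_given_xu = {(0, 0): 0.1, (0, 1): 0.5,  # P(Y = 1 | X = x, U = u)
                 (1, 0): 0.4, (1, 1): 0.9}

def joint(u, x, y):
    """P(U = u, X = x, Y = y) under the factorization P(u) P(x | u) P(y | x, u)."""
    px = p_x1_given_u[u] if x == 1 else 1 - p_x1_given_u[u]
    py = p_y1_given_xu[(x, u)] if y == 1 else 1 - p_y1_given_xu[(x, u)]
    return p_u[u] * px * py

# Ordinary conditioning: P(Y = 1 | X = 1) = P(X = 1, Y = 1) / P(X = 1).
numer = sum(joint(u, 1, 1) for u in (0, 1))
denom = sum(joint(u, 1, y) for u in (0, 1) for y in (0, 1))
print("P(Y = 1 | X = 1)     =", numer / denom)  # 0.8

# Intervention, via the truncated factorization: delete the P(X | U)
# factor, fix X = 1, and sum out U.
p_do = sum(p_u[u] * p_y1_given_xu[(1, u)] for u in (0, 1))
print("P(Y = 1 | do(X = 1)) =", p_do)           # 0.65
```

The two numbers disagree because U confounds X and Y. If P(X = 1 | U) did not depend on U (i.e., X randomized, as in an RCT), the two numbers would coincide, which is exactly the situation the ‘indifferent’ camp lives in.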
Counterfactualists vs non-counterfactualists
Most modern causal inference folks are counterfactualists (including Pearl, Rubin, Robins, etc.). To a counterfactualist it is important to think about a hypothetical outcome under a hypothetical intervention. Obviously all counterfactualists are interventionists. A noted non-counterfactualist interventionist is Phil Dawid. Counterfactuals are also due to Neyman, but were revived and extended by Rubin in the 70s.
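For reference, the usual Neyman-Rubin notation for this (a sketch of the notation only, not of any particular model): Y(x) denotes the hypothetical outcome had X been set, possibly contrary to fact, to x; the observed outcome obeys Y = Y(X) (‘consistency’); and a typical target of inference is the average causal effect

$$
\text{ACE} = E[Y(1)] - E[Y(0)].
$$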
Graphical vs non-graphical
Whether you like using graphs or not. Modern causal inference is split on this point. Folks in the Rubin camp do not like graphs (for reasons that are not entirely clear; what I have heard is that they find them distracting from important statistical modeling issues (??)). Folks in the Pearl/SGS/Robins/Dawid/etc. camp like graphs. You don’t have to have a particular commitment to any earlier point to have an opinion on graphs (indeed, lots of graphical models are not about causality at all). In the context of causality, graphs were first used by Sewall Wright for pedigree analysis (1920s). Lauritzen, Pearl, etc. gave a modern synthesis of graphical models. Spirtes/Glymour/Scheines and Pearl revived a causal interpretation of graphs in the 90s.
“Popperians” vs “non-Popperians”
Whether you restrict yourself to testable assumptions. Pearl is non-Popperian; his models make assumptions that can only be tested via a time machine or an Everett branch jumping algorithm. Rubin is also non-Popperian because of “principal stratification.” People who do “mediation analysis” are generally non-Popperian. Dawid, Robins, and Richardson are Popperians; they try to stick to testable assumptions only. I think even for Popperians some of their assumptions must be untestable (but I think this is probably necessary for statistical inference in general). I think Dawid might claim all counterfactualists are non-Popperian in some sense.
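To give a concrete example of an assumption that needs the time machine: mediation analysis targets ‘natural’ effects, such as the natural direct effect of X on Y relative to a mediator M,

$$
\text{NDE} = E[Y(1, M(0))] - E[Y(0, M(0))],
$$

where Y(1, M(0)) is the outcome under X set to 1 while M takes whatever value it would have taken under X set to 0. No single experiment can realize both settings of X at once, so assumptions about such ‘cross-world’ quantities are untestable in principle.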
I am “a graphical non-Popperian counterfactualist” (and thus interventionist).
Does there exist a reasonably neutral high-level summary of the field?
We are working on it.