Reductionism is not the ultimate tool for causal inference
This is a link post for a Substack post I made.
Sometimes, when doing causal inference, people strongly wish to see a mechanism. For instance, for the claim that some trait is genetically heritable, they want to see the genes through which inheritance passes, and the mechanisms through which the genes act. That is, they want a reductionistic model, where the overall causal effect is broken down into mechanisms that are causally linked together.
Certainly this can be valuable at times! But we shouldn’t think of it as being the only valid method for understanding causality. Here are some reasons why.
Circularity
You want to know the mechanism in order to know the causal effects. But a mechanism is itself built out of causal claims; it requires picking out a set of variables, making claims about the causal relations between those variables, and combining those causal effects into an overall mechanism.
If you have two variables X and Z, and there is a claim that X influences Z, then you might ask what the mechanism is, i.e. through which mediators Y it is that X exerts its influence on Z. But rather than solving the causal inference problem, this multiplies it: now you have to figure out the effects of X on Y, and the effects of Y on Z. In the worst case, this could lead to infinite regress, as you chase mechanisms all the way down to quantum field theory. At some point you’ve got to resort to a mechanism-less causal theory.
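As a toy illustration of that regress (a minimal sketch with an invented linear model; the coefficients are made up for this post, not taken from anywhere), asking for the mechanism turns one estimation problem into two:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linear structural causal model: X -> Y -> Z (coefficients invented)
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)       # the X -> Y piece of the mechanism
z = 0.5 * y + rng.normal(size=n)       # the Y -> Z piece of the mechanism

# The mechanism-less question: the total effect of X on Z.
total_effect = np.polyfit(x, z, 1)[0]          # ~0.8 * 0.5 = 0.4

# The reductionistic question splits it into two causal inference problems:
effect_x_on_y = np.polyfit(x, y, 1)[0]         # ~0.8
effect_y_on_z = np.polyfit(y, z, 1)[0]         # ~0.5 (only this easy because nothing
                                               # confounds Y and Z in this toy world)

print(total_effect, effect_x_on_y * effect_y_on_z)
```

In the toy world the decomposition works out, but only because each step was itself easy to identify; in general, each mediating link is its own causal inference problem.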
To take a physical example: it’s nice that we now know that the reason objects can push each other on contact is that the electrons in the atoms of the objects repel each other. But we had a good understanding of the fact that contact allows objects to exert a force on each other long before we had a reductionistic understanding of how.
Multiple causes and multiple mediation
Having more educated parents is an advantage for students when doing homework, because parents can help their children with that homework.
But there are an enormous number of different homework tasks, and a fairly large number of types and levels of education parents could have. To understand the effect of parental education or intelligence on homework help in a reductionistic manner, you would have to take every single piece of homework, figure out how much more educated parents help with it, and add that up into a total effect.
This is generally not very practical. I think it would be great if we could figure out ways to make it more practical (maybe sampling?), but so far this approach seems to be used very rarely in science, and especially rarely in social science.
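As a sketch of what the sampling idea might look like, here is a toy model with entirely invented numbers: many small homework-task mediators, where summing every path is expensive but a random sample of tasks recovers roughly the same total.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 10,000 homework tasks, each a tiny mediating path from
# parental education to the child's outcome (all numbers invented).
n_tasks = 10_000
help_given = rng.uniform(0.0, 0.1, n_tasks)    # extra help educated parents give on task i
task_payoff = rng.uniform(0.0, 0.05, n_tasks)  # how much help on task i improves the outcome

# Full reductionistic accounting: add up every single mediated path.
total_effect = np.sum(help_given * task_payoff)

# Sampling shortcut: estimate that sum from a random subset of tasks.
sample = rng.choice(n_tasks, size=200, replace=False)
estimate = n_tasks * np.mean(help_given[sample] * task_payoff[sample])

print(total_effect, estimate)   # the sampled estimate lands near the exhaustive total
```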
Suppressor effects
Hospitals are filled with sick people, who have all sorts of germs that might infect you. This provides a mechanism by which hospitals will worsen your health. Therefore we can conclude that going to the hospital will cause your health to get worse.
Now there will certainly be some conditions under which this is true, but as a general statement, the claim is misleading. The reason is that getting infected by germs is not the only mechanism by which hospitals influence your health: hospitals also dispense treatments, which might sometimes improve your health.
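Here is a toy simulation of the hospital example (all effect sizes invented for illustration): the germ mechanism really is harmful, but the treatment mechanism more than offsets it, so the total effect has the opposite sign of the one mechanism you happened to inspect.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

went_to_hospital = rng.integers(0, 2, n)    # randomly assigned, to keep the toy simple

# Two mechanisms with opposite signs (effect sizes invented):
infection = went_to_hospital * rng.binomial(1, 0.05, n)   # hospitals expose you to germs
treatment = went_to_hospital * rng.binomial(1, 0.60, n)   # hospitals also treat you

health = 2.0 * treatment - 1.0 * infection + rng.normal(size=n)

# The germ mechanism alone suggests hospitals hurt you (about -0.05 on average)...
germ_path = -1.0 * 0.05
# ...but the total effect of going to the hospital is positive:
total_effect = health[went_to_hospital == 1].mean() - health[went_to_hospital == 0].mean()

print(germ_path, total_effect)   # roughly -0.05 versus roughly +1.15
```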
In cases we understand well, suppressor effects will be obvious. But the cases we understand well are also the ones least in need of causal inference, so this is not much help.
Agency
Suppose there is some pizza restaurant in a city. Whenever people want to eat a pizza, they order a pizza there. Seems like a clear-cut case of a mechanism for why people eat pizza, right?
But imagine that this pizza restaurant were closed; in that case, there might be some other pizza restaurant that people would order from instead. Causal effects can be defined in terms of counterfactuals (“what would the outcome be if this variable were different?”), and in the case of the pizza restaurant, the answer is: mostly the same. The pizza restaurant is not a cause of people eating pizza, even if it is in some way mechanistically related.
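To make the counterfactual contrast concrete, here is a sketch that assumes an invented backup-restaurant rule (not a claim about how people actually choose): compare the outcome with restaurant A open against the outcome with it closed, and the difference is zero even though A mechanistically serves all the pizza.

```python
def pizzas_eaten(restaurant_a_open: bool, n_people: int = 1000) -> int:
    """Toy world: anyone who wants pizza orders from restaurant A if it is open,
    and otherwise from restaurant B. Either way, they end up eating pizza."""
    eaten = 0
    for _ in range(n_people):
        if restaurant_a_open:
            eaten += 1   # mechanistically, A served the pizza
        else:
            eaten += 1   # counterfactual world: B serves it instead
    return eaten

# The counterfactual contrast: what would the outcome be if this variable were different?
effect_of_restaurant_a = pizzas_eaten(True) - pizzas_eaten(False)
print(effect_of_restaurant_a)   # 0: A is part of the mechanism, yet not a cause of pizza eating
```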
Measurement invasiveness
Some of the above problems can be solved by being extremely thorough in observing everything that’s going on. Unfortunately, this is also very invasive.
In medicine, observing everything that’s going on would require you to run pretty much every medical test. In psychology or sociology, it would require many violations of privacy. And no matter how you try to maximize observation, it takes far more time than observing narrowly. Plus, measurement often disturbs the system you are studying in some way, making it questionable whether your results will extrapolate to the cases you don’t measure.
There are alternatives to reductionism
There is a whole zoo of causal inference methods out there. A classic scientific alternative to reductionism is the experiment: “just” modify the variable in question and see what happens. Of course this has its own problems, some of which overlap with the problems listed above, but it is still a worthwhile approach at times.
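As a sketch of what that amounts to (with an invented data-generating process), randomizing the variable and comparing outcomes recovers the total causal effect without modeling any of the mediating machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

def outcome(treated, noise):
    # Some complicated mechanism we never bother to identify:
    hidden_mediator = 1.3 * treated + noise[:, 0]
    return 0.5 * hidden_mediator + 0.2 * treated + noise[:, 1]

treated = rng.integers(0, 2, n)        # randomization removes confounding
noise = rng.normal(size=(n, 2))
y = outcome(treated, noise)

# Estimated total effect, with no mechanistic decomposition required:
ate = y[treated == 1].mean() - y[treated == 0].mean()
print(ate)   # roughly 0.5 * 1.3 + 0.2 = 0.85
```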
Other times, there may be a viable non-reductionistic theory. Or, rather than true experiments, there may be natural experiments you can use. And you will probably need some reductionism sometimes, but it does not have to be full reductionism: for instance, while we do not know exactly what effects specific genes have, we do know that genes can have a nearly endless variety of biological effects.
The general rule should be to use the methods which most efficiently and reliably increase causal knowledge of the world, subject to the various practical constraints we face; we should not limit ourselves to only reductionistic explanations, as reductionism simply cannot work as the only strategy.
Thanks to Justis Mills for proofreading this post.