If you pick parameters of your causal model randomly, then almost surely the model will be faithful (formally, in Robins’ phrasing: “in finite dimensional parametric families, the subset of unfaithful distributions typically has Lebesgue measure zero on the parameter space”). People interpret this to mean that faithfulness violations are rare enough to be ignored. It is not so, sadly.
First, Nature doesn’t pick causal models randomly. In fact, cancellations are quite useful: homeostasis and gene regulation are often “implemented” by faithfulness violations.
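A small simulation sketch of both halves of this point (the DAG and all coefficient values here are my own illustrative choices): randomly drawn parameters of a linear model almost surely give a faithful distribution, while a tuned choice makes the direct and indirect effects of X on Y cancel exactly, so X and Y look independent even though X causes Y.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(a, b, c):
    """Linear SEM over the DAG X -> Z -> Y plus a direct edge X -> Y:
    Z = a*X + noise, Y = b*Z + c*X + noise.
    The total effect of X on Y is a*b + c."""
    x = rng.standard_normal(n)
    z = a * x + rng.standard_normal(n)
    y = b * z + c * x + rng.standard_normal(n)
    return x, y

# Randomly drawn coefficients: exact cancellation (a*b + c == 0) is a
# measure-zero event, so X and Y come out correlated (faithful).
a, b, c = rng.standard_normal(3)
x, y = simulate(a, b, c)
corr_random = np.corrcoef(x, y)[0, 1]

# Tuned coefficients with c = -a*b: the two paths cancel exactly, so
# corr(X, Y) ~ 0 even though X causes Y along both paths (unfaithful).
x, y = simulate(2.0, 1.0, -2.0)
corr_tuned = np.corrcoef(x, y)[0, 1]

print(corr_random, corr_tuned)
```

The tuned model is exactly the kind of fine-balanced mechanism a homeostatic system might implement on purpose.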
Second, we may have a model that is only weakly faithful, that is, hard to tell apart from an unfaithful one with few samples. What is worse, it is difficult to say in advance how many samples one would need to tell a faithful model apart from an unfaithful one. In statistical terms, this is sometimes phrased as the existence of “pointwise consistent” tests but the non-existence of “uniformly consistent” tests.
I suggest the following paper for more on this:
http://www.hss.cmu.edu/philosophy/scheines/uniform-consistency.pdf
See also this (the distinction comes from analysis): http://en.wikipedia.org/wiki/Uniform_convergence
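To make the weak-faithfulness point concrete, here is a toy simulation (all numbers are my own illustrative choices, and I use the usual normal approximation to the test that a correlation is zero): when the total effect is a tiny eps, the model is indistinguishable from independence at modest sample sizes, and nothing in the data tells you in advance how large n has to be.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Nearly unfaithful model: the direct and indirect effects of X on Y
# almost cancel, leaving a tiny total effect eps (illustrative value).
eps = 0.02

def independence_pvalue(n):
    """Approximate two-sided p-value of the test that corr(X, Y) = 0
    on n samples (normal approximation: z = r * sqrt(n) for small r)."""
    x = rng.standard_normal(n)
    y = eps * x + rng.standard_normal(n)
    r = np.corrcoef(x, y)[0, 1]
    z = r * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

p_small = independence_pvalue(500)      # typically large: looks independent
p_large = independence_pvalue(200_000)  # tiny: dependence finally detectable
print(p_small, p_large)
```

The test is pointwise consistent (for any fixed eps, some n suffices), but for every n there is an eps small enough to fool it, which is exactly the failure of uniform consistency.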
Kevin Kelly at CMU thinks a lot about “having to change your mind” due to lack of uniform consistency.
Much of what Pearl et al. do (identification of causal effects, counterfactual reasoning, actual cause, etc.) does not rely on faithfulness. Faithfulness typically comes up when one wishes to learn causal structure from data, and even in this setting there exist methods which do not require it (I believe the LiNGAM algorithm does not).
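As a toy illustration of why something like LiNGAM can sidestep faithfulness (this is my own sketch of the underlying principle, not the actual ICA-based algorithm, and the dependence score is deliberately crude): with non-Gaussian noise, the regression residual is independent of the regressor only in the causal direction, so direction is identified by an independence check rather than by a faithfulness-style conditional-independence pattern.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# x -> y with non-Gaussian (uniform) noise; coefficients illustrative.
x = rng.uniform(-1, 1, n)
y = x + rng.uniform(-1, 1, n)

def residual(target, regressor):
    """Residual of the least-squares regression of target on regressor."""
    beta = np.cov(target, regressor)[0, 1] / np.var(regressor)
    return target - beta * regressor

def dependence(u, v):
    """Crude nonlinear dependence score: |corr(u^2, v^2)|.

    Zero when u and v are independent. In the anti-causal direction the
    residual is a nontrivial linear mix of the same independent
    non-Gaussian variables as the regressor, so (by Darmois-Skitovich)
    it stays dependent, and this score picks that up.
    """
    return abs(np.corrcoef(u ** 2, v ** 2)[0, 1])

forward = dependence(x, residual(y, x))   # causal direction: ~ 0
backward = dependence(y, residual(x, y))  # anti-causal: clearly > 0
print(forward, backward)
```

With Gaussian noise both directions would pass the independence check, which is why the non-Gaussianity assumption is doing the work here.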
The very subject of my paper. I don’t think the magnitude of the obstacle has yet been fully appreciated by people who are trying to extend methods of causal discovery in that direction. And in the folklore there are frequent statements like this one, which is simply false:

“Empirically observed covariation is a necessary but not sufficient condition for causality.”

(Edward Tufte, quoted here.)
I think causal discovery is sometimes sold not as a way of establishing causal structure from data, but as a way of narrowing down the set of experiments one would have to run to establish causal structure definitively, in domains which are poorly understood but in which we can experiment (computational biology, etc.).

If phrased in this way, assuming faithfulness is not “so bad.” It is true that many folks in causal inference and related areas are quite skeptical of faithfulness-type assumptions, and rightly so. To me, it’s the lack of uniform consistency that’s the real killer.
In Part II of a talk I gave (http://videolectures.net/uai2011_shpitser_causal/) there is an example of how you can do completely ridiculous, magical things if you assume a type of faithfulness. See 31:07.