Problem 1 is mostly handled automagically by doing Bayesian inference to choose our model. The key thing to notice is that constructing the “unnatural” variable requires knowing what value to give p, which is something we’d typically have to learn from the data itself. That means, before seeing the data, that particular value of p would typically have low prior. Furthermore, as more data lets us estimate things more precisely, we’d have to make p more precise in order to keep perfectly negating things, and p-to-within-precision has lower and lower prior as the required precision gets tighter. (Though there will be cases where we expect a priori that parameters will take on values which make the causal structure ambiguous; agentic systems under selection pressure are a central example, and that’s one of the things which make agentic systems particularly interesting to study.)
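As a quick numerical illustration of that shrinking-prior effect (the Beta(2, 2) prior and the cancelling value p* = 0.37 here are made up for the sketch, not anything from a real model): under any smooth prior, the mass within ±ε of the exactly-cancelling value scales like ε, so the “unnatural” model pays a growing Occam penalty as the data pin p down more tightly.

```python
from scipy import stats

# Illustrative sketch: prior mass near an exactly-cancelling parameter.
# Assume a Beta(2, 2) prior over p, and suppose p* = 0.37 is the value
# that would make the "unnatural" variable perfectly mask the causal
# arrow (both choices are hypothetical).
prior = stats.beta(2, 2)
p_star = 0.37

for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    # Prior probability that p lands within +/- eps of the magic value.
    mass = prior.cdf(p_star + eps) - prior.cdf(p_star - eps)
    print(f"eps = {eps:.0e}: prior mass near p* = {mass:.2e}")

# With n observations the data localize p to roughly +/- 1/sqrt(n), so
# the prior mass of "p close enough to cancel" shrinks like 1/sqrt(n),
# and model comparison increasingly favors the natural structure.
```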
Problems 2-3 are mostly handled by building models of a whole bunch of variables at once. We’re not just looking for e.g. a single variable which correlates with low neonatal IQ but not with lead. Slightly more generally, for instance, a combination of variables which correlates with low neonatal IQ but not with lead, conditional on some other variables, would suffice (assuming we correctly account for multiple hypothesis testing). And the “conditional on some other variables” part could, in principle, account for SES, insofar as we use enough variables to basically determine SES to precision sufficient for our purposes.
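A minimal sketch of how that could look, on simulated data (the variable names, coefficients, and the regress-out-proxies approach are my own choices for the illustration): a variable confounded with lead only through SES shows a clear raw correlation with lead, but that correlation mostly vanishes once we condition on a handful of noisy SES proxies.

```python
import numpy as np

def partial_corr(x, y, controls):
    # Correlation between x and y after regressing both on `controls`.
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
n = 50_000
ses = rng.normal(size=n)                                 # unobserved confounder
proxies = ses[:, None] + 0.3 * rng.normal(size=(n, 5))   # observed SES proxies
lead = -ses + rng.normal(size=n)
z = ses + rng.normal(size=n)         # hypothetical variable driven by SES only

print(np.corrcoef(z, lead)[0, 1])    # clearly nonzero: confounding via SES
print(partial_corr(z, lead, proxies))  # ~ 0: proxies pin down SES well enough
```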
More generally than that: once we’ve accepted that correlational data can power Bayesian inference of causality, we can in principle just do Bayesian inference on everything at once, and account for all the evidence about causality which the data provides. Some of that evidence will be variables that correlate with one thing but not the other, but some will be other kinds of evidence entirely (e.g. Markov blankets informing graph structure, or two independent variables becoming dependent when conditioning on a third). Variables which correlate with one thing but not the other are just one particularly intuitive small-system example of the kind of evidence we can get about causal structure.
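The collider case in particular is easy to see in a few lines (a toy simulation, not anyone’s real data): x and y are independent by construction, but conditioning on their common effect z induces a strong dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = rng.normal(size=n)                  # independent of x by construction
z = x + y + 0.5 * rng.normal(size=n)    # collider: x -> z <- y

print(np.corrcoef(x, y)[0, 1])          # ~ 0: marginally independent

# "Condition on z" by restricting to a narrow slice of its values.
mask = np.abs(z) < 0.1
print(np.corrcoef(x[mask], y[mask])[0, 1])  # strongly negative: dependent given z
```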
(Also I strong-upvoted your comment for rapid updating, kudos.)
I don’t think problem 1 is so easy to handle. It’s true that I’ll have a hard time finding a variable that’s perfectly independent of swimming but correlated with camping. However, I don’t need to be perfect to trick your model.
Suppose every 4th of July, you go camping at one particular spot that does not have a lake. Then we observe that July 4th correlates with camping but does not correlate with swimming (or even negatively correlates with swimming). The model updates towards swimming causing camping. Getting more data on these variables only reinforces the swimming->camping direction.
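Here’s a toy simulation of the trap (all the probabilities are invented), where camping genuinely causes swimming but only at sites with a lake, and the July-4th site has none:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
july4 = rng.random(n) < 1 / 365               # is it July 4th?
spontaneous = rng.random(n) < 0.05            # ordinary camping trips
camping = july4 | spontaneous                 # always camp on July 4th
lake = camping & ~july4 & (rng.random(n) < 0.6)  # July-4th site has no lake
swimming = lake & (rng.random(n) < 0.9)       # ground truth: camping -> swimming

corr = lambda a, b: np.corrcoef(a.astype(float), b.astype(float))[0, 1]
print(corr(july4, camping))     # clearly positive
print(corr(july4, swimming))    # near zero, slightly negative
print(corr(camping, swimming))  # positive

# A naive "correlates with camping but not with swimming" heuristic now
# reads july4 as evidence against the true camping -> swimming arrow.
```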
To update in the other direction, you need to find a variable that correlates with swimming but not with camping. But what if you never find one? What if there’s no simple thing that causes swimming? Say I go swimming based on the roll of a die, but you never get to see the die. Then you’re toast!
Slightly more generally, for instance, a combination of variables which correlates with low neonatal IQ but not with lead, conditional on some other variables, would suffice (assuming we correctly account for multiple hypothesis testing). And the “conditional on some other variables” part could, in principle, account for SES, insofar as we use enough variables to basically determine SES to precision sufficient for our purposes.
Oh, sure, I get that, but I don’t think you’ll manage to do this in practice. Like, go ahead and prove me wrong, I guess? Is there a paper that does this for anything I care about? (E.g. exercise and overweight, or lead and IQ, or anything else of note.) Ideally I’d get to download the data and check whether the results are robust to deleting a variable or to duplicating a variable (when duplicating, I’ll add noise so that the variables aren’t exactly identical).
If you prefer, I can try to come up with artificial data for the lead/IQ thing, in which I generate all variables to be downstream of non-observed SES, but in which IQ is also slightly downstream of lead (and other things are slightly downstream of other things in a randomly chosen graph). I’ll then let you run your favorite algorithm on it. What’s your favorite algorithm, by the way? What’s been mentioned so far sounds like it should take exponential time (e.g. enumerating over all orderings of the variables, drawing the Bayes net given each ordering, and then picking the one with the fewest parameters).
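For concreteness, here is a sketch of what that enumerate-orderings scheme sounds like (the linear-Gaussian models and BIC scoring are my own choices for the illustration, not anything proposed above): for each ordering, pick the best parent set for each variable from among its predecessors, then keep the highest-scoring ordering. The permutations loop is the exponential part.

```python
import numpy as np
from itertools import combinations, permutations

def bic_regression(y, Z):
    # BIC-style score of a linear-Gaussian regression of y on columns of Z.
    n = len(y)
    D = np.ones((n, 1)) if Z is None else np.column_stack([np.ones(n), Z])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    resid = y - D @ beta
    sigma2 = resid @ resid / n
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1) - 0.5 * D.shape[1] * np.log(n)

def best_parent_score(X, v, candidates):
    # Best score over all parent subsets drawn from `candidates`.
    return max(
        bic_regression(X[:, v], X[:, list(ps)] if ps else None)
        for k in range(len(candidates) + 1)
        for ps in combinations(candidates, k)
    )

def score_ordering(X, order):
    return sum(best_parent_score(X, v, order[:i]) for i, v in enumerate(order))

rng = np.random.default_rng(0)
n = 5_000
a = rng.normal(size=n)
b = rng.normal(size=n)                  # a and b independent
c = a + b + 0.5 * rng.normal(size=n)    # collider: a -> c <- b
X = np.column_stack([a, b, c])

# All 3! = 6 orderings are checkable here; with d variables it's d!.
for order in permutations(range(3)):
    print(order, round(score_ordering(X, order), 1))
# Orderings that put c (index 2) last need fewer edges and score best.
```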
(This is getting into the weeds enough that I can’t address the points very quickly anymore; they’d require longer responses. But I’m leaving a minor note about this part:
Suppose every 4th of July, you go camping at one particular spot that does not have a lake. Then we observe that July 4th correlates with camping but does not correlate with swimming (or even negatively correlates with swimming).
For purposes of causality, negative correlation is the same as positive. The only distinction we care about, there, is zero or nonzero correlation.)
For purposes of causality, negative correlation is the same as positive. The only distinction we care about, there, is zero or nonzero correlation.)
That makes sense. I was wrong to emphasize the “even negatively”, and should instead stick to something like “slightly negatively”. You have to care about large vs. small correlations or else you’ll never get started doing any inference (no correlations are ever exactly 0).
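(Toy check of that last point: sample correlations between genuinely independent variables are never exactly zero; they fluctuate on the order of 1/sqrt(n).)

```python
import numpy as np

rng = np.random.default_rng(0)
for n in [100, 10_000, 1_000_000]:
    # Two independent draws: their sample correlation is ~ 1/sqrt(n), not 0.
    r = np.corrcoef(rng.normal(size=n), rng.normal(size=n))[0, 1]
    print(f"n = {n:>9}: sample corr = {r:+.5f}, 1/sqrt(n) = {1/np.sqrt(n):.5f}")
```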