I don’t think problem 1 is so easy to handle. It’s true that I’ll have a hard time finding a variable that’s perfectly independent of swimming but correlated with camping. However, I don’t need to be perfect to trick your model.
Suppose every 4th of July, you go camping at one particular spot that does not have a lake. Then we observe that July 4th correlates with camping but does not correlate with swimming (or even negatively correlates with swimming). If camping caused swimming, we’d expect the July 4th signal to propagate through to swimming; since it doesn’t, the model updates towards swimming causing camping. Getting more data on these variables only reinforces the swimming->camping direction.
To update in the other direction, you need to find a variable that correlates with swimming but not with camping. But what if you never find one? What if there’s no simple thing that causes swimming? Say I go swimming based on the roll of a die, but you never get to see the die. Then you’re toast!
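For concreteness, here’s a minimal simulation of this setup (the probabilities are made up; the point is just the correlation pattern the model sees):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # one row per day

july4 = rng.random(n) < 1 / 365               # it's July 4th (rare)
lake_trip = rng.random(n) < 0.10              # independent lake outings
camping = july4 | lake_trip                   # camp on July 4th (no lake) or on lake trips
swimming = lake_trip & (rng.random(n) < 0.8)  # swim only on lake trips

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(corr(july4, camping))     # clearly positive
print(corr(july4, swimming))    # approximately zero
print(corr(swimming, camping))  # clearly positive
```

July 4th correlates with camping but (up to sampling noise) not with swimming, which is exactly the pattern that pushes the model towards swimming->camping.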
Slightly more generally, a combination of variables which correlates with low neonatal IQ but not with lead, conditional on some other variables, would suffice (assuming we correctly account for multiple hypothesis testing). And the “conditional on some other variables” part could, in principle, account for SES, insofar as we use enough variables to pin down SES to a precision sufficient for our purposes.
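Concretely, the kind of check I mean is a conditional-independence test. A minimal sketch using partial correlations, with made-up variable names and coefficients (the two SES proxies stand in for the “other variables” we condition on):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical structure: unobserved SES confounds everything,
# but we observe two noisy SES proxies we can condition on.
ses = rng.normal(size=n)
proxy1 = ses + rng.normal(scale=0.3, size=n)
proxy2 = ses + rng.normal(scale=0.3, size=n)
lead = -0.8 * ses + rng.normal(scale=0.6, size=n)
iq = 0.8 * ses - 0.5 * lead + rng.normal(scale=0.5, size=n)  # direct lead -> IQ effect

def partial_corr(x, y, Z):
    """Correlate the residuals of x and y after regressing out Z."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    rx = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]
    ry = y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

raw = np.corrcoef(lead, iq)[0, 1]
adj = partial_corr(lead, iq, np.column_stack([proxy1, proxy2]))
# raw is strongly negative (confounding plus the direct effect);
# adj shrinks towards the direct effect but stays negative.
```

The better the proxies pin down SES, the closer the partial correlation gets to reflecting only the direct lead effect.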
Oh, sure, I get that, but I don’t think you’ll manage to do this in practice. Like, go ahead and prove me wrong, I guess? Is there a paper that does this for anything I care about (e.g. exercise and overweight, or lead and IQ, or anything else of note)? Ideally I’d get to download the data and check whether the results are robust to deleting a variable or to duplicating a variable (when duplicating, I’ll add noise so that the variables aren’t exactly identical).
If you prefer, I can try to come up with artificial data for the lead/IQ thing in which I generate all variables to be downstream of non-observed SES, but in which IQ is also slightly downstream of lead (and other things are slightly downstream of other things in a randomly chosen graph). I’ll then let you run your favorite algorithm on it. What’s your favorite algorithm, by the way? What’s been mentioned so far sounds like it should take exponential time (e.g. enumerating over all orderings of the variables, drawing the Bayes net for each ordering, and then picking the one with the fewest parameters; with p variables there are p! orderings, so that’s at least exponential time).
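To be concrete about the generator, here’s roughly the kind of thing I have in mind (all coefficients, the number of covariates, and the random-edge probability are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
k = 5  # number of extra observed covariates

# Everything observed is downstream of unobserved SES.
ses = rng.normal(size=n)

lead = -0.7 * ses + rng.normal(scale=0.7, size=n)
iq = 0.8 * ses - 0.2 * lead + rng.normal(scale=0.5, size=n)  # IQ slightly downstream of lead

# Other covariates: downstream of SES, plus random sparse edges among themselves.
covs = np.empty((n, k))
for j in range(k):
    x = 0.6 * ses + rng.normal(scale=0.8, size=n)
    for i in range(j):
        if rng.random() < 0.3:  # randomly chosen extra edge from an earlier covariate
            x = x + 0.3 * covs[:, i]
    covs[:, j] = x

observed = np.column_stack([lead, iq, covs])  # SES itself is withheld
```

You’d get only `observed`; the challenge is whether your algorithm recovers the small direct lead -> IQ edge rather than just the SES confounding.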
(This is getting into the weeds enough that I can’t address the points very quickly anymore; they’d require longer responses. But I’m leaving a minor note about this part:
Suppose every 4th of July, you go camping at one particular spot that does not have a lake. Then we observe that July 4th correlates with camping but does not correlate with swimming (or even negatively correlates with swimming).
For purposes of causality, negative correlation is the same as positive. The only distinction we care about, there, is zero or nonzero correlation.)
That makes sense. I was wrong to emphasize the “even negatively”, and should instead stick to something like “slightly negatively”. You have to care about large vs. small correlations or else you’ll never get started doing any inference (no correlations are ever exactly 0).
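(For what it’s worth, the “never exactly 0” point is easy to see in simulation: even fully independent samples show a small nonzero sample correlation, on the order of 1/√n.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = rng.normal(size=10_000)  # truly independent of x

r = np.corrcoef(x, y)[0, 1]
# r is small (roughly 1/sqrt(n) in magnitude), but essentially never exactly 0
```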