This or something similar is the starting point for most approaches to causality, but in general there are going to be many factors that have a causal relationship with each variable in your model, and so there are plenty of opportunities for the inequality relating f(y|x) and f(x|y) to switch sign. I haven’t done much work with causality, though, so take this with a grain of salt. Here is a recent paper on the subject, if you’re interested.
EDIT: I guess what I’m really trying to say is that x may only have a causal influence on y if a bunch of other factors are present, so it can be hard to tell what’s going on just from your graphical model. I’m substantially less confident than 15 seconds ago that this comment makes sense, though.
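For what it’s worth, here is a minimal numerical sketch of that sign-switching (toy numbers of my own, not from any paper): the causal mechanism P(y | x) is held completely fixed and only the base rate of x is changed, which is already enough to reverse which of f(y|x) and f(x|y) is larger.

```python
# Toy x -> y model: the mechanism P(y=1 | x) never changes, only P(x=1) does.
def conditionals(p_x1, p_y1_given_x1=0.8, p_y1_given_x0=0.1):
    p_y1 = p_x1 * p_y1_given_x1 + (1 - p_x1) * p_y1_given_x0  # marginal P(y=1)
    p_x1_given_y1 = p_x1 * p_y1_given_x1 / p_y1               # Bayes: f(x=1 | y=1)
    return p_y1_given_x1, p_x1_given_y1                        # f(y|x), f(x|y)

for p_x1 in (0.2, 0.8):
    f_y_given_x, f_x_given_y = conditionals(p_x1)
    print(f"P(x=1)={p_x1}: f(y|x)={f_y_given_x:.3f}  f(x|y)={f_x_given_y:.3f}  "
          f"f(y|x) > f(x|y)? {f_y_given_x > f_x_given_y}")
```

With P(x=1)=0.2 the comparison comes out one way (0.800 vs 0.667) and with P(x=1)=0.8 it comes out the other (0.800 vs 0.970), even though the causal story is identical in both cases.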
I guess what I’m really trying to say is that x may only have a causal influence on y if a bunch of other factors are present,
Which can be represented in a straightforward fashion in Jaynes’s notation.
f(y | x0, x1=C1, …, xN=CN)
If x0 “is a cause” of y when x1…xN take those values, then this conditional will accurately predict y without ever saying “cause”. The causal talk seems to me mathematically superfluous: it’s just describing limiting cases of conditionals.
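To make that concrete, here is a minimal sketch (the variable names and numbers are my own, purely illustrative): a joint distribution over (x0, x1, x2, y) in which x0 only raises the probability of y when the background factors x1 and x2 are both present, and the conditional f(y | x0, x1, x2) is read off by restriction and renormalisation, with no notion of “cause” appearing anywhere in the computation.

```python
from itertools import product

def make_joint():
    """Joint P(x0, x1, x2, y) with x0, x1, x2 independent fair coins."""
    joint = {}
    for x0, x1, x2 in product((0, 1), repeat=3):
        p_context = 0.5 * 0.5 * 0.5
        # y is very likely only when x0 and both background factors line up
        p_y1 = 0.9 if (x0 == 1 and x1 == 1 and x2 == 1) else 0.1
        joint[(x0, x1, x2, 1)] = p_context * p_y1
        joint[(x0, x1, x2, 0)] = p_context * (1 - p_y1)
    return joint

def conditional_y(joint, x0, x1, x2):
    """P(y=1 | x0, x1, x2): restrict the joint to that context and renormalise."""
    num = joint[(x0, x1, x2, 1)]
    den = joint[(x0, x1, x2, 1)] + joint[(x0, x1, x2, 0)]
    return num / den

joint = make_joint()
print(conditional_y(joint, x0=1, x1=1, x2=1))  # ~0.9: x0 "matters" in this context
print(conditional_y(joint, x0=1, x1=0, x2=1))  # ~0.1: same x0, different context
```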
If you literally think that conditional probabilities describe causation, then you should water your grass to make it rain (because p(rain | grass-is-wet) is higher than p(rain | grass-is-dry)). Causation is not about prediction.
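A minimal sketch of that distinction, with toy numbers of my own: in a tiny rain → wet ← sprinkler model, observing wet grass raises the probability of rain (by Bayes), but intervening to wet the grass, do(wet=1) in Pearl’s notation, just bypasses the mechanism that normally produces wetness and leaves the probability of rain at its prior.

```python
# Toy model: rain and sprinkler are independent causes of wet grass,
# and wet = rain OR sprinkler (deterministically, for simplicity).
P_RAIN = 0.2
P_SPRINKLER = 0.3

# Observational: P(rain=1 | wet=1).
p_wet = 1 - (1 - P_RAIN) * (1 - P_SPRINKLER)   # P(wet=1)
p_rain_given_wet = P_RAIN * 1.0 / p_wet         # Bayes, using P(wet=1 | rain=1) = 1

# Interventional: do(wet=1) cuts the arrows into wet, so rain keeps its prior.
p_rain_given_do_wet = P_RAIN

print(f"P(rain | wet grass observed)  = {p_rain_given_wet:.3f}")    # ~0.455
print(f"P(rain | grass watered by me) = {p_rain_given_do_wet:.3f}")  # 0.200
```

Conditioning on wetness is evidence about rain; making the grass wet yourself is not.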