Does anyone know of an “algebra for Bayes nets/causal diagrams”?
More specifics: rather than using a Bayes net to define a distribution, I want to use a Bayes net to state a property which a distribution satisfies. For instance, a distribution P[X, Y, Z] satisfies the diagram X → Y → Z if and only if the distribution factors according to
P[X, Y, Z] = P[X] P[Y|X] P[Z|Y].
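(As a concrete sketch of what "satisfies the diagram" means here, assuming discrete variables with the joint given as a numpy array: the check below just rebuilds the joint from the claimed factorization and compares. The function name and the array shapes are made up for illustration.)

```python
import numpy as np

def satisfies_chain_xyz(P, tol=1e-9):
    """Check whether a joint P[x, y, z] (a 3-D array summing to 1)
    factors as P[X] P[Y|X] P[Z|Y], i.e. satisfies X -> Y -> Z."""
    Px  = P.sum(axis=(1, 2))   # P[X]
    Pxy = P.sum(axis=2)        # P[X, Y]
    Py  = P.sum(axis=(0, 2))   # P[Y]
    Pyz = P.sum(axis=0)        # P[Y, Z]

    # Conditionals, guarding against division by zero on impossible events.
    Py_given_x = np.divide(Pxy, Px[:, None], out=np.zeros_like(Pxy), where=Px[:, None] > 0)
    Pz_given_y = np.divide(Pyz, Py[:, None], out=np.zeros_like(Pyz), where=Py[:, None] > 0)

    # Rebuild the joint from the claimed factorization and compare.
    Q = Px[:, None, None] * Py_given_x[:, :, None] * Pz_given_y[None, :, :]
    return np.allclose(P, Q, atol=tol)

rng = np.random.default_rng(0)

# A joint built as a chain satisfies the diagram...
Px   = rng.dirichlet(np.ones(2))           # P[X]
Py_x = rng.dirichlet(np.ones(3), size=2)   # P[Y|X], rows indexed by x
Pz_y = rng.dirichlet(np.ones(2), size=3)   # P[Z|Y], rows indexed by y
P_chain = Px[:, None, None] * Py_x[:, :, None] * Pz_y[None, :, :]
print(satisfies_chain_xyz(P_chain))    # True

# ...while a generic joint almost surely does not.
P_generic = rng.dirichlet(np.ones(12)).reshape(2, 3, 2)
print(satisfies_chain_xyz(P_generic))  # False
```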
When using diagrams that way, it’s natural to state a few properties in terms of diagrams, and then derive some other diagrams they imply. For instance, if a distribution P[W, X, Y, Z] satisfies all of:
W → Y → Z
W → X → Y
X → (W, Y) → Z
… then it also satisfies W → X → Y → Z.
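(For reference, here is the kind of factorization-level argument I mean, reading a diagram over a subset of the variables as a statement about the marginal on those variables, so e.g. W → X → Y says P[Y|W,X] = P[Y|X]:

P[W,X,Y,Z] = P[W] P[X|W] P[Y|W,X] P[Z|W,X,Y]   (chain rule)
           = P[W] P[X|W] P[Y|X] P[Z|W,X,Y]     (by W → X → Y)
           = P[W] P[X|W] P[Y|X] P[Z|W,Y]       (by X → (W, Y) → Z)
           = P[W] P[X|W] P[Y|X] P[Z|Y]         (by W → Y → Z)

… which is exactly the factorization for W → X → Y → Z.)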
What I’m looking for is a set of rules for “combining diagrams” this way, without needing to go back to the underlying factorizations in order to prove things.
David and I have been doing this sort of thing a lot in our work over the past few months, and it would be nice if someone else already had a nice write-up of the rules for it.