The usual example for Markov networks is four variables connected in a square. The corresponding independence assumption is that each pair of opposite corners is conditionally independent given the other two corners. There is no Bayesian network over the same four variables that encodes exactly that set of independencies.
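For concreteness, here is a small numerical sketch of that claim (my own illustration, using numpy and arbitrary random edge potentials): build the joint of the four-cycle A-B-C-D-A from pairwise potentials only, and check that each pair of opposite corners is independent given the other two.

```python
import numpy as np

rng = np.random.default_rng(0)
# One positive potential per edge of the square: A-B, B-C, C-D, D-A.
phi_ab, phi_bc, phi_cd, phi_da = (rng.uniform(0.1, 1.0, size=(2, 2)) for _ in range(4))

# Joint p(a, b, c, d) proportional to phi(a,b) phi(b,c) phi(c,d) phi(d,a); axes are A, B, C, D.
joint = np.einsum('ab,bc,cd,da->abcd', phi_ab, phi_bc, phi_cd, phi_da)
joint /= joint.sum()

def indep_given(p, given):
    """Check that the two axes of p not listed in `given` are independent given `given`."""
    for vals in np.ndindex(*(2,) * len(given)):
        sl = [slice(None)] * 4
        for axis, val in zip(given, vals):
            sl[axis] = val
        cond = p[tuple(sl)]
        cond = cond / cond.sum()  # conditional joint of the two free corners
        if not np.allclose(cond, np.outer(cond.sum(axis=1), cond.sum(axis=0))):
            return False
    return True

print(indep_given(joint, given=(1, 3)))  # A independent of C given {B, D} -> True
print(indep_given(joint, given=(0, 2)))  # B independent of D given {A, C} -> True
```

Both checks print True; no DAG over the same four nodes has exactly this pair of conditional independencies, which is the sense in which no Bayesian network encodes the square exactly.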
Okay, I’m recalling the “troublesome” cases that Pearl brings up, which gives me a better idea of what you mean. But this is not a counterexample. It just means that you can’t do it on a Bayes net with binary nodes. You can still represent that situation by merging (either pair of) the screening nodes into one node that covers all combinations of possibilities between them.
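To make the merging construction concrete, here is a sketch of my own (numpy with random edge potentials, not anything from Pearl): treat the pair (B, D) as a single four-state node and fit the chain A → BD → C to the square's joint; it reproduces that joint exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
phi_ab, phi_bc, phi_cd, phi_da = (rng.uniform(0.1, 1.0, size=(2, 2)) for _ in range(4))
# Joint of the 4-cycle Markov network over binary A, B, C, D (axes in that order).
joint = np.einsum('ab,bc,cd,da->abcd', phi_ab, phi_bc, phi_cd, phi_da)
joint /= joint.sum()

p_a = joint.sum(axis=(1, 2, 3))                          # P(A)
p_bd_given_a = joint.sum(axis=2) / p_a[:, None, None]    # P(B, D | A): CPT of the merged node
p_bcd = joint.sum(axis=0)                                # P(B, C, D)
p_c_given_bd = p_bcd / p_bcd.sum(axis=1)[:, None, :]     # P(C | B, D)

# Reassemble the joint from the chain A -> BD -> C and compare.
reconstructed = np.einsum('a,abd,bcd->abcd', p_a, p_bd_given_a, p_c_given_bd)
print(np.allclose(reconstructed, joint))                 # True
```

The merged node's table ranges over every combination of B and D, which is what "covers all combinations of possibilities between them" amounts to.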
Do you have another example?
But again, why would you want that? As I said in the grand^(n)parent, you don’t need to when doing inference.
Sure you do: you want to know which and how many variables you have to look up to make your prediction.
merging (either pair of) the screening nodes into one node
Then the network does not encode the conditional independence between the two variables that you merged.
The task you have to do when making predictions is marginalization: in order to compute P(Rain|WetGrass), you need to sum P(Rain, X, Y, Z|WetGrass) over all possible values of the variables X, Y, Z that you didn’t observe. Here it is very helpful to have the distribution factored into a tree, since that can make it feasible to do variable elimination (or related algorithms like belief propagation). But the directions on the edges in the tree don’t matter; you can start at any leaf node and work across.
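Concretely, for the Rain/WetGrass example (a sketch with made-up CPT numbers for the usual Cloudy/Sprinkler/Rain/WetGrass network, purely illustrative): you can compute P(Rain | WetGrass = true) either by building the full joint and summing out the unobserved variables, or by variable elimination, pushing each sum inside the factors it touches. Both give the same answer, but the second never materializes the joint.

```python
import numpy as np

# Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
# All CPT values are illustrative; variables are binary with index 1 = True.
p_c = np.array([0.5, 0.5])                                  # P(C)
p_s_given_c = np.array([[0.5, 0.5], [0.9, 0.1]])            # P(S | C), rows indexed by C
p_r_given_c = np.array([[0.8, 0.2], [0.2, 0.8]])            # P(R | C), rows indexed by C
p_w_given_sr = np.empty((2, 2, 2))                          # P(W | S, R)
p_w_given_sr[0, 0] = [1.0, 0.0]
p_w_given_sr[0, 1] = [0.1, 0.9]
p_w_given_sr[1, 0] = [0.1, 0.9]
p_w_given_sr[1, 1] = [0.01, 0.99]

# Naive approach: build the full joint P(C, S, R, W), then marginalize.
joint = np.einsum('c,cs,cr,srw->csrw', p_c, p_s_given_c, p_r_given_c, p_w_given_sr)
p_rw = joint.sum(axis=(0, 1))                               # P(R, W)
print(p_rw[:, 1] / p_rw[:, 1].sum())                        # P(R | W = true)

# Variable elimination: sum out C, then S, without ever forming the joint.
f_sr = np.einsum('c,cs,cr->sr', p_c, p_s_given_c, p_r_given_c)  # eliminate C -> P(S, R)
f_rw = np.einsum('sr,srw->rw', f_sr, p_w_given_sr)              # eliminate S -> P(R, W)
print(f_rw[:, 1] / f_rw[:, 1].sum())                            # same answer
```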