Is every line you can draw through the causal model a Markov blanket?
It seems like you’re interested in Markov blankets because the information on one side is independent from the other side given the values of the edges that pass through. But it also looks like the edges in the original graph represent “has any effect on”. Which makes it sound like you’re saying one side is independent from the other except for all of the ways in which it’s not, which seems trivial. What am I missing?
You’re basically correct. The substantive part is that, if I say “M2 is a Markov blanket separating M1 from M3”, then I’m claiming that M2 is a comprehensive list of all the “ways in which M1 and M3 are not independent”. If we have a Markov blanket, then we know exactly “which channels” the two sides can interact through; we can rule out any other interactions.
Kinda sounds like the important part is not the blankets themselves, but the relationships between them? That is, a Markov blanket is just any partition of the graph, but it’s important that you can assert that M2 is “separating” M1 and M3. (Whereas if you just took 3 random partitions, none of them would necessarily separate the other 2.)
Or is it more like—we don’t actually have any explicit representation of the entire causal model, so we can’t necessarily use a partition to calculate all the edges that cross that partition, and the Markov blanket is like a list of the edges, rather than a list of the nodes? Every partition describes a Markov blanket, but not every set of edges does, so saying that this particular set of edges forms a Markov blanket is a non-trivial statement about those edges?
Both framings are correct. The first captures what matters in most applications of Markov blankets. The second is relevant mainly in e.g. science, where figuring out the causal structure is part of the problem. In science, we can experimentally test whether M2 mediates the interaction between M1 and M3 (i.e. whether M2 is a Markov blanket between M1 and M3), and then we can back out information about the causal structure from that.
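The separation claim is easy to check empirically in a toy model. Here is a minimal simulation sketch (the variable names and noise rates are my own choices, not from the discussion above): in the chain M1 → M2 → M3, the single node M2 is a Markov blanket separating M1 from M3, so M1 and M3 are correlated unconditionally but become independent once we condition on M2.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Causal chain M1 -> M2 -> M3: each link copies its parent,
# flipping the bit 10% of the time.
m1 = rng.integers(0, 2, n)
m2 = m1 ^ (rng.random(n) < 0.1)
m3 = m2 ^ (rng.random(n) < 0.1)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Unconditionally, M1 and M3 interact "through" M2, so they correlate.
print(corr(m1, m3))                      # far from 0 (~0.64)

# Condition on the blanket: within each value of M2, the correlation vanishes.
print(corr(m1[m2 == 0], m3[m2 == 0]))    # near 0
print(corr(m1[m2 == 1], m3[m2 == 1]))    # near 0
```

If the graph instead had an extra edge M1 → M3, the conditional correlation would not vanish, which is exactly what “M2 is a comprehensive list of the channels” rules out.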