Sure—there are plenty of cases where a pair of interactions isn’t interesting. In the ImageNet context, you’ll probably care more about screening-off behavior at more abstract levels.
For example, maybe you find that, in your trained network, a hidden representation that seems to correspond to “trunk” isn’t very predictive of the class “tree”, while one that looks like “leaves” is. It’d be useful to know whether the reason “trunk” isn’t predictive is that “leaves” screens it off. (This could happen if all the tree trunks in your training images come with leaves in the frame.) One crude way to check is sketched below.
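Concretely, you could ask whether the “trunk” activation improves your prediction of “tree” once you already condition on “leaves”. Here’s a minimal sketch in Python, using toy data and made-up names (`trunk_act`, `leaves_act`, `is_tree`) standing in for real extracted activations and labels:

```python
# Hypothetical sketch: does "leaves" screen off "trunk" from "tree"?
# Assumes you've already extracted scalar activations for the two hidden
# features and binary labels for a held-out set; the variable names here
# are illustrative, not from any real pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: trunks almost always co-occur with leaves, so "trunk" carries
# little extra information once "leaves" is known.
n = 5000
is_tree = rng.integers(0, 2, size=n)
leaves_act = is_tree + 0.3 * rng.standard_normal(n)
trunk_act = 0.9 * leaves_act + 0.3 * rng.standard_normal(n)

X_full = np.column_stack([leaves_act, trunk_act])
X_leaves = leaves_act.reshape(-1, 1)

Xf_tr, Xf_te, Xl_tr, Xl_te, y_tr, y_te = train_test_split(
    X_full, X_leaves, is_tree, test_size=0.5, random_state=0
)

# Predict "tree" from leaves alone vs. leaves + trunk.
loss_leaves = log_loss(y_te, LogisticRegression().fit(Xl_tr, y_tr).predict_proba(Xl_te))
loss_both = log_loss(y_te, LogisticRegression().fit(Xf_tr, y_tr).predict_proba(Xf_te))

# If the two losses are close, "trunk" adds ~no information beyond "leaves",
# which is evidence of screening off in this dataset.
print(f"log-loss, leaves only:  {loss_leaves:.4f}")
print(f"log-loss, leaves+trunk: {loss_both:.4f}")
```

This is just a conditional-independence check by proxy: it can tell you that the screening off holds in your data, not why (e.g., that it’s an artifact of trunks and leaves always appearing together in the training images).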
Of course, the causality parts of the above analysis don’t address the “how should you assign labels in the first place” problem that the post is most focused on! I’m just saying both the ML parts and the causality parts work well in concert, and are not opposing methods.