Interesting. …What is the generative recipe needed for?
Primarily for predicting how the “object” (i.e., the component of the universe in question) is going to act. Classifying (in the machine-learning sense) what you see as a cat doesn’t tell you whether it will swim or slink; that requires causal modeling. Causal knowledge confirmed by time-sequence observation also seems to make classification itself a much easier problem: the causal structure of the world, once identifiable, is much sparser than its feature structure. Every cause “radiates” information about many, many effects, so modeling the cause (once you can; causal inference is near the frontier of current statistics) is a much more efficient way to compress the data on effects, and thus to generalize successfully.
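The sparsity point can be made concrete with a parameter count. This is my own toy sketch, not something from the thread: with n binary “effect” features, a full joint model over the features grows exponentially, while a model with one binary cause and conditionally independent effects grows only linearly.

```python
# Toy illustration of why a causal model can compress feature data:
# n binary "effect" features (swims, slinks, purrs, ...), optionally
# explained by a single binary cause ("cat or not").

def joint_params(n):
    # A full joint distribution over n binary features has 2^n
    # configurations, hence 2^n - 1 free parameters (they sum to 1).
    return 2 ** n - 1

def causal_params(n):
    # One binary cause with effects conditionally independent given it:
    # 1 parameter for P(cause), plus 2 per effect
    # (P(effect | cause) and P(effect | not cause)).
    return 1 + 2 * n

for n in (3, 10, 20):
    print(f"n={n}: joint={joint_params(n)}, causal={causal_params(n)}")
# At n=20 the joint model needs over a million parameters,
# the causal one needs 41.
```

The gap is the “radiating” effect in miniature: one latent cause screens off the correlations among all its effects, so the description length of the data collapses once the cause is modeled.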
Interesting, thanks.