Sorry, I have to go eat dinner, but speaking of dinner, the term we’re looking for here is recipe (or really: naturalized generative model). A symbol is grounded by some sort of causal model, and a causal model consists of both a classifier for perceptual features and a recipe for generating the object the model refers to. For FAI purposes, we could say that when the agent possesses a non-naturalized and/or uncertain understanding of some symbol (e.g. “happiness”), it should exercise a strong degree of normative uncertainty in how it acts toward real-world objects relating to that symbol.
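(Since I’m being terse: here’s a quick toy sketch in Python of what I mean by “classifier plus recipe, plus normative caution when the grounding is shaky.” Everything in it — the GroundedSymbol type, the caution rule, the numbers — is hypothetical illustration, not any real system’s API.)

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical sketch: a symbol grounded by a causal model that pairs
# a perceptual classifier with a generative recipe. All names are
# illustrative, not a real API.

@dataclass
class GroundedSymbol:
    name: str
    classify: Callable[[Any], float]   # P(symbol applies | percept)
    generate: Callable[[], Any]        # recipe: produce an instance
    grounding_confidence: float        # how naturalized the model is

def action_caution(symbol: GroundedSymbol) -> float:
    """Toy rule: the less certain the grounding, the more normative
    uncertainty (caution) the agent applies to acting on the symbol."""
    return 1.0 - symbol.grounding_confidence

# Usage: a poorly grounded "happiness" symbol warrants high caution.
happiness = GroundedSymbol(
    name="happiness",
    classify=lambda percept: 0.5,      # stub perceptual classifier
    generate=lambda: "smile",          # stub generative recipe
    grounding_confidence=0.2,          # non-naturalized, uncertain
)
print(action_caution(happiness))       # 0.8 -> act very cautiously
```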
But this is really just a quick note before dinner, sorry.
Interesting… what is the generative recipe needed for?
Primarily for predicting how the “object” (i.e. component of the universe) in question is going to act. Classifying (in the machine learning sense) what you see as a cat doesn’t tell you whether it will swim or slink; that requires causal modeling. Also, causal knowledge confirmed by time-sequence observation seems to actually make classification a much easier problem: the causal structure of the world, once identifiable, is much sparser than the feature-structure of the world. Every cause “radiates” information about many, many effects, so modeling the cause (once you can — causal inference is near the frontier of current statistics) is a much more efficient way to compress the data on effects, and thus to generalize successfully.
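(To make the compression point concrete, here’s a toy numerical sketch — purely illustrative, nothing above specifies it. One hidden binary cause “radiates” into twenty noisy binary effects: the causal model needs on the order of k parameters, while a model of the effects’ joint distribution alone needs on the order of 2^k, and inferring the cause from its effects classifies almost perfectly.)

```python
import numpy as np

# Illustrative sketch: one latent cause radiates into many observed
# effects, so modeling the cause compresses the data on effects far
# better than modeling the effects' joint distribution directly.

rng = np.random.default_rng(0)
n, k = 10_000, 20                      # samples, number of effect variables

cause = rng.binomial(1, 0.5, size=n)   # hidden binary cause (e.g. "cat")
# each effect is a noisy copy of the cause (flipped with prob 0.1)
effects = (cause[:, None] ^ (rng.random((n, k)) < 0.1)).astype(int)

# Causal model: k conditionals P(effect_i | cause) -- O(k) parameters.
causal_params = 2 * k
# Feature-only model: full joint over k binary effects -- O(2^k) parameters.
joint_params = 2 ** k - 1
print(causal_params, joint_params)     # 40 vs 1048575

# Because every effect carries information about the cause, a simple
# majority vote over effects recovers the cause almost perfectly:
votes = effects.mean(axis=1) > 0.5
print((votes == cause).mean())         # ~1.0 accuracy
```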
Interesting, thanks.