To fully confront the ontological crisis that we face, we would have to upgrade our world model to be based on actual physics, and simultaneously translate our utility functions so that their domain is the set of possible states of the new model.
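For concreteness, here is a minimal sketch of what that translation amounts to, assuming the new physics-level states can be coarse-grained back into the old ontology. Every name here is a hypothetical illustration, and finding the coarse-graining map is the actual hard part of the crisis; the sketch merely stipulates one:

```python
# A minimal sketch of "translating" a utility function across an ontological
# upgrade, assuming new (physics-level) states can be coarse-grained back
# into the old ontology. All names are hypothetical illustrations.

def old_utility(old_state: dict) -> float:
    # Utility defined over the old ontology: worlds described as objects.
    return 1.0 if old_state.get("spoon_present") else 0.0

def coarse_grain(new_state: dict) -> dict:
    # Bridge map from new-ontology states (say, particle configurations)
    # back to old-ontology descriptions. Finding this map is the hard part
    # of the ontological crisis; here it is simply stipulated.
    return {"spoon_present": new_state.get("atoms_in_spoon_shape", 0) > 1_000}

def new_utility(new_state: dict) -> float:
    # The translated utility is just the old one composed with the bridge.
    return old_utility(coarse_grain(new_state))

print(new_utility({"atoms_in_spoon_shape": 10**23}))  # 1.0
```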
Once upon a time, developmental psychology claimed that human babies learned object permanence as they aged. I don’t know if that’s still the dominant opinion, but it seems at least possible to me, a way that the world could be, if not the way it is. What would that mean, for a baby to go from not having a sense of objects persisting in locations to having it?
First, let’s unpack what an object might be. Suppose there’s a region of silver color in a baby’s visual field, and rather than breaking apart in different directions over time, the region stays together; if the blob is roughly invariant to translations, that’s the beginning of an object concept. A lump that stays together. Then the baby notices that the variations are also predictable: the flowery texture bits are sometimes showing, and the curved bits are sometimes showing, but not usually together, or neither shows when the silver blob looks especially small relative to its position. From the coactivation of features, and maybe higher-order statistics, the baby eventually learns a representation of a spoon which predicts which features will be visible as a function of some rotation parameters. That’s what a primitive spoon object means in my mind. There are of course other things to incorporate into the spoon model, like force dynamics (its weight, malleability, permeability, sharpness, et cetera), lighting invariances, and haptic textural information.
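As a toy rendering of that last step, here is a sketch (all details invented for illustration, not a claim about infant learning mechanisms) where views of a spoon are generated at random orientations, the flowery texture and the curved bits rarely co-occur, and a tiny logistic model learns to predict each feature’s visibility from the rotation parameter:

```python
# Toy sketch of "features as a function of rotation": generate views of a
# spoon at random orientations, where the flowery texture and the curved
# bits are rarely visible together, then fit a small logistic model that
# predicts each feature's visibility from the rotation parameter.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 5000)          # rotation parameter per view
X = np.stack([np.sin(theta), np.cos(theta)], 1)  # simple pose features

# Ground truth: the two feature sets sit on roughly opposite sides of the
# spoon, plus a little sensor noise (5% of visibility labels are flipped).
flowery = (np.cos(theta) > 0.2) ^ (rng.random(5000) < 0.05)
curved = (np.cos(theta) < -0.2) ^ (rng.random(5000) < 0.05)

def fit_logistic(X, y, steps=2000, lr=0.5):
    # Plain gradient descent on logistic log-loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted visibility
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

for name, y in [("flowery", flowery), ("curved", curved)]:
    w, b = fit_logistic(X, y.astype(float))
    p = 1 / (1 + np.exp(-(X @ w + b)))
    acc = np.mean((p > 0.5) == y)
    print(f"{name}: visibility predicted from rotation, accuracy={acc:.2f}")
```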
Permanence would be something like being able to predict the object’s position even when the object is occluded (having a representation of the face behind the hands, one which doesn’t just compress the present visual scene, but compresses visual scenes across time). Old experiments showed that babies’ gaze tracking of occluded objects increased with age, which was taken as support for a theory of learned object permanence.
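Here is a minimal sketch of permanence in that sense, assuming a toy constant-velocity world: a tracker with a latent position estimate keeps extrapolating through occlusion, which is exactly what a purely scene-by-scene compressor couldn’t do:

```python
# Minimal sketch of permanence as cross-time compression: a constant-velocity
# tracker keeps extrapolating an object's position while observations are
# missing (occlusion). Without the latent state, prediction would stop the
# moment the object disappears from view. Illustrative only.

def track(observations):
    """observations: list of positions, with None while occluded."""
    pos, vel = observations[0], 0.0
    estimates = [pos]
    for obs in observations[1:]:
        predicted = pos + vel              # predict forward one step
        if obs is None:                    # occluded: trust the model
            pos = predicted
        else:                              # visible: blend prediction and data
            vel = 0.5 * vel + 0.5 * (obs - pos)
            pos = 0.5 * predicted + 0.5 * obs
        estimates.append(pos)
    return estimates

# Object moves right at speed 1, then passes behind an occluder (None).
obs = [0, 1, 2, 3, None, None, None, 7, 8]
print([round(e, 2) for e in track(obs)])  # keeps advancing while occluded
```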
Now, if that’s at all how human macroscopic object perception starts out, I think it’s fair to call it “ruled by a hodgepodge of heuristics and prediction algorithms”. However, it seems psychologically implausible that babies undergo a utility function change throughout this process, in the way you seem to mean. Think of a world model as supplying predictions, and of utility functions as supplying valuations over predicted worlds. (Calling this a “world model” is something of an abuse of terminology, since it probably includes both structured, quickly updateable predictions from “model-based” brain regions like hippocampal place cells, and slowly revised, model-free, habit-type predictions.) Then the domain of the utility function is still some kind of predicted state, both before and after learning object permanence. Intuitively, worlds without object permanence are a very different hypothesis space, and thus a very different space of appreciable hypothetical realities, than “our” models which “divide cleanly into these 3 parts”, but I think both types fall into a broader category that reward circuitry functions can take as an argument. Indeed, if developmental psychology was right about learning object permanence, humans probably spend a few weeks with world models that have only graded persistence.
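To make the type-level point concrete, here is a sketch (hypothetical names throughout) where persistence is a single decay parameter on the belief that an occluded object still exists; the same reward function takes predicted states from the no-permanence, graded-persistence, and full-permanence models alike:

```python
# Sketch of the type-level claim: the reward function's argument is "a
# predicted state", and that stays true whether the world model has no
# permanence, graded persistence, or full permanence. Persistence here is
# one decay parameter on the belief that an occluded object still exists.

def predicted_state(persistence, steps_occluded):
    # Belief that the spoon still exists after some occluded timesteps.
    return {"spoon_exists_prob": persistence ** steps_occluded}

def reward(state: dict) -> float:
    # The same valuation applies to predictions from any of these models.
    return state["spoon_exists_prob"]

for persistence in [0.0, 0.7, 1.0]:   # none, graded, full permanence
    s = predicted_state(persistence, steps_occluded=3)
    print(f"persistence={persistence}: reward={reward(s):.3f}")
```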