I think the way this could work, conceptually, is as follows. Maybe the Old Brain does have specific “detectors” for specific events like: are people smiling at me, glaring at me, shouting at me, hitting me; has something that was “mine” been stolen from me; is that cluster of sensations an “agent”; does this hurt, or feel good. These seem to be the kinds of events that small children, most mammals, and even some reptiles are able to understand.
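To make that concrete, here’s a toy sketch in Python of what a hardcoded, unlearned evaluator might look like. Every detector name and weight here is made up for illustration; it’s not a claim about actual Old Brain circuitry.

```python
from dataclasses import dataclass

@dataclass
class Percepts:
    """Low-level cues the Old Brain is wired to detect (hypothetical list)."""
    being_smiled_at: bool = False
    being_shouted_at: bool = False
    in_pain: bool = False
    possession_stolen: bool = False

def innate_valence(p: Percepts) -> float:
    """Hardcoded, unlearned evaluation: positive = innately good, negative = bad.
    The weights are arbitrary illustrations."""
    score = 0.0
    if p.being_smiled_at:
        score += 1.0
    if p.being_shouted_at:
        score -= 1.0
    if p.in_pain:
        score -= 2.0
    if p.possession_stolen:
        score -= 1.5
    return score

print(innate_valence(Percepts(being_smiled_at=True)))  # 1.0
```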
The neocortex then constructs increasingly nuanced models on top of these base-level events. It builds up fairly sophisticated cognitive behaviors, such as romantic jealousy, the desire to win a game, the perception that a specific person is a rival, or a long-term plan to get a college degree, by gradually linking elements of its learned world model to internal imagined expectations of ending up in states that it natively perceives (with the Old Brain) as good or bad.
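One way to cash this out, borrowing the reinforcement-learning framing (purely a sketch under that assumption, not a claim about cortical implementation): the neocortex distills the Old Brain’s signal into its own evaluator, which can then generalize to abstract situations the Old Brain has no detector for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the neocortex represents situations as feature vectors
# from its learned world model; the Old Brain supplies a valence label for the
# situations it natively understands.
n_features = 8
old_brain_w = rng.normal(size=n_features)      # stands in for innate evaluation

def old_brain_valence(state: np.ndarray) -> float:
    return float(old_brain_w @ state)

learned_w = np.zeros(n_features)               # the neocortex's own evaluator
lr = 0.05
for _ in range(2000):
    s = rng.normal(size=n_features)            # an experienced situation
    target = old_brain_valence(s)              # innate reaction to it
    learned_w += lr * (target - learned_w @ s) * s   # SGD on squared error

print("distillation error:", np.abs(learned_w - old_brain_w).max())
```

Once trained, `learned_w` can score world-model states the Old Brain never labels directly, which is where the room for nuanced, learned evaluations like “that person is a rival” would come from.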
Obviously the neocortex isn’t just passively learning; it’s also constantly doing forward-modeling/prediction, using its learned model to try to navigate toward desirable states. Imagined instances of burning your hand on a stove are linked with real memories of burning your hand on a stove, and so imagined plans that would lead to burning your hand on the stove are perceived as undesirable, because the Old Brain knows instinctively (i.e. without needing to learn) that this is a bad outcome.
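In planning terms (again just a toy, with made-up outcomes and scores): imagine each candidate plan’s outcome with the learned forward model, score the imagined outcome with the innate evaluator, and pick accordingly.

```python
# Stand-in for the neocortex's learned forward model.
def imagined_outcome(plan: str) -> str:
    return {"touch_stove": "burned_hand", "make_tea": "warm_drink"}[plan]

# Stand-in for the Old Brain's unlearned evaluation of outcomes.
def innate_valence(outcome: str) -> float:
    return {"burned_hand": -10.0, "warm_drink": 2.0}[outcome]

def choose_plan(plans: list[str]) -> str:
    """Forward-model each plan, evaluate the imagined end state, take the best."""
    return max(plans, key=lambda p: innate_valence(imagined_outcome(p)))

print(choose_plan(["touch_stove", "make_tea"]))  # -> make_tea
```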
eta: Not wholly my original thought, but I think one of the main purposes of dreams is to provide large amounts of simulated data aimed at linking up the neocortical model of reality with the Old Brain. The things that happen in dreams tend to be very dramatic and scary. I think the sleeping brain is intentionally seeking out parts of the state space that agitate the Old Brain, in order to link up the map of the outside world with the inner sense of innate goodness and badness.
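Under the same toy framing as above (everything here hypothetical), “dreaming” would look like sampling imagined states from the world model, preferentially keeping the ones the Old Brain reacts strongly to, and using those as extra training data for the learned evaluator:

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 8
old_brain_w = rng.normal(size=n_features)   # innate evaluation (stand-in)
learned_w = np.zeros(n_features)            # neocortical evaluator being trained

def dream_batch(n: int = 64, keep: int = 8) -> np.ndarray:
    """Sample imagined situations and keep the most 'dramatic' ones,
    i.e. the ones with the strongest innate reaction in either direction."""
    states = rng.normal(size=(n, n_features))
    agitation = np.abs(states @ old_brain_w)
    return states[np.argsort(agitation)[-keep:]]

lr = 0.02
for _ in range(500):
    for s in dream_batch():
        target = old_brain_w @ s                         # innate reaction
        learned_w += lr * (target - learned_w @ s) * s   # train on dreamed data

print("post-dreaming error:", np.abs(learned_w - old_brain_w).max())
```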
The idea that the neocortex is running a learning algorithm that needs some kind of evaluative weighting to start working doesn’t exclude the idea that the neocortex can learn to perform its own evaluations.
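As a sketch of how both could be true at once (hypothetical, as with the snippets above): the innate signal bootstraps the learner, and evaluation can gradually shift toward the learned evaluator for states the Old Brain can’t natively assess.

```python
from typing import Callable

def blended_valence(state, innate_fn: Callable, learned_fn: Callable,
                    trust: float) -> float:
    """Innate evaluation bootstraps; `trust` in the learned evaluator
    grows as it matures (schedule left unspecified here)."""
    return (1.0 - trust) * innate_fn(state) + trust * learned_fn(state)

# Early on (trust near 0), the innate signal dominates:
print(blended_valence("some_state", lambda s: -1.0, lambda s: 0.5, trust=0.1))  # -0.85
```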
Yeah, that’s pretty much along the lines of what I’m thinking. There are a lot of details to flesh out, though. I’ve been working in that direction.