I tried to apply this framework earlier and realized that I’m confused about how new models are generated. Say I’m taught about the derivative for the first time. This process should result in me acquiring a ‘derivative’ model, but how is that done? At the point where the model isn’t there yet, how does the neocortex (or another part of the brain?) build it?
Where the generative models come from is a question I can’t answer well, because nobody knows the algorithm, so far as I can tell.
Here’s a special case that’s well understood: time-sequencing. If A happens and then B happens, you’ll memorize a model “A then B”. Jeff Hawkins has a paper here with biological details, and Dileep George has a series of nice papers applying this particular algorithm to practical ML problems (keyword: “cloned hidden Markov model”).
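To make the flavor of that concrete, here’s a minimal toy sketch of “A then B” memorization as transition counting. It’s only an illustration of the idea, not Hawkins’s circuit-level model or George’s cloned HMM (which, roughly, gives each observed symbol many hidden “clone” states so the same symbol can be remembered in different contexts), and the class and names are made up for the example.

```python
# Toy sketch of "A then B" sequence memorization via transition counts.
from collections import defaultdict

class SequenceMemory:
    def __init__(self):
        # counts[prev][nxt] = how many times nxt was seen right after prev
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, stream):
        for prev, nxt in zip(stream, stream[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, prev):
        # The learned "A then B" model: the most frequently seen successor.
        followers = self.counts[prev]
        return max(followers, key=followers.get) if followers else None

mem = SequenceMemory()
mem.observe(["A", "B", "A", "B", "A", "C"])
print(mem.predict("A"))  # -> "B" (followed A twice, vs. "C" once)
```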
Outside of that special case, I can’t say much in detail, because I don’t know. Randall O’Reilly argues convincingly here that there’s an error-driven learning mechanism, at least for the vision system (ML people would say: “self-supervised learning involving gradient descent”), although it’s controversial whether the brain can do backprop through multiple hierarchical layers the way PyTorch can. Well, long story. There’s also Hebbian learning. There’s also trial-and-error learning. And when you read a book, the authors describe things using analogies, which activate models already in your brain that can serve as helpful ingredients in constructing the new model. Anyway, I dunno.
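For concreteness, here are two of the textbook rules just mentioned, in their most stripped-down form. These are standard classroom illustrations (a Hebbian update and a single-layer delta rule), not a claim about what cortex actually implements; the sizes, learning rate, and target value are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)             # presynaptic activity
w = rng.normal(scale=0.1, size=5)  # synaptic weights
lr = 0.1

# Hebbian learning: strengthen weights in proportion to pre/post co-activity
# ("neurons that fire together wire together").
y = w @ x
w_hebb = w + lr * y * x

# Error-driven learning (delta rule): nudge weights to shrink the gap between a
# prediction and a target (the single-layer cousin of gradient descent, with
# none of PyTorch's multi-layer backprop).
target = 1.0
prediction = w @ x
w_error = w + lr * (target - prediction) * x
```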
That’s a shame. Seems like an important piece.
Although, I now think my primary issue is actually not quite that. It’s more like: when I try to concretely sketch how I now imagine thinking works, I naturally invoke an additional mysterious steering module that allows me to direct my models to the topics that I want outputs on. I probably want to do this because that’s what my mind feels like: I can steer it where I want, and then it spits out results.
Now, on the one hand, I don’t doubt that the sense of control is an evolutionarily adaptive deception, and I certainly don’t think Free Will is a real thing. On the other hand, it seems hard to do without the mysterious steering module. I think I was asking about how models are created in order to fill in that hole, but on second thought, the two may not actually be all that connected, unless there is a single module that does both.
So, is the sense of deciding what to apply my generative models to subsumed by the model outputs in this framework? Or is there something else?
I realize that the subcortex is steering the neocortex, but I’m still thinking about an evolutionarily uninteresting setting, like me sitting in a safe environment and having my mind contemplate various evolutionarily alien concepts.
Well, we don’t have AGI right now, so there must be some missing ingredients… :-)
“direct my models to the topics that I want outputs on”

Well, you can invoke a high-confidence model: “a detailed solution to math problem X involving ingredients A, B, C”. Then the inference algorithm will shuffle through ideas in the brain, trying to build a self-consistent model that contains this shell of a thought but fills in the gaps with other pieces that fit. So that would feel like trying to figure something out.
I think that’s more like inference than learning, but of course you can memorize whatever useful new composite models come up during this process.
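Here’s a cartoon of that gap-filling search, purely to illustrate the flavor: clamp the “shell” of the thought, then greedily add whichever candidate idea makes the whole composite most self-consistent. The candidate ideas and compatibility scores below are invented, and nothing here is meant as the brain’s actual inference algorithm.

```python
from itertools import combinations

# The clamped "shell" of the thought, plus a pool of candidate ideas to shuffle through.
shell = {"math problem X", "ingredient A", "ingredient B", "ingredient C"}
candidates = {"lemma 1", "lemma 2", "change of variables", "irrelevant memory"}

# Hypothetical pairwise compatibility scores (higher = the two pieces fit together).
compat = {
    frozenset({"math problem X", "lemma 1"}): 0.9,
    frozenset({"ingredient A", "change of variables"}): 0.8,
    frozenset({"lemma 1", "lemma 2"}): 0.7,
    frozenset({"math problem X", "irrelevant memory"}): -0.5,
}

def score(ideas):
    # Self-consistency of a set of ideas = sum of pairwise compatibilities.
    return sum(compat.get(frozenset(pair), 0.0) for pair in combinations(ideas, 2))

chosen = set(shell)
remaining = set(candidates)
while remaining:
    best = max(remaining, key=lambda c: score(chosen | {c}))
    if score(chosen | {best}) <= score(chosen):
        break  # no remaining piece improves self-consistency
    chosen.add(best)
    remaining.discard(best)

print(chosen - shell)  # the gap-fillers that made the composite most self-consistent
```

Greedy search over hand-made scores is obviously just a stand-in; the point is that “steering” here amounts to clamping part of a model and letting ordinary inference fill in the rest.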