Although, I now think my primary issue is actually not quite that. It’s more that, when I try to concretely sketch how I now imagine thinking to work, I naturally invoke an additional mysterious steering module that allows me to direct my models to the topics that I want outputs on. I probably want to do this because that’s how my mind feels: I can steer it where I want, and then it spits out results.
Now, on the one hand, I don’t doubt that the sense of control is an evolutionarily adaptive deception, and I certainly don’t think Free Will is a real thing. On the other hand, it seems hard to take out the mysterious steering module. I think my earlier question about how models are created was an attempt to fill in that hole, but on second thought, the two may not actually be all that connected, unless there is a single module that does both.
So, is the sense of deciding what to apply my generative models to subsumed by the model outputs in this framework? Or is there something else?
I realize that the subcortex is steering the neocortex, but I’m still thinking about an evolutionarily uninteresting setting, like me sitting in a safe environment and having my mind contemplate various evolutionarily alien concepts.
Well, we don’t have AGI right now, so there must be some missing ingredients… :-)
direct my models to the topics that I want outputs on
Well, you can invoke a high-confidence model: “a detailed solution to math problem X involving ingredients A, B, C”. Then the inference algorithm will shuffle through ideas in the brain, trying to build a self-consistent model that incorporates this shell of a thought but fills in the gaps with other pieces that fit. So that would feel like trying to figure something out.
I think that’s more like inference than learning, but of course you can memorize whatever useful new composite models come up during this process.
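To make that a bit more concrete, here is a toy sketch of the “clamp a shell, shuffle through pieces, keep what’s self-consistent” picture. Everything in it (the piece library, the consistency score) is made up purely for illustration; it is a caricature of the inference loop being described, not a claim about how the brain actually implements it.

```python
import random

# Toy sketch only: the piece library and the consistency score below are
# invented for illustration, not taken from the discussion above.

# A small pool of "ideas" the search can shuffle through.
PIECES = ["induction", "symmetry argument", "counterexample",
          "substitution", "invariant", "case split", "telescoping sum"]

def consistency(assembly):
    """Stand-in for 'how well do these pieces hang together with the clamped
    shell'. A real account would use something like prediction error or
    mutual compatibility; this is just an arbitrary toy heuristic."""
    score = len(set(assembly))  # mild reward for using distinct pieces
    if "induction" in assembly and "invariant" in assembly:
        score += 2              # pretend these two fit especially well
    return score

def fill_in_shell(open_slots, steps=200, seed=0):
    """Clamp the shell ("a detailed solution to problem X involving A, B, C")
    and repeatedly propose pieces for the open slots, keeping any proposal
    that is at least as self-consistent as the current best."""
    rng = random.Random(seed)
    best = [rng.choice(PIECES) for _ in range(open_slots)]
    for _ in range(steps):
        proposal = list(best)
        proposal[rng.randrange(open_slots)] = rng.choice(PIECES)  # swap one piece
        if consistency(proposal) >= consistency(best):
            best = proposal
    return best

print(fill_in_shell(open_slots=3))
```

The only point is the loop structure: the clamped goal stays fixed, candidate pieces are proposed, and only combinations that hang together at least as well get kept, which is roughly what “feeling like trying to figure something out” corresponds to in this framing.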
That’s a shame. Seems like an important piece.