If the brain does something which would be impossible on the assumption of cortical uniformity, then that would indeed be a very good reason to reject cortical uniformity. :-)
Does it? I don’t think cortical uniformity implies the separation of motivation and mapping.
Oh, sorry, I think I misunderstood the first time. Hmm, so in this particular post I was trying to closely follow the book by keeping the neocortex by itself. That’s not how I normally describe things; normally I talk about the neocortex and basal ganglia working together as a subsystem.
So when I think “I really want to get out of debt”, it’s a combination of a thing in the world-model and a motivation / valence. I do in fact think that those two aspects of the thought are anatomically distinct: I think the meaning of “get out of debt” (a complex set of relationships and associations and so on) is stored as synapses in the neocortex, and the fact that we want that is stored as synapses in the basal ganglia (more specifically, the striatum). But obviously the two are very, very closely interconnected.
E.g. see here for that more “conventional” RL-style description.
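To make that split concrete, here’s a toy sketch (purely illustrative; all the names and numbers are made up, nothing anatomically precise): the content of the thought lives in one store, the valence in another, and they’re linked by a shared key even though they’re updated by different learning processes.

```python
# Toy illustration (hypothetical names): the content of a thought and its
# valence live in two separate stores, linked only by a shared key.

# "Neocortex": the meaning of "get out of debt" as a web of associations.
world_model = {
    "get_out_of_debt": {"pay off credit card", "make a budget", "less stress"},
}

# "Striatum": how much we currently want each thought.
valence = {
    "get_out_of_debt": +0.8,
}

# The two stores are updated by different kinds of learning:
world_model["get_out_of_debt"].add("negotiate lower interest rate")  # new association
valence["get_out_of_debt"] += 0.1                                    # want it a bit more

print(world_model["get_out_of_debt"], valence["get_out_of_debt"])
```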
Reward, on the other hand, strikes me as necessarily a very different module. After all, if you only have a learning algorithm that starts from scratch, there’s nothing within that system that can say that making friends is good and getting attacked by lions is bad, as opposed to the other way around. Right? So you need a hardcoded reward-calculator module, seems to me.
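Here’s a minimal toy sketch of what I mean (again, purely illustrative): the reward calculator is hand-coded and fixed, while the value estimates start from scratch and only become non-trivial by learning from that hardcoded signal.

```python
# Toy sketch: a hardcoded reward calculator feeding a value estimator that
# starts from scratch. The learner can't derive the signs on its own.

def innate_reward(event):
    """Hardcoded by 'evolution'; never learned."""
    return {"made_a_friend": +1.0, "attacked_by_lion": -1.0}.get(event, 0.0)

value = {}  # learned value estimates, initially empty ("from scratch")

def learn(event, lr=0.2):
    v = value.get(event, 0.0)
    value[event] = v + lr * (innate_reward(event) - v)

for _ in range(20):
    learn("made_a_friend")
    learn("attacked_by_lion")

print(value)  # "made_a_friend" trends toward +1, "attacked_by_lion" toward -1
```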
Sorry if I’m still misunderstanding.
I also studied neuroscience for several years, and Jeff’s first book was a major inspiration for me beginning that journey. I agree very much with the points you make in this review and in https://www.lesswrong.com/posts/W6wBmQheDiFmfJqZy/brain-inspired-agi-and-the-lifetime-anchor
Since we seem to be more on the same page than most other people I’ve talked to about this, perhaps a collaboration between us could be fruitful. Not sure on what exactly, but I’ve been thinking about how to transition into direct work on AGI safety since updating in the past couple years that it is potentially even closer than I’d thought.
As for the brain division, I also think of the neocortex and basal ganglia working together as a subsystem. I actually strengthened my belief in their tight coupling in my last year of grad school, when I learned more about the striatum gating thoughts (not just motor actions) and the cerebellum smoothing abstract thoughts (not just motor actions). So now I envision the brain more as thousands of mostly repetitive loops of little neocortex region → little basal ganglia region → little hindbrain region → little cerebellum region → same little neocortex region, with these loops also communicating sideways a bit in each region, but mostly in the neocortex.

With this understanding, I feel like I can’t at all get behind Jeff H’s idea of safely separating out the neocortical functions from the mid/hind brain functions. I think that an effective AGI general learning algorithm is likely to need at least some aspects of those little loops, with striatum gating, cerebellar smoothing, and hippocampal memory linkages…

I do think that the available data in neuroscience is very close, if not already, sufficient for describing the necessary algorithm, and it’s just a question of a bit more focused work on sorting out the necessary parts from the unneeded complexity. I pulled back from actively trying to do just that once I realized that gaining that knowledge without sufficient safety preparation could be a bad thing for humanity.
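To spell out that loop picture a bit (a throwaway toy sketch, with made-up parameters and nothing mapped onto real anatomy): each little loop gates, relays, and smooths its own cortical region’s activity, and the loops only mix laterally at the cortical stage.

```python
import numpy as np

# Toy sketch: a few parallel cortex -> basal ganglia -> hindbrain -> cerebellum -> cortex
# loops, with a little lateral mixing at the cortical stage. Parameters are arbitrary.
rng = np.random.default_rng(0)
n_loops, dim = 8, 4                        # number of little loops, size of each region
cortex = rng.normal(size=(n_loops, dim))   # activity of each little neocortex region
prev = cortex.copy()

def step(cortex, prev):
    gated = np.where(cortex > 0.5, cortex, 0.0)   # "striatum": gate out weak candidates
    relayed = 0.9 * gated                         # "hindbrain": relay the gated signal
    smoothed = 0.5 * relayed + 0.5 * prev         # "cerebellum": smooth over time
    lateral = 0.1 * cortex.mean(axis=0)           # sideways talk, mostly in the neocortex
    return smoothed + lateral, cortex             # loop closes on the same cortical region

for _ in range(5):
    cortex, prev = step(cortex, prev)
print(cortex.round(2))
```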
Oh wow, cool! I was still kinda confused when I wrote this post and comment thread above, but a couple months later I wrote Big Picture of Phasic Dopamine which sounds at least somewhat related to what you’re talking about and in particular talks a lot about basal ganglia loops.
Oh, except that post leaves out the cerebellum (for simplicity). I have a VERY simple cerebellum story (see the one-sentence version here) … I’ve done some poking around the literature and talking to people about it, but anyway I currently still stand by my story and am still confused about why all the other cerebellum-modelers make things so much more complicated than that. :-P
We do sound on the same page … I’d love to chat, feel free to email or DM me if you have time.