Ah ok your gerrymandering analogy now makes sense.
That was my sketchy understanding of how it works from evol psych and things like Dennett’s books, Pinker, etc.
I think that’s a good summary of the evolved modularity hypothesis. It turns out that we can actually look into the brain and test that hypothesis. Those tests were done, and lo and behold, the brain doesn’t work that way. The universal learning hypothesis emerged as the new theory to explain the new neuroscience data from the last decade or so.
So basically this is what the article is all about. You said earlier you skimmed it, so perhaps I need a better abstract or summary at the top, as oge suggested.
Furthermore, I thought the rationale of this explanation was that it’s hard to see how a universal learning machine can get off the ground evolutionarily (it’s going to be energetically expensive, not fast enough, etc.) whereas task-specific gadgets are easier to evolve (“need to know” principle),
This is a pretty good-sounding rationale. It's also probably wrong. It turns out a small ULM is relatively easy to specify, and it is completely compatible with innate task-specific gadgetry. In other words, the universal learning machinery has very few drawbacks. All vertebrates share a similar core architecture based on the basal ganglia; in large-brained mammals, the general-purpose coprocessors (neocortex, cerebellum) are simply expanded more than other structures.
In particular, it looks like the brainstem has a bunch of old innate circuitry that the cortex and BG learn how to control (the BG doesn't just control the cortex), but I didn't have room to get into the brainstem within the scope of this article.
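To make that concrete, here is a minimal toy sketch (my own illustration in Python, not anything from the article; the stimuli, rewards, and function names are all made up) of how a small general-purpose learner can coexist with a hard-wired reflex and learn when to let it run and when to override it:

```python
import random

# Toy sketch: an innate reflex plus a tiny general-purpose (bandit-style) learner.
# The learner's action space is "allow_reflex" vs. "suppress_reflex", so the
# universal learning machinery ends up controlling the innate circuit rather
# than replacing it.

ACTIONS = ["allow_reflex", "suppress_reflex"]
STIMULI = ["flame", "hot_food"]  # made-up stimuli for illustration only

def innate_reflex(stimulus):
    # Hard-wired "brainstem" gadget: withdraws from anything hot, no nuance.
    return "withdraw" if stimulus in ("flame", "hot_food") else "stay"

def reward(stimulus, behaviour):
    # Toy world: the reflex is right about flames but too crude for hot food.
    if stimulus == "flame":
        return 1.0 if behaviour == "withdraw" else -1.0
    if stimulus == "hot_food":
        return 1.0 if behaviour == "stay" else 0.0
    return 0.0

q = {}  # stimulus -> {action: estimated value}

def q_values(stimulus):
    return q.setdefault(stimulus, {a: 0.0 for a in ACTIONS})

def choose(stimulus, eps=0.1):
    # Epsilon-greedy choice over the learned gating actions.
    vals = q_values(stimulus)
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(vals, key=vals.get)

def step(stimulus, alpha=0.2):
    action = choose(stimulus)
    # The learned policy gates the innate circuit rather than re-deriving it.
    behaviour = innate_reflex(stimulus) if action == "allow_reflex" else "stay"
    r = reward(stimulus, behaviour)
    vals = q_values(stimulus)
    vals[action] += alpha * (r - vals[action])  # simple incremental value update

random.seed(0)
for _ in range(2000):
    step(random.choice(STIMULI))

print(q)  # learns to let the reflex fire for "flame" and override it for "hot_food"
```

The only point of the sketch is that the general-purpose learning loop is tiny to specify and doesn't conflict with the innate gadget at all; it just learns to gate it.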
Great stuff, thanks! I’ll dig into the article more.