In fact the most successful early precursor AGI we have, the Atari DeepMind agent, is a pure ANN.
People have been using ANNs for reinforcement learning tasks, with varying success, since at least the TD-Gammon system. The DeepMind Atari agent is bigger and the task is sexier, but calling it an early precursor AGI seems far-fetched.
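For context, here is a minimal sketch of the TD-Gammon idea: a small neural network estimates the probability of winning from a board position, and is nudged toward the value of the position that follows. The sketch uses plain TD(0) and illustrative layer sizes; Tesauro's actual system used TD(λ) with eligibility traces over a 198-unit backgammon-specific board encoding.

```python
# Minimal sketch of TD-style value learning with a tiny neural network.
# Illustrative only: TD-Gammon used TD(lambda) with eligibility traces.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 198, 40  # sizes roughly matching Tesauro's description
W1 = rng.normal(0, 0.1, (N_HID, N_IN))
b1 = np.zeros(N_HID)
w2 = rng.normal(0, 0.1, N_HID)
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def value(state):
    """Estimated probability of winning from this position."""
    h = sigmoid(W1 @ state + b1)
    return sigmoid(w2 @ h + b2), h

def td_update(state, next_value, alpha=0.1):
    """Nudge V(state) toward the bootstrapped target, the next state's value."""
    global W1, b1, w2, b2
    v, h = value(state)
    delta = next_value - v               # TD error
    dv = v * (1.0 - v)                   # sigmoid derivative at the output
    w2 += alpha * delta * dv * h
    b2 += alpha * delta * dv
    g = delta * dv * w2 * h * (1.0 - h)  # gradient at the hidden layer
    W1 += alpha * np.outer(g, state)
    b1 += alpha * g
```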
Consider the case of extreme hydrocephaly, where fluid fills the center of the brain, replacing most of the tissue and squeezing the remainder into a thin layer near the skull. And yet these patients can have above-average IQs. Optimal dynamic wiring can explain this: the brain is constantly doing global optimization across the wiring structure, adapting even to extreme deformation and damage. How does evolved modularity explain it?
I suppose that the network topology of these brains is essentially normal, isn’t it? If that’s the case, then all the modules are still there; they are just squeezed against the skull wall.
This is nonsense: language processing develops in general-purpose cortical modules; there is no specific language circuitry.
If I understand correctly, damage to Broca’s area or Wernicke’s area tends to cause speech impairment. The impairment may be more or less severe depending on the individual, which is consistent with the evolved modularity hypothesis: genetically different individuals may have small differences in the location and shape of their brain modules.
Under the universal learning machine hypothesis, by contrast, we would expect speech impairment following localized brain damage to heal quickly in most cases, as other brain areas are recruited to the task. Note that there are large rewards for regaining linguistic ability, so the brain would sacrifice other abilities if it could. This generally does not happen.
In fact, for most people with completely healthy brains it is difficult to learn a new language as well as a native speaker after the age of 10. This suggests that our language processing machinery is hard-wired to a significant extent.
The DeepMind Atari agent is bigger and the task is sexier, but calling it an early precursor AGI seems far-fetched.
Hardly. It can learn a wide variety of tasks, many at above-human level, in a variety of environments, all with only a few million neurons. It was on the cover of Nature for a reason.
Remember that a mouse brain has the same core architecture as a human brain. The main components are all there and basically the same, just smaller, and with different size allocations across modules.
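For a sense of the scale involved, here is a sketch of that agent's network, the Nature DQN of Mnih et al. 2015, written as a PyTorch translation (the original implementation was not PyTorch). The same architecture, unchanged, was trained separately on each of 49 Atari games.

```python
# Sketch of the Nature DQN architecture (Mnih et al. 2015) in PyTorch.
import torch.nn as nn

class DQN(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            # Input: a stack of 4 grayscale 84x84 frames.
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),  # one Q-value per action
        )

    def forward(self, x):
        return self.net(x)

q = DQN(n_actions=18)  # Atari's full joystick action set
print(sum(p.numel() for p in q.parameters()))  # roughly 1.7 million parameters
```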
I suppose that the network topology of these brains is essentially normal, isn’t it? If that’s the case, then all the modules are still there, they are just squeezed against the skull wall.
From what I’ve read, the topology is radically deformed, modules are lost, and the timing between the remaining modules is completely changed; it is massive brain damage. It is so strange that these patients can still think at all that this has led some neuroscientists to seriously consider that cognition comes from something other than neurons and synapses.
Under the universal learning machine hypothesis, by contrast, we would expect speech impairment following localized brain damage to heal quickly in most cases, as other brain areas are recruited to the task.
Not at all: relearning language would take at least as much time and computational power as learning it in the first place. Language is perhaps the most computationally challenging thing that humans learn; it takes roughly a decade to reach a fluent adult level. Children learn faster because they have far more free cortical capacity. All of this is consistent with the ULH, and I bet it could even roughly predict the time required to relearn language, although measuring the exact extent of damage to the language centers is probably difficult.
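As a toy illustration of the kind of prediction I mean (every number below is made up; only the scaling logic matters): if acquiring a skill requires a roughly fixed amount of training, then relearning time should scale with the fraction of the learned circuitry destroyed, and inversely with the free capacity available for recruitment.

```python
# Toy back-of-envelope model; all numbers are illustrative, not measured.
def relearn_years(initial_years=10.0, fraction_lost=0.8, free_capacity=0.2):
    """Years to relearn a skill that took `initial_years` to acquire.

    fraction_lost: share of the skill's learned wiring destroyed.
    free_capacity: spare cortical capacity available for retraining,
    relative to a child's (children ~1.0, adults far lower).
    """
    return initial_years * fraction_lost / free_capacity

print(relearn_years())                   # 40.0 years: effectively never, for an adult
print(relearn_years(free_capacity=1.0))  # 8.0 years: a child can recover
```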
This suggests that our language processing machinery is hard-wired to a significant extent.
Absolutely not: you can look at the typical language modules under a microscope, and they are basically the same as the other cortical modules. Furthermore, there is no strong case for any mechanism that could encode significant genetically predetermined, task-specific wiring complexity into the cortex. It is just like an ANN: the wiring is random. The modules are all basically the same.
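A toy sketch of that intuition: two networks with identical architecture and independent random initializations, trained on different toy tasks (both tasks hypothetical, nothing brain-specific), end up computing different functions purely as a consequence of the data they see.

```python
# Identical randomly wired "modules" specialize according to their inputs.
import numpy as np

rng = np.random.default_rng(0)

def train_module(xs, ys, steps=2000, lr=0.5):
    """Train a tiny one-hidden-layer tanh network by full-batch gradient descent."""
    W1, w2 = rng.normal(0, 1, (8, 2)), rng.normal(0, 1, 8)
    for _ in range(steps):
        h = np.tanh(xs @ W1.T)               # hidden activations
        err = h @ w2 - ys                    # prediction error
        gh = np.outer(err, w2) * (1 - h**2)  # backprop through tanh
        w2 -= lr * (h.T @ err) / len(xs)
        W1 -= lr * (gh.T @ xs) / len(xs)
    return W1, w2

def accuracy(module, xs, ys):
    W1, w2 = module
    return np.mean(np.sign(np.tanh(xs @ W1.T) @ w2) == ys)

xs = rng.uniform(-1, 1, (200, 2))
task_a = np.sign(xs[:, 0])             # a linearly separable task
task_b = np.sign(xs[:, 0] * xs[:, 1])  # an XOR-like task

module_a = train_module(xs, task_a)
module_b = train_module(xs, task_b)
print(accuracy(module_a, xs, task_a))  # typically near 1.0
print(accuracy(module_b, xs, task_b))  # typically well above chance
```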