The DeepMind Atari agent is bigger and the task is sexier, but calling it an early precursor to AGI seems far-fetched.
Hardly. It can learn a wide variety of tasks, many at above-human level, across a range of environments, all with only a few million parameters. It was on the cover of Nature for a reason.
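For a sense of scale, here is a minimal sketch of that kind of network, assuming the standard architecture reported in the Nature paper (Mnih et al. 2015) and PyTorch as the framework; the layer sizes come from that paper, not from this discussion.

```python
# Minimal sketch of the Nature DQN architecture (assumption: PyTorch;
# layer sizes follow Mnih et al. 2015, not this discussion).
import torch
import torch.nn as nn

class AtariDQN(nn.Module):
    def __init__(self, n_actions: int = 18):
        super().__init__()
        # Three conv layers over 4 stacked 84x84 grayscale frames,
        # followed by two fully connected layers.
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

net = AtariDQN()
# Prints roughly 1.7 million learned parameters: the "few million" ballpark.
print(sum(p.numel() for p in net.parameters()))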
Remember a mouse brain has the same core architecture as a human brain. The main components are all there and basically the same—just smaller—and with different size allocations across modules.
I suppose the network topology of these brains is essentially normal, isn’t it? If so, all the modules are still there; they are just squeezed against the skull wall.
From what I’ve read, the topology is radically deformed, modules are lost, and timing between the remaining modules is totally changed; it’s massive brain damage. It’s so weird that these people can still think at all that it has led some neuroscientists to seriously consider that cognition comes from something other than neurons and synapses.
Under the universal learning machine hypothesis, instead, we would expect speech impairment following localized brain damage to heal quickly in most cases, as other brain areas are recruited to the task.
Not at all: relearning language would take at least as much time and computational power as learning it in the first place. Language is perhaps the most computationally challenging thing humans learn; it takes roughly a decade to reach a fluent adult level. Children learn faster because they have far more free cortical capacity. All of this is consistent with the ULH, and I bet it could even roughly predict the time required for relearning language, although measuring the exact extent of damage to the language centers is probably difficult.
This suggests that our language processing machinery is hard-wired to a significant extent.
Absolutely not, because you can examine the typical language modules under a microscope, and they are basically the same as the other cortical modules. Furthermore, there is no strong case for any mechanism that could encode significant genetically predetermined, task-specific wiring complexity into the cortex. It is just like an ANN: the initial wiring is random, and the modules are all basically the same.
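To make that analogy concrete, here is a minimal sketch, again assuming PyTorch; the module sizes are arbitrary illustrative numbers, not measurements of any cortical area. The point is only that the architecture is generic and the initial wiring is random, so any specialization would come entirely from what each module is trained on.

```python
# Minimal sketch of the analogy: every "module" shares one generic
# architecture, and the initial wiring (weights) is random, not task-specific.
import torch.nn as nn

def make_cortical_module(in_dim: int = 256, hidden: int = 512,
                         out_dim: int = 256) -> nn.Sequential:
    # Sizes are illustrative assumptions, not anatomical measurements.
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

# A "language" area and a "vision" area start out architecturally identical;
# only the data each one is trained on would differentiate them.
language_area = make_cortical_module()
vision_area = make_cortical_module()
```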