I have some technical background in neuromorphic AI.
There are certainly things the current deep learning paradigm is bad at that are critical to animal intelligence: e.g. power efficiency, highly recurrent networks, and complex internal dynamics.
It’s unclear to me whether any of these are necessary for AGI. Something, something executive function and global workspace theory?
I once would have said that feedback circuits used in the sensory cortex for predictive coding were a vital component, but apparently transformers can do similar tasks using purely feedforward methods.
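To make the predictive-coding idea concrete, here's a minimal illustrative sketch (my own toy construction, not from any specific cortical model): a latent estimate generates a top-down prediction of the input, and a feedback loop repeatedly pushes the prediction error back to refine that estimate. The weights, learning rate, and dimensions are all arbitrary choices for illustration.

```python
import numpy as np

# Toy predictive-coding loop: a latent state z generates a top-down
# prediction W @ z of the input x; the prediction error is fed back
# to update z until the prediction matches the input.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))           # fixed generative weights (assumed known)
x = W @ np.array([1.0, -0.5, 2.0])    # an input the model can fully explain

z = np.zeros(3)                        # latent estimate, refined by feedback
lr = 0.02
for _ in range(600):
    error = x - W @ z                  # bottom-up prediction error
    z += lr * (W.T @ error)            # feedback update of the latent state

print(np.linalg.norm(x - W @ z))      # residual error shrinks toward zero
```

The contrast with a transformer is that this inference is an iterative feedback loop at run time, whereas a feedforward pass computes its answer in a fixed number of layers with no such loop.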
My guess is that DL's lead in scale and technology is sufficient that it will hit AGI first, even if a more neuromorphic approach might be orders of magnitude more computationally efficient.
Where neuro AI is most useful in the near future is for embodied sensing and control, especially with limited compute or power. However, those constraints would seem to drastically curtail the potential for AGI.