True but misleading? Isn’t the brain’s “architectural prior” a heckuva lot more complex than the things used in DL?
The full specification of the DL system includes the microcode, OS, etc. Likewise, much of the brain's complexity is in the smaller ‘oldbrain’ structures that are the equivalent of a base robot OS. The architectural prior I speak of is the complexity on top of that, which separates us from some ancient earlier vertebrate brain. But again, see the brain as a ULM post, which covers the extensive evidence for emergent learned complexity from simple architectures/algorithms (now the dominant hypothesis in neuroscience).
I’m not convinced these DL analogies are useful—what properties do brains and deepnets share that renders the analogies useful here?
Most everything above the hardware substrate. But I've already provided links to sections of my articles addressing the convergence of DL and neuroscience, with many dozens of references. So it'd probably be better to focus on exactly which specific key analogies/properties you believe diverge.
DL is a pretty specific thing
DL is extremely general: it's just efficient approximate Bayesian inference over circuit spaces. It doesn't imply any specific architecture, and doesn't even strongly imply any specific approximate inference/learning algorithm (1st-order and approximate 2nd-order methods are both common).
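To make that last point concrete, here's a minimal numpy sketch (toy quadratic loss, illustrative constants only, not any particular system's algorithm) of the same parameter search driven first by a plain 1st-order update, then by an approximate 2nd-order one using diagonal preconditioning, RMSProp/Adam style:

```python
import numpy as np

rng = np.random.default_rng(0)
w0 = rng.normal(size=4)          # toy "circuit" parameters
grad = lambda w: 2 * w           # gradient of loss(w) = ||w||^2

# 1st-order: plain gradient descent
w = w0.copy()
for _ in range(100):
    w -= 0.1 * grad(w)
print(w)                         # -> ~0

# approximate 2nd-order: diagonal preconditioning (RMSProp-style)
w, v = w0.copy(), np.zeros_like(w0)
for _ in range(200):
    g = grad(w)
    v = 0.9 * v + 0.1 * g**2             # running curvature proxy
    w -= 0.05 * g / (np.sqrt(v) + 1e-8)  # step rescaled by estimated curvature
print(w)                         # -> near 0 as well
```

Both converge to the same solution; the choice of update rule is an efficiency detail, not part of what "DL" means.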
E.g. what if working memory capacity is limited by the noisiness of neural transmission, and we can reduce the noisiness through gene edits?
Training to increase working memory capacity has near-zero effect on IQ or downstream intellectual capabilities; see Gwern's reviews and experiments. Working memory capacity is important in both brains and ANNs (transformers), but it comes from large fast-weight synaptic capacity, not simple hacks.
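For the ANN side, here's a rough numpy sketch of one fast-weight mechanism, an outer-product associative memory in the spirit of Ba et al.'s fast weights (dimensions and constants are placeholders, not tuned values):

```python
import numpy as np

d = 64
A = np.zeros((d, d))     # fast weights: a short-term synaptic store
lam, eta = 0.95, 0.5     # decay and write strength (assumed values)

def write(A, h):
    """Hebbian-style outer-product write with decay."""
    return lam * A + eta * np.outer(h, h)

def read(A, q):
    """Retrieve the stored pattern most associated with query q."""
    return A @ q

rng = np.random.default_rng(0)
items = [rng.normal(size=d) for _ in range(3)]
for h in items:
    A = write(A, h)

# Querying with a stored item returns (approximately) itself:
recalled = read(A, items[-1])
print(np.corrcoef(recalled, items[-1])[0, 1])  # close to 1
```

The point is that capacity here lives in the d×d synaptic matrix; you can't meaningfully expand it with a training trick layered on top.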
Noise is important for sampling—adequate noise is a feature, not a bug.
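A hedged illustration of that claim: a toy 1-D Langevin sampler (all constants invented). With injected noise the update samples the target distribution; remove the noise and the identical update merely descends to the mode:

```python
import numpy as np

rng = np.random.default_rng(0)
grad_logp = lambda x: -x            # grad log-density of target N(0, 1)

def run(noise_scale, steps=20000, eps=0.01, x=3.0):
    xs = []
    for _ in range(steps):
        x += 0.5 * eps * grad_logp(x) + noise_scale * np.sqrt(eps) * rng.normal()
        xs.append(x)
    return np.array(xs)

samples = run(1.0)[5000:]             # drop burn-in
print(samples.mean(), samples.std())  # ~0, ~1: noise -> posterior samples
print(run(0.0)[-1])                   # ~0: no noise -> mode only
```

Dial the noise to zero and the system loses the ability to represent uncertainty at all, which is the sense in which adequate noise is a feature.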
[Scaling law theories]
Sure, here are a few: the quantization model, scaling laws from the data manifold, and a statistical model.
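For reference, the shared empirical regularity those theories try to explain is a power law in loss versus scale. A tiny numpy sketch (all constants invented) of that functional form and exponent recovery:

```python
import numpy as np

a, alpha, L_inf = 5.0, 0.5, 1.0      # assumed "true" constants
N = np.logspace(3, 9, 20)            # model sizes
L = a * N**-alpha + L_inf            # idealized scaling curve L(N)

# recover the exponent from the excess loss on log-log axes
slope, _ = np.polyfit(np.log(N), np.log(L - L_inf), 1)
print(-slope)                        # ~0.5 == alpha
```

Each theory above offers a different mechanism (discrete skill quanta, data-manifold dimension, or statistical approximation error) for why alpha takes the value it does.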