Memory/circuitry is pretty cheap for the brain, but energy is not. Accessing memory requires moving bits around, which costs energy per unit distance (and this can dominate the cost of computing on bits at optimally minimal device sizes).
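As a rough back-of-envelope sketch of that claim (the wire and multiply-accumulate energies below are illustrative order-of-magnitude assumptions for recent CMOS, not measurements):

```python
# Back-of-envelope: moving bits vs. computing on them.
# Both energy figures are rough illustrative assumptions.

WIRE_ENERGY_J_PER_BIT_MM = 0.1e-12  # ~0.1 pJ to move one bit 1 mm on-chip (assumed)
MAC_ENERGY_J = 1.0e-12              # ~1 pJ for a 32-bit multiply-accumulate (assumed)
WORD_BITS = 32

def move_energy(bits: int, distance_mm: float) -> float:
    """Energy (J) to move `bits` a given distance across the chip."""
    return bits * distance_mm * WIRE_ENERGY_J_PER_BIT_MM

# Moving one 32-bit word across a 10 mm die costs ~30x the multiply-accumulate
# that consumes it, so data movement dominates the compute itself:
print(move_energy(WORD_BITS, 10.0) / MAC_ENERGY_J)  # ~32.0
```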
Thus energy efficiency requires computing as close to memory as possible. Biology accordingly has synapses which are both the bulk compute element and the bulk memory element in one device: you can't get closer than that.
Neurons then serve as the ADCs and the longer-distance communication units.
The neuromorphic or processor-in-memory (PIM) architecture is fundamentally a number of OOM more energy efficient than the von Neumann architecture, as the latter requires moving each connection weight/synapse across the entire device, whereas the brain only has to move the neuron values: roughly a 10,000x advantage. For von Neumann machines to overcome this gap, they end up having to heavily amortize the memory fetches by reusing the fetched values across many computations: matrix-matrix multiplication instead of vector-matrix multiplication.
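A minimal sketch of that amortization, assuming a hypothetical N x N weight matrix streamed from off-chip memory (the sizes and batch values are illustrative):

```python
# Sketch of why von Neumann machines batch: off-chip weight fetches per MAC.

def fetches_per_mac(n: int, batch: int) -> float:
    """Weight values fetched from memory per multiply-accumulate.

    A vector-matrix product (batch=1) uses each of the n*n weights exactly
    once, so every MAC pays a full fetch. A matrix-matrix product with
    `batch` input vectors reuses each fetched weight `batch` times.
    """
    macs = n * n * batch       # total multiply-accumulates performed
    weight_fetches = n * n     # each weight crosses the memory bus once
    return weight_fetches / macs

print(fetches_per_mac(4096, 1))    # 1.0     -> memory-bound: every MAC pays a fetch
print(fetches_per_mac(4096, 256))  # ~0.0039 -> fetch cost amortized 256x
```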
Given that you are using the neuromorphic/PIM approach, as you should for energy efficiency, you still have a tradeoff between size and speed. I do believe that smaller animals have faster brains in general, but the tradeoff is complex, and in general larger model size seems to dominate speed for predictive power. This should be obvious in the limit: a fast learning machine with very little memory can't remember what it's already learned, and ends up burning all its compute just relearning things.
Hanson's EM world sounds about right, except I doubt that brain scanning and uploading will precede DL/neuromorphic AGI.
The limits of Moore's Law are fairly well known in the device physics research community, and there really aren't multiple OOM of transistor energy efficiency left; we are already pretty close. Moving to neuromorphic/PIM can provide some OOM of advantage, but it's a one-time gain. Continuing Moore's Law-style growth will soon require exotic computing: reversible or quantum.
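To put "pretty close" in rough numbers, a hedged sketch: the two physical constants below are standard, but the ~1 aJ figure for a modern minimal switch is an order-of-magnitude assumption for illustration.

```python
import math

# Rough headroom check: how far is an assumed ~1 aJ minimal CMOS switch
# from the physical floors on switching energy?

K_B = 1.380649e-23                # Boltzmann constant, J/K
T = 300.0                         # room temperature, K
EV = 1.602176634e-19              # one electronvolt, J

landauer = K_B * T * math.log(2)  # ~2.9e-21 J: minimum energy to erase one bit
reliable = 1.0 * EV               # ~1.6e-19 J: rough floor for a *reliable* switch
device = 1e-18                    # ~1 aJ: assumed modern minimal switch energy

print(f"device vs Landauer limit:    {device / landauer:6.0f}x")  # ~348x
print(f"device vs reliability floor: {device / reliable:6.1f}x")  # ~6x: under one OOM left
```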
Thanks!
Thank you Jacob, I will have to mull this all over.
Your post made me update majorly on many topics.