This blew my mind a bit. So why the heck are researchers trying to train neural nets when the nodes of those nets are clearly subpar?
Actually the exact opposite is true: ANN neurons and synapses are more powerful, per neuron and per synapse, than their biological equivalents. ANN neurons signal and compute with 16- or 32-bit floating-point numbers, rather than 1-bit binary pulses combined with low-precision analog summation.
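To make the contrast concrete, here is a toy sketch in Python. The 1-bit spikes, the coarse weight quantization, and the threshold are illustrative caricatures I'm assuming for the example, not a biophysical neuron model:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100).astype(np.float32)  # input activations
w = rng.standard_normal(100).astype(np.float32)  # synaptic weights

# ANN neuron: weighted sum of 32-bit floats, each value carrying
# up to ~2^32 distinguishable states.
ann_out = np.float32(np.dot(w, x))

# Caricature of a spiking neuron: inputs are 1-bit pulses, weights are
# coarsely quantized (steps of 1/8 here), summed and thresholded to a
# 1-bit output pulse.
spikes = (x > 0).astype(np.float32)         # 1-bit input signals
w_low = np.round(w * 8) / 8                 # low-precision weights (assumed)
bio_out = float(np.dot(w_low, spikes) > 0)  # 1-bit output

print(ann_out, bio_out)
```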
The difference depends entirely on the problem, and the ideal strategy probably involves a complex heterogeneous mix of units of varying precision (which you see in the synaptic distribution in the cortex, btw), but in general with high-precision neurons/synapses you need fewer units to implement the same circuit.
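As a back-of-the-envelope illustration of that unit-count tradeoff: one 8-bit weight can be emulated by eight fixed binary synapses with power-of-two connection strengths. Same circuit, 8x the units. This decomposition is purely illustrative, not a claim about how cortex actually wires things:

```python
# One 8-bit synaptic weight w in [0, 255] decomposed into eight
# 1-bit "synapses" with fixed power-of-two wiring strengths.
w = 173                                  # one high-precision weight
bits = [(w >> k) & 1 for k in range(8)]  # eight 1-bit synapses
scales = [2 ** k for k in range(8)]      # fixed connection strengths

# The binary ensemble reproduces the single high-precision weight.
assert sum(b * s for b, s in zip(bits, scales)) == w
print(bits)  # [1, 0, 1, 1, 0, 1, 0, 1]
```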
Also, I should mention that some biological neural circuits implement temporal coding (as in the hippocampus), which allows a neuron to send somewhat higher-precision signals, on the order of 5 to 8 bits per spike. This has other tradeoffs, though, so it isn't worth it in all cases.
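A rough way to see where a figure like 5 to 8 bits per spike can come from: if a spike's arrival time within a coding window can be resolved into discrete slots, one spike carries log2(window / resolution) bits. The ~125 ms window (roughly one hippocampal theta cycle) and the millisecond-scale resolutions below are illustrative assumptions:

```python
import math

def bits_per_spike(window_ms: float, resolution_ms: float) -> float:
    # A spike timed within `window_ms`, resolvable to `resolution_ms`,
    # distinguishes window/resolution slots -> log2 of that many bits.
    return math.log2(window_ms / resolution_ms)

# Illustrative numbers: a ~125 ms theta cycle with 1-4 ms timing
# precision lands in the 5-7 bit range quoted above.
for res in (1.0, 2.0, 4.0):
    print(f"{res} ms resolution: {bits_per_spike(125.0, res):.1f} bits/spike")
```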
Brains are more powerful than current ANNs because current ANNs are incredibly small. All of the recent success in deep learning, where ANNs are suddenly dominating everywhere, was enabled by using GPUs to train ANNs in the range of 1 to 10 million neurons and 1 to 10 billion synapses, which is roughly the insect-to-lizard brain size range. (We aren't even up to mouse-sized ANNs yet.)
That is still 3 to 4 orders of magnitude smaller than the human brain, so we have a long way to go in terms of performance. Thankfully, ANN performance is more than doubling every year (combined hardware and software improvements).
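Plugging in the numbers: the ANN synapse count is the top end quoted above, and the ~1e14 human synapse count is a common order-of-magnitude estimate, not a precise figure:

```python
import math

ann_synapses = 10e9      # ~10 billion synapses, top end of the range above
human_synapses = 1e14    # ~100 trillion synapses (rough estimate)

# Gap in orders of magnitude between current ANNs and the human brain.
gap = math.log10(human_synapses / ann_synapses)
print(f"gap: {gap:.1f} orders of magnitude")  # -> 4.0

# At one doubling per year, closing a 10^4 gap takes log2(10^4) years.
years = math.log2(human_synapses / ann_synapses)
print(f"years to close the gap at 2x/year: {years:.1f}")  # -> ~13.3
```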
The fact that current ANNs are small doesn't mean you can learn nothing from them, and it doesn't mean they can't already handle a variety of machine learning tasks.