Technically true but also irrelevant. At the physical level, a modern transistor-based digital computer running an ANN simulation is also vastly more complex than the node-level ANN model.
In terms of simulation complexity, a modern GPU is actually more complex than the brain. It would take at most on the order of 10^17 op/s to simulate a brain (10^14 synapses @ 10^3 Hz), but more than 10^18 op/s to simulate a GPU (10^9 transistors @ 10^9 Hz).
Simulating a brain at any level of detail beyond its actual computational power is pointless for AI; the ANN level is exactly the right level of abstraction for actual performance.
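The arithmetic behind those two estimates can be sketched directly. The one-op-per-element-per-tick assumption is the comment's own, not an established figure:

```python
# Back-of-the-envelope comparison of simulation costs, assuming one op
# per element per tick: 10^14 synapses @ 10^3 Hz for a brain,
# 10^9 transistors @ 10^9 Hz for a GPU.
brain_ops = 10**14 * 10**3   # synapses * firing rate -> 10^17 op/s
gpu_ops = 10**9 * 10**9      # transistors * clock rate -> 10^18 op/s

print(f"brain: {brain_ops:.0e} op/s, gpu: {gpu_ops:.0e} op/s")
print(f"gpu/brain ratio: {gpu_ops // brain_ops}x")
```

By this crude measure the GPU is an order of magnitude more expensive to simulate element-by-element than the brain.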
> Technically true but also irrelevant. At the physical level, a modern digital transistor based computer running an ANN simulation is also vastly more complex than the node-level ANN model.
ANNs are not neurons. We can't accurately simulate even what a single neuron does. Neurons can express proteins when specific hormones are present in their environment. The function of roughly a third of the human genome is unknown.
Increasing or decreasing the number of channels for various substances in the cell membrane takes proteins. That's part of long-term plasticity.
> It would take at most on the order of 10^17 op/s second to simulate a brain (10^14 synapses @ 10^3 hz),
That simulation completely ignores neurotransmitters floating around in the brain and many other factors. You can simulate an ANN at one op per synapse, but a simulation of a real brain is very incomplete at that level.
This blew my mind a bit. So why the heck are researchers trying to train neural nets when the nodes of those nets are clearly subpar?
Actually the exact opposite is true: ANN neurons and synapses are more powerful per neuron and per synapse than their biological equivalents. ANN neurons signal and compute with real numbers at 16 or 32 bits of precision, rather than 1-bit binary pulses with low-precision analog summation.
The difference depends entirely on the problem, and the ideal strategy probably involves a complex heterogeneous mix of units of varying precision (which, incidentally, you see in the synaptic distribution in the cortex), but in general with high-precision neurons/synapses you need fewer units to implement the same circuit.
Also, I should mention that some biological neural circuits implement temporal coding (as in the hippocampus), which allows a neuron to send somewhat higher-precision signals (on the order of 5 to 8 bits per spike). This has other tradeoffs, though, so it isn't worth it in all cases.
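The 5-to-8-bit figure can be sketched as an information-theoretic bound: if a spike can land anywhere in a coding window and downstream circuits resolve its timing to some jitter, the spike carries about log2(window/jitter) bits. The window and jitter values below are illustrative assumptions, not measured constants:

```python
import math

def bits_per_spike(window_ms, jitter_ms):
    # A spike whose timing is resolvable to jitter_ms within a window_ms
    # coding window can occupy ~window/jitter distinguishable positions,
    # i.e. it carries log2(window/jitter) bits of information.
    return math.log2(window_ms / jitter_ms)

# Illustrative numbers: a ~25 ms oscillation-scale coding window with
# ~0.1 to 1 ms timing precision gives roughly 5 to 8 bits per spike.
print(bits_per_spike(25, 1))     # ~4.6 bits
print(bits_per_spike(25, 0.1))   # ~8.0 bits
```

The tradeoff mentioned above shows up here too: squeezing more bits out of each spike requires tighter timing precision, which costs energy and circuitry.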
Brains are more powerful than current ANNs because current ANNs are incredibly small. All of the recent success in deep learning, where ANNs are suddenly dominating everywhere, was enabled by using GPUs to train ANNs in the range of 1 to 10 million neurons and 1 to 10 billion synapses, which is basically the insect-to-lizard brain size range. (We aren't even up to mouse-sized ANNs yet.)
That is still 3 to 4 orders of magnitude smaller than the human brain, so we have a long way to go in terms of performance. Thankfully ANN performance is more than doubling every year (combined hardware and software improvement).
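Taking the figures above at face value (10^10 ANN synapses today vs ~10^14 in a human brain, capability doubling yearly), the gap closes in a bit over a decade. A quick sketch, with both numbers taken from the comment rather than measured:

```python
import math

ann_synapses = 10**10      # upper end of current ANNs, per the comment
brain_synapses = 10**14    # human brain estimate used in this thread

gap = brain_synapses / ann_synapses
print(f"gap: {gap:.0e} (= {math.log10(gap):.0f} orders of magnitude)")

# At a doubling every year, the number of years needed to close the
# gap is log2(gap).
print(f"years at 2x/year: {math.log2(gap):.1f}")
```

A 10^4 gap at 2x/year works out to roughly 13 years, which of course assumes the doubling trend continues unchanged.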
The fact that they are subpar doesn't mean you can learn nothing from ANNs. It also doesn't mean ANNs can't handle a variety of machine learning tasks.
> ANN are no neurons. We can’t accurately simulate even what a single neuron does.
Everything depends on your assumed simulation scale and accuracy. If you want to be pedantic, you could say we can’t even simulate transistors, because clearly our simulations of transistors are not accurate down to the quantum level.
However, the physics of computation allows us to estimate the approximate level of computational scale separation that any conventional (irreversible) physical computer must have to function correctly (i.e., to signal reliably in a noisy environment).
The Landauer limit on switching energy is one bound, but most of the energy (in brains or modern computers) goes to wire transmission, and one can derive bounds on signal propagation energy in the vicinity of ~1 pJ/bit/mm for reliable signaling. From that we can plug in the average interconnect distance between synapses and neurons (in both directions) and get a maximum computation rate on the order of 10^15 op/s, probably closer to 10^13 low-precision op/s. Deriving all that is well beyond the scope of a little comment.
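A rough version of that wire-energy argument, assuming a ~10 W signaling budget and the ~1 pJ/bit/mm figure; the average wire lengths below are illustrative guesses, which is exactly why the estimate spans 10^13 to 10^15:

```python
power_w = 10.0           # approximate brain power budget, per this thread
j_per_bit_mm = 1e-12     # ~1 pJ per bit per mm of reliable signaling

def max_rate(avg_wire_mm):
    # Each signaling event costs (energy/bit/mm) * average distance;
    # the power budget divided by that cost bounds the event rate.
    return power_w / (j_per_bit_mm * avg_wire_mm)

print(f"{max_rate(1.0):.0e} op/s")    # ~1 mm average wires  -> ~1e13 op/s
print(f"{max_rate(0.01):.0e} op/s")   # ~10 um average wires -> ~1e15 op/s
```

The bound is dominated by the assumed average wire length, which is why the derived rate is only pinned down to within a couple of orders of magnitude.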
> The Landauer limit on switching energies is one bound, but most of the energy (in brains or modern computers) goes to wire transmission energy, and one can derive bounds on signal propagation energy in the vicinity of ~1 pJ/bit/mm for reliable signaling.
The energy count for signal transmission doesn't include changing the number of ion channels a neuron has. You might model short-term plasticity, but you don't get long-term plasticity.
You also don't model how hormones and other neurotransmitters float around in the brain. An ANN that deals only with electrical signal transmission misses essential parts of how the brain works. That doesn't make it bad for the purposes of being an ANN, but it's lacking as a model of the brain.
Sure, all of that is true, but of the brain's ~10 watt budget, more than half is spent on electrical signaling and computation, so all the other stuff you mention increases the intrinsic simulation complexity by at most a factor of 2.
Are you aware of how complex the folding of even a single protein is? It might not take much energy, but it's very complex.
If you have 1000 different types of proteins swimming around in a neuron, interacting with each other, I don't think you capture that with a factor of two.