ANNs are not neurons. We can’t accurately simulate even what a single neuron does.
Everything depends on your assumed simulation scale and accuracy. If you want to be pedantic, you could say we can’t even simulate transistors, because clearly our simulations of transistors are not accurate down to the quantum level.
However, the physics of computation allows us to estimate the approximate level of computational scale separation that any conventional (irreversible) physical computer must have to function correctly (signal reliably in a noisy environment).
The Landauer limit on switching energies is one bound, but most of the energy (in brains or modern computers) goes to wire transmission, and one can derive bounds on signal propagation energy in the vicinity of ~1 pJ/bit/mm for reliable signaling. From there you can plug in the average interconnect distance between synapses and neurons (both directions) and get a maximum computation rate on the order of 10^15 ops/s, probably closer to 10^13 low-precision ops/s. Deriving all that is well beyond the scope of a little comment.
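The shape of that estimate is simple enough to sketch, though, assuming the ~1 pJ/bit/mm figure, a ~10 W brain spending roughly half its power on signaling, and a few illustrative interconnect distances (the distances here are my assumptions, not derived values):

```python
# Back-of-envelope bound on irreversible compute rate from wire energy.
# Assumptions (hedged): ~1 pJ per bit per mm of reliable signaling,
# ~5 W of a ~10 W brain budget spent on signaling. Distances illustrative.

E_PER_BIT_MM = 1e-12      # J per bit per mm (assumed wire-energy bound)
SIGNALING_POWER = 5.0     # W, roughly half of the ~10 W brain budget

for dist_mm in (0.01, 0.1, 1.0):          # assumed average interconnect distance
    energy_per_op = E_PER_BIT_MM * dist_mm  # J per one-bit signaling event
    ops_per_sec = SIGNALING_POWER / energy_per_op
    print(f"{dist_mm:>5} mm -> ~{ops_per_sec:.0e} ops/s")
```

With sub-0.1 mm effective distances you land near 10^14–10^15 ops/s, and at ~1 mm you drop toward 10^12–10^13, which is where the "probably closer to 10^13" figure comes from.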
The energy count for signal transmission doesn’t include changing the number of ion channels a neuron has. You might model short-term plasticity, but you don’t get long-term plasticity.
You also don’t model how hormones and other neurotransmitters float around in the brain. An ANN deals only with electrical signal transmission and misses essential parts of how the brain works. That doesn’t make it bad for the purposes of being an ANN, but it’s lacking as a model of the brain.
Sure, all of that is true, but of the brain’s ~10 watt budget, more than half is spent on electrical signaling and computation, so all the other stuff you mention at most increases the intrinsic simulation complexity by a factor of 2.
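The factor-of-2 claim is just budget arithmetic: if signaling takes more than half the power, everything else combined is at most an equal share (the 6 W split below is illustrative, not a measured number):

```python
# Sketch of the factor-of-2 bound: if signaling uses more than half of
# the total budget, total/signaling is strictly less than 2.
total_power = 10.0   # W, approximate brain power budget
signaling = 6.0      # W, illustrative value for "more than half"
everything_else = total_power - signaling   # 4.0 W left for all other processes
overhead_factor = total_power / signaling   # < 2 whenever signaling > total/2
print(f"{overhead_factor:.2f}")
```

This only bounds the *energy* share of the non-signaling processes, which is exactly the point the replies below push back on.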
Are you aware of the complexity of the folding of a single protein? It might not take much energy, but it’s very complex.
If you have 1000 different types of proteins swimming around in a neuron, interacting with each other, I don’t think you capture that by adding a factor of two.