I should probably rephrase the brain optimality argument, as it isn’t just about energy per se. The brain sits on a Pareto efficiency surface: it is optimal with respect to some complex tradeoffs between area/volume, energy, and speed/latency.
Energy is the dominant constraint, so the brain is much closer to the energy limits than to the others. The typical futurist understanding of the Landauer limit is not even wrong; it is way off, as I point out in my earlier reply below and related links.
A consequence of the brain being near optimal for energy of computation for intelligence given its structure is that it is also near optimal in terms of intelligence per switching event.
The brain computes with just around 10^14 switching events per second (10^14 synapses × a ~1 Hz average firing rate). That 1 Hz figure is something of an upper bound for the average firing rate.1
The typical synapse is very small, has a low SNR, and is thus equivalent to a low-bit op; it only activates maybe 25% of the time.2 We can roughly compare these minimal-SNR analog ops to the high-precision single-bit ops that digital transistors implement; the Landauer principle lets us rate the two as reasonably equivalent in computational power.
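For concreteness, the Landauer principle puts a temperature-dependent floor of kT ln 2 on the energy dissipated per irreversible bit operation. A quick back-of-envelope at body temperature (my own sketch; the numbers are standard physical constants, not figures from the comment):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
T = 310.0           # approximate human body temperature, K

# Landauer's principle: minimum energy dissipated per irreversible bit erasure
landauer_joules = k_B * T * math.log(2)

print(f"Landauer bound at {T:.0f} K: {landauer_joules:.2e} J per bit")
```

This works out to roughly 3 × 10^-21 J per bit, the floor against which any per-switching-event energy budget gets compared.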
So the brain computes with just ~10^14 switching events per second, which is essentially miraculous. A modern GPU, by comparison, uses perhaps 10^18 switching events per second.
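A minimal sketch of the arithmetic behind these rates, using the rough figures quoted above (the 10^18 GPU number is the comment's ballpark, not a measured spec):

```python
# Back-of-envelope switching-event rates, using the rough figures from the text.
synapse_count = 1e14     # approximate number of synapses in the human brain
avg_firing_hz = 1.0      # average firing rate; treated here as an upper bound

brain_events_per_s = synapse_count * avg_firing_hz   # ~1e14 events/s
gpu_events_per_s = 1e18                              # rough modern-GPU ballpark

print(f"brain: {brain_events_per_s:.0e} switching events/s")
print(f"GPU:   {gpu_events_per_s:.0e} switching events/s")
print(f"GPU/brain ratio: {gpu_events_per_s / brain_events_per_s:.0e}x")
```

So on these estimates a GPU performs roughly 10^4 times as many switching events per second as the brain, which is the gap the circuit-efficiency argument below turns on.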
So the important thing here is not just energy but overall circuit efficiency. The brain is extraordinarily efficient, and as far as we can tell near optimal, in its use of computation towards intelligence.
This explains why our best SOTA techniques across almost all of AI are some version of brain-like ANNs (the key defining principle being search/optimization over circuit space). It predicts that the best we can do for AGI is to reverse engineer the brain. Yes, eventually we will scale far beyond the brain, but that doesn’t mean we will use radically different algorithms.
A consequence of the brain being near optimal for energy of computation for intelligence given its structure is that it is also near optimal in terms of intelligence per switching events.
So the brain computes with just 10^14 switching events per second.
What do you mean by “given its structure”? Does this still leave open the possibility that a brain with some differences in organization could get more intelligence out of the same number of switching events per second?
Similarly, I assume the same argument applies to all animal brains. Do you happen to have stats on the number of switching events per second for e.g. the chimpanzee?