Eli this doesn’t make sense—the fact that digital logic switches are higher precision and more powerful, and thus have a higher minimum energy requirement, makes the brain/mind more impressive, not less.
The energy efficiency per op in the brain is rather poor in one sense—perhaps 10^5 times the minimum imposed by physics for a low-SNR analog op—but essentially all of this cost is wire energy.
The miraculous thing is how much intelligence the brain/mind achieves with such a tiny amount of computation in terms of low-level equivalent bit ops/second. It suggests that brain-like ANNs will absolutely dominate the long-term future of AI.
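A back-of-envelope sketch of the ~10^5 gap claimed above. Every number here is a rough illustrative assumption (brain power budget, synaptic op rate, and the "few hundred kT" floor for a low-SNR analog op), not a figure taken from the thread:

```python
# Illustrative check of the "~10^5 above the physical minimum" claim.
# All inputs below are rough assumptions, not measurements.
k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 310.0                     # body temperature, K
kT = k_B * T                  # ~4.3e-21 J

# Assumed floor for a low-SNR analog op: a few hundred kT.
analog_min = 250 * kT         # ~1e-18 J (assumption)

# Assumed brain budget: ~10 W of compute driving ~1e14 synaptic ops/s.
brain_power = 10.0            # W (assumption)
ops_per_sec = 1e14            # synaptic ops/s (assumption)
energy_per_op = brain_power / ops_per_sec   # ~1e-13 J per op

ratio = energy_per_op / analog_min
print(f"energy/op ≈ {energy_per_op:.1e} J, gap ≈ {ratio:.0e}x")
```

With these assumptions the gap comes out near 10^5, consistent with the claim; on this view the excess is dominated by moving signals along wires, not by the switching itself.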
Eli this doesn’t make sense—the fact that digital logic switches are higher precision and more powerful, and thus have a higher minimum energy requirement, makes the brain/mind more impressive, not less.
Nuh-uh :-p. The issue is that the brain’s calculations are probabilistic. When doing probabilistic calculations, you can either use very, very precise representations of computable real numbers to represent the probabilities, or you can use various lower-precision but natively stochastic representations, whose distribution over computation outcomes is the distribution being inferred.
Hence the brain is, on the one hand, very impressive for extracting inferential power from energy and mass, but on the other hand, “not that amazing” in the sense that it, too, begins to add up to normality once you learn a little about how it works.
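One concrete instance of a “lower-precision but natively stochastic representation” is stochastic computing: a probability p is carried by a random bitstream whose density of 1s is p, and ANDing two independent streams multiplies their probabilities. Each gate operation is a single low-precision bit op; accuracy comes from averaging many noisy samples rather than from wide floating-point words. A minimal sketch (the function names are ours, for illustration):

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def bitstream(p, n):
    """An n-bit stream in which each bit is 1 with probability p."""
    return [random.random() < p for _ in range(n)]

def estimate_product(p, q, n=100_000):
    """Estimate p*q by ANDing two independent stochastic bitstreams."""
    a, b = bitstream(p, n), bitstream(q, n)
    return sum(x and y for x, y in zip(a, b)) / n

est = estimate_product(0.8, 0.5)   # true product is 0.4
print(est)
```

The distribution over outcomes of the noisy computation is itself the quantity being represented, which is the sense in which low-precision stochastic hardware can still do exact-in-expectation probabilistic inference.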
When doing probabilistic calculations, you can either use very, very precise representations of computable real numbers to represent the probabilities, or you can use various lower-precision but natively stochastic representations, whose distribution over computation outcomes is the distribution being inferred.
Of course—and using, say, a flop to implement a low-precision synaptic op is inefficient by six orders of magnitude or so—but this just strengthens my point. Neuromorphic, brain-like AGI thus has huge potential performance improvements to look forward to, even without Moore’s Law.
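The “six orders of magnitude” here is also easy to sanity-check. Both numbers below are loose assumptions (a digital flop at roughly a picojoule including overheads, versus the assumed ~attojoule floor for a low-SNR analog op), chosen only to show the scale of the headroom:

```python
# Rough assumptions only -- not vendor figures.
flop_energy = 1e-12   # J per digital flop, incl. memory/wire overheads (assumption)
analog_op = 1e-18     # J, assumed floor for a low-SNR analog synaptic op

gap = flop_energy / analog_op   # headroom from dropping digital precision
print(f"~{gap:.0e}x potential efficiency headroom")
```

Under these assumptions the headroom is ~10^6, i.e. the claimed six orders of magnitude, independent of any further transistor scaling.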
Yes, if you could but dissolve your concept of “brain-like”/“neuromorphic” into actual principles about what calculations different neural nets embody.