When doing probabilistic calculations, you can either use very precise representations of computable real numbers for the probabilities, or various lower-precision but natively stochastic representations, whose distribution over computation outcomes is itself the distribution being inferred.
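A minimal sketch of the second option, in the style of stochastic computing (all names here are illustrative, not from the original comment): a probability p is represented as a Bernoulli bitstream, and multiplying two probabilities reduces to a bitwise AND of two independent streams — the "computation outcome" is a sample whose empirical frequency is the inferred probability.

```python
import random

def bernoulli_stream(p, n, rng):
    """Represent probability p as n independent Bernoulli(p) bits."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_mul(stream_a, stream_b):
    """AND of independent Bernoulli(p) and Bernoulli(q) streams
    yields a Bernoulli(p*q) stream: a 1-gate 'multiplier'."""
    return [a & b for a, b in zip(stream_a, stream_b)]

rng = random.Random(0)
n = 100_000
a = bernoulli_stream(0.6, n, rng)
b = bernoulli_stream(0.5, n, rng)
prod = stochastic_mul(a, b)
est = sum(prod) / n  # empirical estimate of 0.6 * 0.5 = 0.3
```

Each "op" here is a single-bit AND rather than a high-precision multiply; precision comes from stream length, and the result is only ever known up to sampling noise — which is exactly the trade the parent comment is pointing at.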
Of course — and using, say, a flop to implement a low-precision synaptic op is inefficient by six orders of magnitude or so — but this just strengthens my point. Neuromorphic, brain-like AGI thus has a huge potential performance improvement to look forward to, even without Moore’s Law.
Yes, if you could but dissolve your concept of “brain-like”/”neuromorphic” into actual principles about what calculations different neural nets embody.