On page 8 at the end of section 4.1:
Due to the need to iterate the vs until convergence, the predictive coding network had roughly a 100x greater computational cost than the backprop network.
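To make the source of that overhead concrete, here is a rough numpy sketch (mine, not the paper’s code) of a single predictive-coding weight update on a toy MLP: the value nodes v_l are relaxed for T steps before the weights move, and each relaxation step costs about as much as a full forward-plus-backward sweep. The layer sizes, T = 100, and the node learning rate lr_v are illustrative assumptions.

```python
# Rough sketch, not the paper's code: why relaxing the value nodes v_l to
# convergence makes one predictive-coding update ~T times more expensive
# than one backprop pass. Sizes, T and lr_v are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
sizes = [784, 256, 10]                               # toy MLP dimensions (assumed)
Ws = [rng.normal(0, 0.05, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def backprop_grads(x, y):
    """One forward and one backward sweep: ~2 big matmuls per layer."""
    acts = [x]
    for W in Ws:
        acts.append(np.tanh(W @ acts[-1]))
    delta = acts[-1] - y                             # output error
    grads = []
    for l in reversed(range(len(Ws))):
        delta = delta * (1 - acts[l + 1] ** 2)       # tanh'(pre-activation)
        grads.insert(0, np.outer(delta, acts[l]))
        delta = Ws[l].T @ delta
    return grads

def predictive_coding_update(x, y, T=100, lr_v=0.1):
    """Clamp the output node to the target and relax the hidden value nodes
    for T steps; each step costs roughly one forward-plus-backward sweep,
    so the total cost is ~T times that of backprop_grads."""
    v = [x]
    for W in Ws:
        v.append(np.tanh(W @ v[-1]))                 # feedforward initialisation
    v[-1] = y                                        # clamp output to the target
    for _ in range(T):
        preds = [np.tanh(W @ v[l]) for l, W in enumerate(Ws)]
        errs = [v[l + 1] - preds[l] for l in range(len(Ws))]
        for l in range(1, len(v) - 1):               # hidden nodes only
            # gradient descent on the energy sum_l ||errs[l]||^2 / 2
            dv = errs[l - 1] - Ws[l].T @ (errs[l] * (1 - preds[l] ** 2))
            v[l] -= lr_v * dv
    preds = [np.tanh(W @ v[l]) for l, W in enumerate(Ws)]
    errs = [v[l + 1] - preds[l] for l in range(len(Ws))]
    # At convergence these error-driven terms approximate the backprop
    # gradients (up to sign convention), per the paper's result.
    return [np.outer(errs[l] * (1 - preds[l] ** 2), v[l]) for l in range(len(Ws))]
```

Under these assumptions, one call to predictive_coding_update performs roughly 100 times the multiply-accumulates of one call to backprop_grads, which is where the quoted ~100x factor comes from.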
This seems to imply that artificial NNs are roughly 100x more computationally efficient (at the cost of not being able to grow, and probably lower fault tolerance, etc.). Still, I’m updating toward the view that simulating a brain requires much less compute than the raw neuron count would suggest.
That’s assuming the brain is using predictive coding to implement backprop, whereas it might instead be doing something more computationally efficient given its hardware limitations. (Indeed, the fact that this scheme is so inefficient should make you update against the brain actually doing it.)
Partly, yes. But partly the computation could be the cheap part compared to what it’s trading off against (ability to grow, fault tolerance, …). It is also possible that the brain’s architecture lets it incorporate a wider range of inputs that might not be possible to model with backprop (or not efficiently so).
I think that’s premature. This is just one (digital, synchronous) implementation of one model of a BNN that can be shown to converge to the same result as backprop. In a neuromorphic implementation of this circuit, the convergence would occur on the same time scale as the forward propagation.
Well, another advantage of the BNN is of course its high parallelism. But that doesn’t change the computational cost (the number of FLOPs required); it just spreads the work out across parallel units.
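As a toy illustration of that distinction (all numbers made up): parallel hardware divides the same work across more units, so wall-clock time shrinks while the FLOP count stays put.

```python
# Toy illustration with made-up numbers: parallelism changes wall-clock
# time, not the number of FLOPs an update requires.
total_flops = 1e12                  # work in one update (assumed)
per_unit_rate = 1e9                 # FLOPs per second per unit (assumed)
for units in (1, 1_000, 1_000_000):
    seconds = total_flops / (units * per_unit_rate)
    print(f"{units:>9} units: {total_flops:.0e} FLOPs, ~{seconds:.0e} s")
```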