Part 1b. Un-limitations of the explosive potential of neural nets
Neural nets are highly parallelizable, and so are the functional units built from them.
We're currently in the midst of performance gains from the application of deep learning.
Toolkits and hardware are parallelizing the work stream itself.
And the better we get at it, the more people and resources are drawn into the work.
I see the hardware gains getting multiplied by expanding techniques, multiplied by expanding toolkit availability, multiplied by expanding resources.
The building blocks are improving very fast right now. The bottleneck is going to come down to how hard system integration turns out to be.
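The compounding claim above can be made concrete with a toy calculation. A minimal sketch, with entirely assumed (hypothetical, not measured) per-year improvement factors for each area, showing how modest independent gains multiply into an explosive combined gain:

```python
# Illustrative sketch: if yearly gains in hardware, techniques,
# toolkits, and resources compound independently, the combined
# effect is their product. All factors below are assumptions.

def combined_gain(hardware, techniques, toolkits, resources):
    """Multiply independent per-period improvement factors."""
    return hardware * techniques * toolkits * resources

# Hypothetical per-year factors: each area improving 30-60% annually.
yearly = combined_gain(1.5, 1.4, 1.3, 1.6)
print(f"combined yearly gain: {yearly:.2f}x")

# Compounded over 5 years, the individually modest gains explode.
print(f"after 5 years: {yearly ** 5:.0f}x")
```

With these made-up numbers, four areas each improving well under 2x per year still combine to over 4x per year, and to three orders of magnitude over five years. The point is the structure of the argument (multiplication of independent factors), not the specific values.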