Thank you for making the point about existing network efficiencies! :)
The assumption, years ago, was that AGI would need 200x as many artificial weights and biases as a human’s 80 to 100 trillion synapses. Yet we now see models beating our MBA exams using only a fraction of that neuron count! The article above pointed to the difference between being “capable of 20%” and “impacting 20%”, and I would guess we’re already at the “20% capability” mark in terms of the algorithms themselves. Whenever a major company wants to, it can already reach human-level results with narrow AI that uses 0.05% as many synapses (0.05% of 100 trillion is roughly 50 billion parameters).
Yes. And regarding narrow AI: one idea I had a few years ago was that SOTA results from a major company are a process. A replicable, automatable process. So a major company could build a framework where you define your problem, provide large numbers of examples (usually via simulation), and the framework tries a library of known neural network architectures in parallel and selects the best performing one.
This would let small companies get SOTA solutions for their problems, as in the sketch below.
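Here is a minimal sketch of that selection loop, assuming scikit-learn and a hypothetical architecture library; the names, the toy architectures, and the synthetic dataset are all illustrative assumptions, and a real framework would search a far larger space.

```python
# Sketch of "SOTA as a process": try a library of known architectures
# in parallel on the user's problem and keep the best performer.
from concurrent.futures import ProcessPoolExecutor
from sklearn.datasets import make_classification  # stands in for simulated examples
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical "library of known architectures" -- here just MLP layer layouts.
ARCHITECTURE_LIBRARY = {
    "shallow_wide": (256,),
    "two_layer": (128, 64),
    "deep_narrow": (64, 64, 64, 64),
}

def evaluate(name_and_layers):
    """Train one candidate architecture and return its validation accuracy."""
    name, layers = name_and_layers
    X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=layers, max_iter=300, random_state=0)
    model.fit(X_tr, y_tr)
    return name, model.score(X_va, y_va)

if __name__ == "__main__":
    # Try every architecture in parallel, then select the best performer.
    with ProcessPoolExecutor() as pool:
        results = dict(pool.map(evaluate, ARCHITECTURE_LIBRARY.items()))
    best = max(results, key=results.get)
    print(f"best architecture: {best} (val accuracy {results[best]:.3f})")
```

The point isn’t the specific candidates; it’s that once evaluation is automated, adding a new published architecture to the library is just one more dictionary entry.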
The current general models suggest that may not even be necessary.