By “not that much gain”, I mean that no amount of algorithmic improvement would change the sublinear scaling of intelligence as a function of compute.
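To make “sublinear” concrete, here is one illustrative functional form (my own gloss, not a claim about the exact curve):

```latex
% Illustrative only: capability I as a function of compute C growing
% sublinearly, e.g. a power law with exponent below one, or a logarithm.
I(C) \propto C^{\alpha}, \quad 0 < \alpha < 1,
\qquad \text{or} \qquad
I(C) \propto \log C
```

Under either form, multiplying compute by a constant buys less than a proportional gain in capability, and algorithmic improvements that only shift the constant factor leave that shape alone.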
Until AI is at least as sample-efficient and energy-efficient at learning as humans are, significant algorithmic gains remain possible. Those gains may not be reachable under the current deep-learning paradigm, but we know they are possible under some paradigm, since evolution has already accomplished them blindly.
I do share your skepticism that something like an LLM alone could recursively improve itself quickly. Assuming FOOM does happen, my model of how it would happen has deep learning as only part of the answer. It sits inside the recursive loop, but mostly as a general heuristic module, much as the neural net in a chess engine is only one piece of the puzzle; you still need a fast search algorithm that uses those heuristics efficiently.
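In case it helps to see the shape of that division of labor, here is a minimal toy sketch: a depth-limited alpha-beta search that only consults a learned evaluation at the leaves. The `heuristic_value` and `legal_moves` functions are hypothetical stand-ins (a trained net and a real move generator, respectively); this is an illustration of the search-plus-heuristic architecture, not a real engine.

```python
def heuristic_value(state):
    # Stand-in for a neural-net evaluation: score a state from the
    # current player's perspective. Here, just a toy sum of the state.
    return sum(state)

def legal_moves(state):
    # Toy move generator: flip the sign of any one element of the state tuple.
    return [state[:i] + (-state[i],) + state[i + 1:] for i in range(len(state))]

def alphabeta(state, depth, alpha=float("-inf"), beta=float("inf")):
    """Depth-limited negamax with alpha-beta pruning: the heuristic is only
    consulted at the leaves, while the search does the heavy lifting above it."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return heuristic_value(state)
    best = float("-inf")
    for move in moves:
        # Negamax convention: the opponent's best score, negated.
        score = -alphabeta(move, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # prune lines the opponent would never allow
            break
    return best

if __name__ == "__main__":
    print(alphabeta(state=(1, -2, 3), depth=3))
```

The point of the sketch is just that the learned component supplies cheap judgment at the leaves, while a fast, dumb search supplies the lookahead that turns those judgments into strong play.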
This seems highly unlikely.